h1. MAIN
Summaries are ordered latest --> oldest.
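The failure lists below are compiled by checking each Pulpito run linked in a summary and matching its failed jobs against known tracker issues. As a rough, hedged illustration of that triage step (not part of any official workflow), the sketch below pulls a run's job statuses over HTTP; the paddles base URL, the /runs/{run_name}/jobs/ endpoint shape, and the job field names are all assumptions and may not match the actual Sepia lab setup.

<pre><code class="python">
# Hypothetical triage helper: list failed/dead jobs for a teuthology run so they
# can be matched against known tracker issues by hand.
# ASSUMPTIONS: the base URL below is a placeholder, and the /runs/{run_name}/jobs/
# endpoint and the "status"/"job_id"/"failure_reason" field names are assumed,
# not a documented API.
import requests

PADDLES_URL = "http://paddles.example.com"  # placeholder, not a real endpoint


def failed_jobs(run_name: str):
    """Yield (job_id, failure_reason) for jobs that did not pass."""
    resp = requests.get(f"{PADDLES_URL}/runs/{run_name}/jobs/", timeout=30)
    resp.raise_for_status()
    for job in resp.json():
        if job.get("status") in ("fail", "dead"):
            yield job.get("job_id"), job.get("failure_reason", "")


if __name__ == "__main__":
    run = "yuriw-2023-12-11_23:27:14-rados-wip-yuri8-testing-2023-12-11-1101-distro-default-smithi"
    for job_id, reason in failed_jobs(run):
        print(job_id, "-", reason)
</code></pre>

If the endpoint differs, the same information can simply be read off the Pulpito pages linked in each summary.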

h3. https://trello.com/c/k9RNURve/1897-wip-yuri3-testing-2023-12-07-0727-old-wip-yuri3-testing-wip-neorados-learning-from-experience

https://pulpito.ceph.com/cbodley-2023-12-15_15:26:38-rados-wip-yuri3-testing-2023-12-07-0727-distro-default-smithi/

7493218 - https://tracker.ceph.com/issues/63783 (known issue)
7493242
7493228
———————
7493219 - https://tracker.ceph.com/issues/61774 (known issue)
7493224
7493230
7493236
7493237
7493243
———————
7493221 - https://tracker.ceph.com/issues/59142 (known issue)
———————
7493223 - https://tracker.ceph.com/issues/59196 (known issue)
———————
7493229 - https://tracker.ceph.com/issues/63748 (known issue)
7493244
———————
7493232 - https://tracker.ceph.com/issues/63785 (known issue)

h3. https://trello.com/c/ZgZGBobg/1902-wip-yuri8-testing-2023-12-11-1101-old-wip-yuri8-testing-2023-12-06-1425

https://pulpito.ceph.com/yuriw-2023-12-11_23:27:14-rados-wip-yuri8-testing-2023-12-11-1101-distro-default-smithi/

Unrelated failures:
1. https://tracker.ceph.com/issues/63748
2. https://tracker.ceph.com/issues/61774
3. https://tracker.ceph.com/issues/63783
4. https://tracker.ceph.com/issues/59380
5. https://tracker.ceph.com/issues/59142
6. https://tracker.ceph.com/issues/59196

Details:
1. ['7487680', '7487836'] - qa/workunits/post-file.sh: Couldn't create directory
2. ['7487541', '7487610', '7487750', '7487751', '7487683', '7487822'] - centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons
3. ['7487677', '7487816', '7487532'] - mgr: 'ceph rgw realm bootstrap' fails with KeyError: 'realm_name'
4. ['7487804', '7487806', '7487647'] - rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)"
5. ['7487724', '7487568'] - mgr/dashboard: fix e2e for dashboard v3
6. ['7487579', '7487739'] - cephtest bash -c ceph_test_lazy_omap_stats

h3. https://trello.com/c/OeXUIG19/1898-wip-yuri2-testing-2023-12-06-1239-old-wip-yuri2-testing-2023-12-04-0902

https://pulpito.ceph.com/?branch=wip-yuri2-testing-2023-12-06-1239

Failures, unrelated:
    1. https://tracker.ceph.com/issues/63748
    2. https://tracker.ceph.com/issues/56788
    3. https://tracker.ceph.com/issues/61774
    4. https://tracker.ceph.com/issues/63783
    5. https://tracker.ceph.com/issues/63784 -- new tracker
    6. https://tracker.ceph.com/issues/59196
    7. https://tracker.ceph.com/issues/63785 -- new tracker
    8. https://tracker.ceph.com/issues/59380
    9. https://tracker.ceph.com/issues/63788
    10. https://tracker.ceph.com/issues/63778
    11. https://tracker.ceph.com/issues/63789

Details:
    1. qa/workunits/post-file.sh: Couldn't create directory - Infrastructure
    2. crash: void KernelDevice::_aio_thread(): abort - Ceph - Bluestore
    3. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
    4. mgr: 'ceph rgw realm bootstrap' fails with KeyError: 'realm_name' - Ceph - RGW
    5. qa/standalone/mon/mkfs.sh: 'mkfs/a' already exists and is not empty: monitor may already exist - Ceph - RADOS
    6. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
    7. cephadm/test_adoption.sh: service not found - Ceph - Orchestrator
    8. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RADOS
    9. Cephadm tests fail from "nothing provides lua-devel needed by ceph-2:19.0.0-44.g2d90d175.el8.x86_64" - Ceph - RGW
    10. Upgrade: failed due to an unexpected exception - Ceph - Orchestrator
    11. LibRadosIoEC test failure - Ceph - RADOS

h3. https://trello.com/c/wK3QrkV2/1901-wip-yuri-testing-2023-12-06-1240

https://pulpito.ceph.com/?branch=wip-yuri-testing-2023-12-06-1240

Failures, unrelated:
    1. https://tracker.ceph.com/issues/63783
    2. https://tracker.ceph.com/issues/61774
    3. https://tracker.ceph.com/issues/59196
    4. https://tracker.ceph.com/issues/59380
    5. https://tracker.ceph.com/issues/59142
    6. https://tracker.ceph.com/issues/63748
    7. https://tracker.ceph.com/issues/63785 -- new tracker
    8. https://tracker.ceph.com/issues/63786 -- new tracker

Details:
    1. mgr: 'ceph rgw realm bootstrap' fails with KeyError: 'realm_name' - Ceph - RGW
    2. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
    3. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
    4. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RADOS
    5. mgr/dashboard: fix e2e for dashboard v3 - Ceph - Mgr - Dashboard
    6. qa/workunits/post-file.sh: Couldn't create directory - Infrastructure
    7. cephadm/test_adoption.sh: service not found - Ceph - Orchestrator
    8. rados_cls_all: TestCls2PCQueue.MultiProducer hangs - Ceph - RGW

h3. https://trello.com/c/tUEWtLfq/1892-wip-yuri7-testing-2023-11-17-0819

https://pulpito.ceph.com/yuriw-2023-11-26_21:30:23-rados-wip-yuri7-testing-2023-11-17-0819-distro-default-smithi/

Failures being analyzed:
1. '7467376' - ?

Failures, unrelated:
1. ['7467380','7467367','7467378'] - timeout on test_cls_2pc_queue ->> https://tracker.ceph.com/issues/62449
2. ['7467370','7467374','7467387','7467375'] - Valgrind: mon (Leak_StillReachable) ->> https://tracker.ceph.com/issues/61774
3. ['7467388','7467373'] - ceph_test_lazy_omap_stats ->> https://tracker.ceph.com/issues/59196
4. ['7467385','7467372'] - failure in e2e-spec ->> https://tracker.ceph.com/issues/48406
5. ['7467371'] - unrelated; test infra issues.
6. ['7467379','7467369'] - RGW 'realm_name' ->> https://tracker.ceph.com/issues/63499
7. ['7467377','7467366'] - appear to be a disk space issue.
8. ['7467381'] - test environment

h3. https://pulpito.ceph.com/yuriw-2023-10-24_00:11:03-rados-wip-yuri2-testing-2023-10-23-0917-distro-default-smithi/

Failures, unrelated:
1. https://tracker.ceph.com/issues/59196
2. https://tracker.ceph.com/issues/61774
3. https://tracker.ceph.com/issues/49961
4. https://tracker.ceph.com/issues/62449
5. https://tracker.ceph.com/issues/48406
6. https://tracker.ceph.com/issues/63121
7. https://tracker.ceph.com/issues/47838
8. https://tracker.ceph.com/issues/62777
9. https://tracker.ceph.com/issues/54372
10. https://tracker.ceph.com/issues/63500 -- new tracker

Details:
1. ['7435483','7435733'] - ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
2. ['7435516','7435570','7435765','7435905'] - centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
3. ['7435520'] - scrub/osd-recovery-scrub.sh: TEST_recovery_scrub_1 failed - Ceph - RADOS
4. ['7435568','7435875'] - test/cls_2pc_queue: TestCls2PCQueue.MultiProducer and TestCls2PCQueue.AsyncConsumer failure
5. ['7435713','7436026'] - cephadm/test_dashboard_e2e.sh: error when testing orchestrator/04-osds.e2e-spec.ts - Ceph - Mgr - Dashboard
6. ['7435741'] - objectstore/KeyValueDB/KVTest.RocksDB_estimate_size tests failing - Ceph - RADOS
7. ['7435855'] - mon/test_mon_osdmap_prune.sh: first_pinned != trim_to - Ceph - RADOS
8. ['7435767','7435636','7435971'] - rados/valgrind-leaks: expected valgrind issues and found none - Ceph - RADOS
9. ['7435999'] - No module named 'tasks' - Infrastructure
10. ['7435995'] - No module named 'tasks.nvme_loop' - Infrastructure

h3. https://pulpito.ceph.com/yuriw-2023-10-30_15:34:36-rados-wip-yuri10-testing-2023-10-27-0804-distro-default-smithi/

Failures, unrelated:
1. https://tracker.ceph.com/issues/59380
2. https://tracker.ceph.com/issues/59142
3. https://tracker.ceph.com/issues/61774
4. https://tracker.ceph.com/issues/53767
5. https://tracker.ceph.com/issues/59196
6. https://tracker.ceph.com/issues/62535

Details:
1. ['7441165', '7441319'] - rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
2. ['7441240', '7441396'] - mgr/dashboard: fix e2e for dashboard v3
3. ['7441266', '7441336', '7441129', '7441267', '7441201'] - centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
4. ['7441167', '7441321'] - qa/workunits/cls/test_cls_2pc_queue.sh: killing an osd during thrashing causes timeout - Ceph - RADOS
5. ['7441250', '7441096'] - ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
6. ['7441374'] - cephadm: wait for healthy state times out because cephadm agent is down - Ceph - Orchestrator

h3. https://pulpito.ceph.com/yuriw-2023-10-27_19:03:28-rados-wip-yuri8-testing-2023-10-27-0825-distro-default-smithi/

Failures, unrelated:
1. https://tracker.ceph.com/issues/59196
2. https://tracker.ceph.com/issues/61774
3. https://tracker.ceph.com/issues/62449
4. https://tracker.ceph.com/issues/59192
5. https://tracker.ceph.com/issues/48406
6. https://tracker.ceph.com/issues/62776

Details:
1. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
2. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
3. test/cls_2pc_queue: TestCls2PCQueue.MultiProducer and TestCls2PCQueue.AsyncConsumer failure
4. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
5. cephadm/test_dashboard_e2e.sh: error when testing orchestrator/04-osds.e2e-spec.ts - Ceph - Mgr - Dashboard
6. rados/basic: 2 pools do not have an application enabled - Ceph - RADOS

h3. Not in Trello but still a rados suite

https://pulpito.ceph.com/ksirivad-2023-10-13_01:58:36-rados-wip-ksirivad-fix-63183-distro-default-smithi/

Failures, unrelated:
7423809 - https://tracker.ceph.com/issues/63198 -- new tracker
7423821, 7423972 - https://tracker.ceph.com/issues/59142 - mgr/dashboard: fix e2e for dashboard v3 - Ceph - Mgr - Dashboard
7423826, 7423979 - https://tracker.ceph.com/issues/59196 - ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
7423849 - https://tracker.ceph.com/issues/61774 - centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
7423875 - Failed to get package from Shaman (infra failure)
7423896, 7424047 - https://tracker.ceph.com/issues/62449 - test/cls_2pc_queue: TestCls2PCQueue.MultiProducer and TestCls2PCQueue.AsyncConsumer failure
7423913 - https://tracker.ceph.com/issues/62119 - timeout on reserving replica
7423918 - https://tracker.ceph.com/issues/61787 - Command "ceph --cluster ceph osd dump --format=json" times out when killing OSD
7423980 - https://tracker.ceph.com/issues/62557 - Teuthology test failure due to "MDS_CLIENTS_LAGGY" warning
7423982 - https://tracker.ceph.com/issues/63121 - KeyValueDB/KVTest.RocksDB_estimate_size tests failing
7423984 - https://tracker.ceph.com/issues/61774 - centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons
7423995 - https://tracker.ceph.com/issues/62777 - rados/valgrind-leaks: expected valgrind issues and found none
7424052 - https://tracker.ceph.com/issues/55809 - "Leak_IndirectlyLost" valgrind report on mon.c

h3. https://trello.com/c/PuCOnhYL/1841-wip-yuri5-testing-2023-10-02-1105-old-wip-yuri5-testing-2023-09-27-0959

https://pulpito.ceph.com/?branch=wip-yuri5-testing-2023-10-02-1105

Failures, unrelated:
    1. https://tracker.ceph.com/issues/52624
    2. https://tracker.ceph.com/issues/59380
    3. https://tracker.ceph.com/issues/61774
    4. https://tracker.ceph.com/issues/59196
    5. https://tracker.ceph.com/issues/62449
    6. https://tracker.ceph.com/issues/59142

Details:
    1. qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" - Ceph - RADOS
    2. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
    3. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
    4. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
    5. test/cls_2pc_queue: TestCls2PCQueue.MultiProducer and TestCls2PCQueue.AsyncConsumer failure - Ceph - RGW
    6. mgr/dashboard: fix e2e for dashboard v3 - Ceph - Mgr - Dashboard

h3. https://trello.com/c/tGZO7hNK/1832-wip-yuri6-testing-2023-08-28-1308

https://pulpito.ceph.com/lflores-2023-09-06_18:19:12-rados-wip-yuri6-testing-2023-08-28-1308-distro-default-smithi/

Failures, unrelated:
# https://tracker.ceph.com/issues/59142
# https://tracker.ceph.com/issues/59196
# https://tracker.ceph.com/issues/55347
# https://tracker.ceph.com/issues/62084
# https://tracker.ceph.com/issues/62975 -- new tracker
# https://tracker.ceph.com/issues/62449
# https://tracker.ceph.com/issues/61774
# https://tracker.ceph.com/issues/53345

Details:
# mgr/dashboard: fix e2e for dashboard v3 - Ceph - Mgr - Dashboard
# ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
# SELinux Denials during cephadm/workunits/test_cephadm
# task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' - Ceph - CephFS
# site-packages/paramiko/channel.py: OSError: Socket is closed - Infrastructure
# test/cls_2pc_queue: TestCls2PCQueue.MultiProducer and TestCls2PCQueue.AsyncConsumer failure - Ceph - RGW
# centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
# Test failure: test_daemon_restart (tasks.cephadm_cases.test_cli.TestCephadmCLI) - Ceph - Orchestrator

h3. Individual testing for https://github.com/ceph/ceph/pull/53344

http://pulpito.front.sepia.ceph.com/?branch=wip-lflores-testing-2-2023-09-08-1755

Failures, unrelated:
    1. https://tracker.ceph.com/issues/57628
    2. https://tracker.ceph.com/issues/62449
    3. https://tracker.ceph.com/issues/61774
    4. https://tracker.ceph.com/issues/53345
    5. https://tracker.ceph.com/issues/59142
    6. https://tracker.ceph.com/issues/59380

Details:
    1. osd: PeeringState.cc: FAILED ceph_assert(info.history.same_interval_since != 0) - Ceph - RADOS
    2. test/cls_2pc_queue: TestCls2PCQueue.MultiProducer and TestCls2PCQueue.AsyncConsumer failure - Ceph - RGW
    3. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
    4. Test failure: test_daemon_restart (tasks.cephadm_cases.test_cli.TestCephadmCLI) - Ceph - Orchestrator
    5. mgr/dashboard: fix e2e for dashboard v3 - Ceph - Mgr - Dashboard
    6. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW

h3. https://trello.com/c/JxeRJYse/1822-wip-yuri4-testing-2023-08-10-1739

https://pulpito.ceph.com/?branch=wip-yuri4-testing-2023-08-10-1739

Failures, unrelated:
    1. https://tracker.ceph.com/issues/62084
    2. https://tracker.ceph.com/issues/59192
    3. https://tracker.ceph.com/issues/61774
    4. https://tracker.ceph.com/issues/62449
    5. https://tracker.ceph.com/issues/62777
    6. https://tracker.ceph.com/issues/59196
    7. https://tracker.ceph.com/issues/59380
    8. https://tracker.ceph.com/issues/58946

Details:
    1. task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' - Ceph - CephFS
    2. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
    3. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
    4. test/cls_2pc_queue: TestCls2PCQueue.MultiProducer and TestCls2PCQueue.AsyncConsumer failure - Ceph - RGW
    5. rados/valgrind-leaks: expected valgrind issues and found none - Ceph - RADOS
    6. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
    7. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RADOS
    8. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator

h3. https://trello.com/c/Wt1KTViI/1830-wip-yuri-testing-2023-08-25-0809

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri-testing-2023-08-25-0809

Failures, unrelated:
    1. https://tracker.ceph.com/issues/62776
    2. https://tracker.ceph.com/issues/61774
    3. https://tracker.ceph.com/issues/62084
    4. https://tracker.ceph.com/issues/58946
    5. https://tracker.ceph.com/issues/59380
    6. https://tracker.ceph.com/issues/62449
    7. https://tracker.ceph.com/issues/59196

Details:
    1. rados/basic: 2 pools do not have an application enabled - Ceph - RADOS
    2. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
    3. task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' - Ceph - CephFS
    4. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
    5. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
    6. test/cls_2pc_queue: TestCls2PCQueue.MultiProducer and TestCls2PCQueue.AsyncConsumer failure - Ceph - RGW
    7. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS

h3. https://trello.com/c/Fllj7bVM/1833-wip-yuri8-testing-2023-08-28-1340

https://pulpito.ceph.com/?branch=wip-yuri8-testing-2023-08-28-1340

Failures, unrelated:
    1. https://tracker.ceph.com/issues/62728
    2. https://tracker.ceph.com/issues/62084
    3. https://tracker.ceph.com/issues/59142
    4. https://tracker.ceph.com/issues/59380
    5. https://tracker.ceph.com/issues/62449
    6. https://tracker.ceph.com/issues/61774
    7. https://tracker.ceph.com/issues/59196

Details:
    1. Host key for server xxx does not match - Infrastructure
    2. task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' - Ceph - CephFS
    3. mgr/dashboard: fix e2e for dashboard v3 - Ceph - Mgr - Dashboard
    4. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RADOS
    5. test/cls_2pc_queue: TestCls2PCQueue.MultiProducer and TestCls2PCQueue.AsyncConsumer failure - Ceph - RGW
    6. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
    7. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS

h3. https://trello.com/c/pMEWaauy/1825-wip-yuri3-testing-2023-08-15-0955

Failures, unrelated:
7370238 - https://tracker.ceph.com/issues/59196
7370242, 7370322 - https://tracker.ceph.com/issues/62482
7370245, 7370298 - https://tracker.ceph.com/issues/62084
7370250, 7370317 - https://tracker.ceph.com/issues/53767
7370274, 7370343 - https://tracker.ceph.com/issues/61519
7370263 - https://tracker.ceph.com/issues/62713 (New tracker)
7370285, 7370286 - https://tracker.ceph.com/issues/61774

DEAD jobs, unrelated:
7370249, 7370316 - https://tracker.ceph.com/issues/59380

h3. https://trello.com/c/i87i4GUf/1826-wip-yuri10-testing-2023-08-17-1444-old-wip-yuri10-testing-2023-08-15-1601-old-wip-yuri10-testing-2023-08-15-1009

Failures, unrelated:
7376678, 7376832 - https://tracker.ceph.com/issues/61786
7376687 - https://tracker.ceph.com/issues/59196
7376699 - https://tracker.ceph.com/issues/55347
7376739 - https://tracker.ceph.com/issues/61229
7376742, 7376887 - https://tracker.ceph.com/issues/62084
7376758, 7376914 - https://tracker.ceph.com/issues/62449
7376756, 7376912 - https://tracker.ceph.com/issues/59380
https://tracker.ceph.com/issues/61774

h3. https://trello.com/c/MEs20HAJ/1828-wip-yuri11-testing-2023-08-17-0823

https://pulpito.ceph.com/?branch=wip-yuri11-testing-2023-08-17-0823

Failures, unrelated:
    1. https://tracker.ceph.com/issues/47838
    2. https://tracker.ceph.com/issues/61774
    3. https://tracker.ceph.com/issues/59196
    4. https://tracker.ceph.com/issues/59380
    5. https://tracker.ceph.com/issues/58946
    6. https://tracker.ceph.com/issues/62535 -- new tracker
    7. https://tracker.ceph.com/issues/62084
    8. https://tracker.ceph.com/issues/62449

Details:
    1. mon/test_mon_osdmap_prune.sh: first_pinned != trim_to - Ceph - RADOS
    2. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
    3. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
    4. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
    5. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
    6. cephadm: wait for healthy state times out because cephadm agent is down - Ceph - Orchestrator
    7. task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' - Ceph - CephFS
    8. test/cls_2pc_queue: TestCls2PCQueue.MultiProducer and TestCls2PCQueue.AsyncConsumer failure - Ceph - RGW

h3. https://pulpito.ceph.com/?branch=wip-yuri7-testing-2023-07-20-0727

Failures, unrelated:
    1. https://tracker.ceph.com/issues/62084
    2. https://tracker.ceph.com/issues/61161
    3. https://tracker.ceph.com/issues/61774
    4. https://tracker.ceph.com/issues/62167
    5. https://tracker.ceph.com/issues/62212
    6. https://tracker.ceph.com/issues/58946
    7. https://tracker.ceph.com/issues/59196
    8. https://tracker.ceph.com/issues/59380

Details:
    1. task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' - Ceph - CephFS
    2. Creating volume group 'vg_nvme' failed - Ceph - Ceph-Ansible
    3. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
    4. FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get_missing().is_missing(soid) || (it_objects != recovery_state.get_pg_log().get_log().objects.end() && it_objects->second->op == pg_log_entry_t::LOST_REVERT)) - Ceph - RADOS
    5. cannot create directory ‘/home/ubuntu/cephtest/archive/audit’: No such file or directory - Tools - Teuthology
    6. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
    7. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
    8. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW

h3. WIP https://trello.com/c/1JlLNnGN/1812-wip-yuri5-testing-2023-07-24-0814

https://pulpito.ceph.com/?branch=wip-yuri5-testing-2023-07-24-0814

Failures:
    1. 7350969, 7351069 -> https://tracker.ceph.com/issues/59192
    2. 7350972, 7351023 -> https://tracker.ceph.com/issues/62073
    3. 7350977, 7351059 -> https://tracker.ceph.com/issues/53767 (to verify)
    4. 7350983, 7351016, 7351019 -> https://tracker.ceph.com/issues/61774 (valgrind)
    5. 7351004, 7351090 -> https://tracker.ceph.com/issues/61519
    6. 7351084 -> (selinux)

Dead:
    7. 7350974, 7351053 -> no relevant info.

Details:
    1. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
    2. AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' - Ceph - CephFS
    3. e2e - to verify
    4. valgrind issues; not analyzed further as they are irrelevant to this PR.
    5. mgr/dashboard: fix test_dashboard_e2e.sh failure - Ceph - Mgr - Dashboard
    6. selinux issue

h3. https://trello.com/c/GM3omhGs/1803-wip-yuri5-testing-2023-07-14-0757-old-wip-yuri5-testing-2023-07-12-1122

https://pulpito.ceph.com/?branch=wip-yuri5-testing-2023-07-14-0757

Failures:
    1. 7341711 -> https://tracker.ceph.com/issues/62073 -> AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
    2. 7341716 -> https://tracker.ceph.com/issues/58946 -> cephadm: KeyError: 'osdspec_affinity'
    3. 7341717 -> https://tracker.ceph.com/issues/62073 -> AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
    4. 7341720 -> https://tracker.ceph.com/issues/58946 -> cephadm: KeyError: 'osdspec_affinity'

Dead:
    1. 7341712 -> https://tracker.ceph.com/issues/59380 -> rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)"
    2. 7341719 -> https://tracker.ceph.com/issues/59380 -> rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)"

h3. https://trello.com/c/6no1SSqS/1762-wip-yuri2-testing-2023-07-17-0957-old-wip-yuri2-testing-2023-07-15-0802-old-wip-yuri2-testing-2023-07-13-1236-old-wip-yuri2-test

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri2-testing-2023-07-15-0802

Failures:
    1. https://tracker.ceph.com/issues/61774 -- valgrind leak in centos 9; not major outside of qa but needs to be suppressed
    2. https://tracker.ceph.com/issues/58946
    3. https://tracker.ceph.com/issues/59192
    4. https://tracker.ceph.com/issues/59380
    5. https://tracker.ceph.com/issues/61385
    6. https://tracker.ceph.com/issues/62073

Details:
    1. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
    2. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
    3. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
    4. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
    5. TEST_dump_scrub_schedule fails from "key is query_active: negation:0 # expected: true # in actual: false" - Ceph - RADOS
    6. AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' - Ceph - CephFS

h3. https://trello.com/c/Vbn2qwgo/1793-wip-yuri-testing-2023-07-14-1641-old-wip-yuri-testing-2023-07-12-1332-old-wip-yuri-testing-2023-07-12-1140-old-wip-yuri-testing

https://pulpito.ceph.com/?branch=wip-yuri-testing-2023-07-14-1641

Failures, unrelated:
    1. https://tracker.ceph.com/issues/58946
    2. https://tracker.ceph.com/issues/59380
    3. https://tracker.ceph.com/issues/62073

Details:
    1. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
    2. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
    3. AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' - Ceph - CephFS

h3. https://trello.com/c/7y6uj4bo/1800-wip-yuri6-testing-2023-07-10-0816

Failures:
7332480 -> https://tracker.ceph.com/issues/58946 -> cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
7332558 -> https://tracker.ceph.com/issues/57302 -> Test failure: test_create_access_permissions (tasks.mgr.dashboard.test_pool.PoolTest)
7332565 -> https://tracker.ceph.com/issues/57754 -> test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Infrastructure
7332612 -> https://tracker.ceph.com/issues/57754 -> test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Infrastructure
7332613 -> https://tracker.ceph.com/issues/55347 -> SELinux Denials during cephadm/workunits/test_cephadm
7332636 -> https://tracker.ceph.com/issues/58946 -> cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard

Dead:
7332357 -> https://tracker.ceph.com/issues/61164 -> Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
7332405 -> https://tracker.ceph.com/issues/59380 -> rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)"
7332559 -> https://tracker.ceph.com/issues/59380 -> rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)"

h3. https://trello.com/c/BHAY6fGO/1801-wip-yuri10-testing-2023-07-10-1345

https://pulpito.ceph.com/?branch=wip-yuri10-testing-2023-07-10-1345

Failures, unrelated:
    1. https://tracker.ceph.com/issues/58946
    2. https://tracker.ceph.com/issues/50242
    3. https://tracker.ceph.com/issues/55347
    4. https://tracker.ceph.com/issues/59380

Details:
    1. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
    2. test_repair_corrupted_obj fails with assert not inconsistent
    3. SELinux Denials during cephadm/workunits/test_cephadm - Infrastructure
    4. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)"

h3. https://trello.com/c/bn3IMWEB/1783-wip-yuri7-testing-2023-06-23-1022-old-wip-yuri7-testing-2023-06-12-1220-old-wip-yuri7-testing-2023-06-09-1607

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri7-testing-2023-06-23-1022

Failures, unrelated:
    1. https://tracker.ceph.com/issues/59380
    2. https://tracker.ceph.com/issues/57754
    3. https://tracker.ceph.com/issues/59196
    4. https://tracker.ceph.com/issues/59057
    5. https://tracker.ceph.com/issues/57754
    6. https://tracker.ceph.com/issues/55347
    7. https://tracker.ceph.com/issues/58946
    8. https://tracker.ceph.com/issues/61951 -- new tracker

Details:
    1. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
    2. test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Infrastructure
    3. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
    4. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' - Ceph - RADOS
    5. test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Infrastructure
    6. SELinux Denials during cephadm/workunits/test_cephadm - Infrastructure
    7. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
    8. cephadm: OrchestratorError: Must set public_network config option or specify a CIDR network, ceph addrvec, or plain IP - Ceph - Orchestrator

h3. https://trello.com/c/BVxlgRvT/1782-wip-yuri5-testing-2023-06-28-1515-old-wip-yuri5-testing-2023-06-21-0750-old-wip-yuri5-testing-2023-06-16-1012-old-wip-yuri5-test

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri5-testing-2023-06-28-1515

Failures, unrelated:
    1. https://tracker.ceph.com/issues/58946
    2. https://tracker.ceph.com/issues/59380
    3. https://tracker.ceph.com/issues/55347
    4. https://tracker.ceph.com/issues/57754
    5. https://tracker.ceph.com/issues/59057
    6. https://tracker.ceph.com/issues/61897
    7. https://tracker.ceph.com/issues/61940 -- new tracker

Details:
    1. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
    2. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
    3. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
    4. test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Ceph - RADOS
    5. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' - Ceph - RADOS
    6. qa: rados:mgr fails with MDS_CLIENTS_LAGGY - Ceph - CephFS
    7. "test_cephfs_mirror" fails from stray cephadm daemon - Ceph - Orchestrator

h3. 2023 Jun 23

https://pulpito.ceph.com/rishabh-2023-06-21_22:15:54-rados-wip-rishabh-improvements-authmon-distro-default-smithi/
https://pulpito.ceph.com/rishabh-2023-06-22_10:54:41-rados-wip-rishabh-improvements-authmon-distro-default-smithi/

* https://tracker.ceph.com/issues/58946
  cephadm: KeyError: 'osdspec_affinity'
* https://tracker.ceph.com/issues/57754
  test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist
* https://tracker.ceph.com/issues/61784
  test_envlibrados_for_rocksdb.sh: '~ubuntu-toolchain-r' user or team does not exist
* https://tracker.ceph.com/issues/61832
  osd-scrub-dump.sh: ERROR: Extra scrubs after test completion...not expected

h3. https://trello.com/c/CcKXkHLe/1789-wip-yuri3-testing-2023-06-19-1518

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri3-testing-2023-06-19-1518

Failures, unrelated:
    1. https://tracker.ceph.com/issues/59057
    2. https://tracker.ceph.com/issues/59380
    3. https://tracker.ceph.com/issues/58946

Details:
    1. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' - Ceph - RADOS
    2. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
    3. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard

h3. https://trello.com/c/Zbp7w1yE/1770-wip-yuri10-testing-2023-06-02-1406-old-wip-yuri10-testing-2023-05-30-1244

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri10-testing-2023-06-02-1406

Failures, unrelated:
    1. https://tracker.ceph.com/issues/46877
    2. https://tracker.ceph.com/issues/59057
    3. https://tracker.ceph.com/issues/61225
    4. https://tracker.ceph.com/issues/55347
    5. https://tracker.ceph.com/issues/59380

Details:
    1. mon_clock_skew_check: expected MON_CLOCK_SKEW but got none - Ceph - RADOS
    2. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
    3. TestClsRbd.mirror_snapshot failure - Ceph - RBD
    4. SELinux Denials during cephadm/workunits/test_cephadm - Infrastructure
    5. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RADOS

h3. https://trello.com/c/lyHYQLgL/1771-wip-yuri11-testing-2023-05-30-1325

https://pulpito.ceph.com/?branch=wip-yuri11-testing-2023-05-30-1325

Failures, unrelated:
    1. https://tracker.ceph.com/issues/59678
    2. https://tracker.ceph.com/issues/55347
    3. https://tracker.ceph.com/issues/59380
    4. https://tracker.ceph.com/issues/61519
    5. https://tracker.ceph.com/issues/61225

Details:
    1. rados/test_envlibrados_for_rocksdb.sh: Error: Unable to find a match: snappy-devel - Infrastructure
    2. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
    3. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RADOS
    4. mgr/dashboard: fix test_dashboard_e2e.sh failure - Ceph - Mgr - Dashboard
    5. TestClsRbd.mirror_snapshot failure - Ceph - RBD

h3. https://trello.com/c/8FwhCHxc/1774-wip-yuri-testing-2023-06-01-0746

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri-testing-2023-06-01-0746

Failures, unrelated:
    1. https://tracker.ceph.com/issues/59380
    2. https://tracker.ceph.com/issues/61578 -- new tracker
    3. https://tracker.ceph.com/issues/59192
    4. https://tracker.ceph.com/issues/61225
    5. https://tracker.ceph.com/issues/59057

Details:
    1. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RADOS
    2. test_dashboard_e2e.sh: Can't run because no spec files were found - Ceph - Mgr - Dashboard
    3. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
    4. TestClsRbd.mirror_snapshot failure - Ceph - RBD
    5. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS

h3. https://trello.com/c/g4OvqEZx/1766-wip-yuri-testing-2023-05-26-1204

https://pulpito.ceph.com/?branch=wip-yuri-testing-2023-05-26-1204

Failures, unrelated:
1. https://tracker.ceph.com/issues/61386
2. https://tracker.ceph.com/issues/61497 -- new tracker
3. https://tracker.ceph.com/issues/61225
4. https://tracker.ceph.com/issues/58560
5. https://tracker.ceph.com/issues/59057
6. https://tracker.ceph.com/issues/55347

Details:
1. TEST_recovery_scrub_2: TEST FAILED WITH 1 ERRORS - Ceph - RADOS
2. ERROR:gpu_memory_buffer_support_x11.cc(44)] dri3 extension not supported - Dashboard
3. TestClsRbd.mirror_snapshot failure - Ceph - RBD
4. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
5. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
6. SELinux Denials during cephadm/workunits/test_cephadm - Infrastructure

The remaining 4 Rook test failures are due to the repo http://apt.kubernetes.io not being signed:
  - The repository 'https://apt.kubernetes.io kubernetes-xenial InRelease' is not signed.

h3. https://trello.com/c/1LQJnuRh/1759-wip-yuri8-testing-2023-05-23-0802

https://pulpito.ceph.com/?branch=wip-yuri8-testing-2023-05-23-0802

Failures, unrelated:
    1. https://tracker.ceph.com/issues/61225
    2. https://tracker.ceph.com/issues/61402
    3. https://tracker.ceph.com/issues/59678
    4. https://tracker.ceph.com/issues/59057
    5. https://tracker.ceph.com/issues/59380

Details:
    1. TestClsRbd.mirror_snapshot failure - Ceph - RBD
    2. test_dashboard_e2e.sh: AssertionError: Timed out retrying after 120000ms: Expected to find content: '/^smithi160$/' within the selector: 'datatable-body-row datatable-body-cell:nth-child(2)' but never did. - Ceph - Mgr - Dashboard
    3. rados/test_envlibrados_for_rocksdb.sh: Error: Unable to find a match: snappy-devel - Ceph - RADOS
    4. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
    5. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RADOS

h3. https://trello.com/c/J04nAx3y/1756-wip-yuri11-testing-2023-05-19-0836

https://pulpito.ceph.com/?branch=wip-yuri11-testing-2023-05-19-0836

Failures, unrelated:
    1. https://tracker.ceph.com/issues/58585
    2. https://tracker.ceph.com/issues/61256
    3. https://tracker.ceph.com/issues/59380
    4. https://tracker.ceph.com/issues/58946
    5. https://tracker.ceph.com/issues/61225
    6. https://tracker.ceph.com/issues/59057
    7. https://tracker.ceph.com/issues/58560
    8. https://tracker.ceph.com/issues/57755
    9. https://tracker.ceph.com/issues/55347

Details:
    1. rook: failed to pull kubelet image - Ceph - Orchestrator
    2. Upgrade test fails after prometheus_receiver connection is refused - Ceph - Orchestrator
    3. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
    4. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
    5. TestClsRbd.mirror_snapshot failure - Ceph - RBD
    6. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
    7. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
    8. task/test_orch_cli: test_cephfs_mirror times out - Ceph - Orchestrator
    9. SELinux Denials during cephadm/workunits/test_cephadm - Infrastructure

h3. https://trello.com/c/nvyHvlZ4/1745-wip-yuri8-testing-2023-05-10-1402

There was an RGW multisite test failure, but it turned out to be related to an unmerged PR in the batch, which was dropped.

https://pulpito.ceph.com/?branch=wip-yuri8-testing-2023-05-10-1402
https://pulpito.ceph.com/?branch=wip-yuri8-testing-2023-05-18-1232

Failures, unrelated:
    1. https://tracker.ceph.com/issues/58560
    2. https://tracker.ceph.com/issues/59196
    3. https://tracker.ceph.com/issues/61225
    4. https://tracker.ceph.com/issues/58585
    5. https://tracker.ceph.com/issues/58946
    6. https://tracker.ceph.com/issues/61256 -- new tracker
    7. https://tracker.ceph.com/issues/59380
    8. https://tracker.ceph.com/issues/59333

Details:
    1. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
    2. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
    3. TestClsRbd.mirror_snapshot failure - Ceph - RBD
    4. rook: failed to pull kubelet image - Ceph - Orchestrator
    5. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
    6. Upgrade test fails after prometheus_receiver connection is refused - Ceph - Orchestrator
    7. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
    8. PgScrubber: timeout on reserving replicas - Ceph - RADOS

h3. https://trello.com/c/lM1xjBe0/1744-wip-yuri-testing-2023-05-10-0917

https://pulpito.ceph.com/?branch=wip-yuri-testing-2023-05-10-0917

Failures, unrelated:
    1. https://tracker.ceph.com/issues/61225
    2. https://tracker.ceph.com/issues/58585
    3. https://tracker.ceph.com/issues/44889
    4. https://tracker.ceph.com/issues/59193
    5. https://tracker.ceph.com/issues/58946
    6. https://tracker.ceph.com/issues/58560
    7. https://tracker.ceph.com/issues/49287
    8. https://tracker.ceph.com/issues/55347
    9. https://tracker.ceph.com/issues/59380
    10. https://tracker.ceph.com/issues/59192
    11. https://tracker.ceph.com/issues/61261 -- new tracker
    12. https://tracker.ceph.com/issues/61262 -- new tracker
    13. https://tracker.ceph.com/issues/46877

Details:
    1. TestClsRbd.mirror_snapshot failure - Ceph - RBD
    2. rook: failed to pull kubelet image - Ceph - Orchestrator
    3. workunit does not respect suite_branch when it comes to checkout sha1 on remote host - Tools - Teuthology
    4. "Failed to fetch package version from https://shaman.ceph.com/api/search ..." - Infrastructure
    5. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
    6. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
    7. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
    8. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
    9. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
    10. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
    11. test_cephadm.sh: missing container image causes job to fail - Ceph - Orchestrator
    12. Cephadm task times out when waiting for osds to come up - Ceph - Orchestrator
    13. mon_clock_skew_check: expected MON_CLOCK_SKEW but got none - Ceph - RADOS

h3. https://trello.com/c/1EFSeXDn/1752-wip-yuri10-testing-2023-05-16-1243

https://pulpito.ceph.com/?branch=wip-yuri10-testing-2023-05-16-1243

There was also one failure related to http://archive.ubuntu.com/ubuntu that seems transient.

Failures, unrelated:
    1. https://tracker.ceph.com/issues/58585
    2. https://tracker.ceph.com/issues/44889
    3. https://tracker.ceph.com/issues/59678
    4. https://tracker.ceph.com/issues/55347
    5. https://tracker.ceph.com/issues/58946
    6. https://tracker.ceph.com/issues/61225 -- new tracker
    7. https://tracker.ceph.com/issues/59380
    8. https://tracker.ceph.com/issues/49888
    9. https://tracker.ceph.com/issues/59192

Details:
    1. rook: failed to pull kubelet image - Ceph - Orchestrator
    2. workunit does not respect suite_branch when it comes to checkout sha1 on remote host - Tools - Teuthology
    3. rados/test_envlibrados_for_rocksdb.sh: Error: Unable to find a match: snappy-devel - Ceph - RADOS
    4. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
    5. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
    6. TestClsRbd.mirror_snapshot failure - Ceph - RBD
    7. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
    8. rados/singleton: radosbench.py: teuthology.exceptions.MaxWhileTries: reached maximum tries (3650) after waiting for 21900 seconds - Ceph - RADOS
    9. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS

h3. https://trello.com/c/AjBYBGYC/1738-wip-yuri7-testing-2023-04-19-1343-old-wip-yuri7-testing-2023-04-19-0721-old-wip-yuri7-testing-2023-04-18-0818

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri7-testing-2023-04-19-1343

Failures, unrelated:
    1. https://tracker.ceph.com/issues/57755
    2. https://tracker.ceph.com/issues/58946
    3. https://tracker.ceph.com/issues/49888
    4. https://tracker.ceph.com/issues/59380
    5. https://tracker.ceph.com/issues/57754
    6. https://tracker.ceph.com/issues/55347
    7. https://tracker.ceph.com/issues/49287

Details:
    1. task/test_orch_cli: test_cephfs_mirror times out - Ceph - Orchestrator
    2. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
    3. rados/singleton: radosbench.py: teuthology.exceptions.MaxWhileTries: reached maximum tries (3650) after waiting for 21900 seconds - Ceph - RADOS
    4. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
    5. test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Ceph - RADOS
    6. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
    7. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator

h3. https://trello.com/c/YN2r7OyK/1740-wip-yuri3-testing-2023-04-25-1147

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2023-04-25-1147

Failures, unrelated:
    1. https://tracker.ceph.com/issues/59049
    2. https://tracker.ceph.com/issues/59192
    3. https://tracker.ceph.com/issues/59335
    4. https://tracker.ceph.com/issues/59380
    5. https://tracker.ceph.com/issues/58946
    6. https://tracker.ceph.com/issues/50371
    7. https://tracker.ceph.com/issues/59057
    8. https://tracker.ceph.com/issues/53345

Details:
    1. WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate event - Ceph - RADOS
    2. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
    3. Found coredumps on smithi related to sqlite3 - Ceph - Cephsqlite
    4. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RADOS
    5. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
    6. Segmentation fault (core dumped) ceph_test_rados_api_watch_notify_pp - Ceph - RADOS
    7. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
    8. Test failure: test_daemon_restart (tasks.cephadm_cases.test_cli.TestCephadmCLI) - Ceph - Orchestrator

h3. https://trello.com/c/YhSdHR96/1728-wip-yuri2-testing-2023-03-30-0826

https://pulpito.ceph.com/?branch=wip-yuri2-testing-2023-03-30-0826

Failures, unrelated:
    1. https://tracker.ceph.com/issues/51964
    2. https://tracker.ceph.com/issues/58946
    3. https://tracker.ceph.com/issues/58758
    4. https://tracker.ceph.com/issues/58585
    5. https://tracker.ceph.com/issues/59380 -- new tracker
    6. https://tracker.ceph.com/issues/59080
    7. https://tracker.ceph.com/issues/59057
    8. https://tracker.ceph.com/issues/59196

Details:
    1. qa: test_cephfs_mirror_restart_sync_on_blocklist failure - Ceph - CephFS
    2. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
    3. qa: fix testcase 'test_cluster_set_user_config_with_non_existing_clusterid' - Ceph - CephFS
    4. rook: failed to pull kubelet image - Ceph - Orchestrator
    5. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RADOS
    6. mclock-config.sh: TEST_profile_disallow_builtin_params_modify fails when $res == $opt_val_new - Ceph - RADOS
    7. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
    8. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS

h3. https://trello.com/c/wCN5TQud/1729-wip-yuri4-testing-2023-03-31-1237

https://pulpito.ceph.com/?branch=wip-yuri4-testing-2023-03-31-1237
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2023-04-07-1825

Failures, unrelated:
    1. https://tracker.ceph.com/issues/17945
    2. https://tracker.ceph.com/issues/59049
    3. https://tracker.ceph.com/issues/59196
    4. https://tracker.ceph.com/issues/56393
    5. https://tracker.ceph.com/issues/58946
    6. https://tracker.ceph.com/issues/49287
    7. https://tracker.ceph.com/issues/55347
    8. https://tracker.ceph.com/issues/59057
    9. https://tracker.ceph.com/issues/59380

Details:
    1. ceph_test_rados_api_tier: failed to decode hitset in HitSetWrite test - Ceph - RADOS
    2. WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate event - Ceph - RADOS
    3. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
    4. failed to complete snap trimming before timeout - Ceph - RADOS
    5. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
    6. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
    7. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
    8. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
    9. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RADOS

h3. https://trello.com/c/8Xlz4rIH/1727-wip-yuri11-testing-2023-03-31-1108-old-wip-yuri11-testing-2023-03-28-0950

https://pulpito.ceph.com/?branch=wip-yuri11-testing-2023-03-28-0950
https://pulpito.ceph.com/?branch=wip-yuri11-testing-2023-03-31-1108

Failures, unrelated:
    1. https://tracker.ceph.com/issues/58585
    2. https://tracker.ceph.com/issues/58946
    3. https://tracker.ceph.com/issues/58265
    4. https://tracker.ceph.com/issues/59271
    5. https://tracker.ceph.com/issues/59057
    6. https://tracker.ceph.com/issues/59333
    7. https://tracker.ceph.com/issues/59334
    8. https://tracker.ceph.com/issues/59335

Details:
    1. rook: failed to pull kubelet image - Ceph - Orchestrator
    2. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
    3. TestClsRbd.group_snap_list_max_read failure during upgrade/parallel tests - Ceph - RBD
    4. mon: FAILED ceph_assert(osdmon()->is_writeable()) - Ceph - RADOS
    5. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
    6. PgScrubber: timeout on reserving replicas - Ceph - RADOS
    7. test_pool_create_with_quotas: Timed out after 60s and 0 retries - Ceph - Mgr - Dashboard
    8. Found coredumps on smithi related to sqlite3 - Ceph - Cephsqlite
929
h3. https://trello.com/c/yauI7omb/1726-wip-yuri7-testing-2023-03-29-1100-old-wip-yuri7-testing-2023-03-28-0942
930
931
https://pulpito.ceph.com/?branch=wip-yuri7-testing-2023-03-29-1100
932
933
Failures, unrelated:
934
    1. https://tracker.ceph.com/issues/59192
935
    2. https://tracker.ceph.com/issues/58585
936
    3. https://tracker.ceph.com/issues/59057
937
    4. https://tracker.ceph.com/issues/58946
938
    5. https://tracker.ceph.com/issues/55347
939
    6. https://tracker.ceph.com/issues/59196
940
    7. https://tracker.ceph.com/issues/47838
941
    8. https://tracker.ceph.com/issues/59080
942
943
Details:
944
    1. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
945
    2. rook: failed to pull kubelet image - Ceph - Orchestrator
946
    3. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
947
    4. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
948
    5. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
949
    6. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
950
    7. mon/test_mon_osdmap_prune.sh: first_pinned != trim_to - Ceph - RADOS
951
    8. mclock-config.sh: TEST_profile_disallow_builtin_params_modify fails when $res == $opt_val_new - Ceph - RADOS
952
953
h3. https://trello.com/c/epwSlEHP/1722-wip-yuri4-testing-2023-03-25-0714-old-wip-yuri4-testing-2023-03-24-0910
954
955
https://pulpito.ceph.com/?branch=wip-yuri4-testing-2023-03-25-0714
956
957
Failures, unrelated:
958
    1. https://tracker.ceph.com/issues/58946
959
    2. https://tracker.ceph.com/issues/59196
960
    3. https://tracker.ceph.com/issues/59271
961
    4. https://tracker.ceph.com/issues/58585
962
    5. https://tracker.ceph.com/issues/51964
963
    6. https://tracker.ceph.com/issues/58560
964
    7. https://tracker.ceph.com/issues/59192
965
966
Details:
967
    1. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
968
    2. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
969
    3. mon: FAILED ceph_assert(osdmon()->is_writeable()) - Ceph - RADOS
970
    4. rook: failed to pull kubelet image - Ceph - Orchestrator
971
    5. qa: test_cephfs_mirror_restart_sync_on_blocklist failure - Ceph - CephFS
972
    6. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Ceph - RADOS
973
    7. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
974
975
h3. https://trello.com/c/PEo71l9g/1720-wip-aclamk-bs-elastic-shared-blob-save-25032023-a-old-wip-lflores-testing-2023-03-22-2113
976
977
http://pulpito.front.sepia.ceph.com/?branch=wip-aclamk-bs-elastic-shared-blob-save-25.03.2023-a
978
979
Failures, unrelated:
980
    1. https://tracker.ceph.com/issues/59058
981
    2. https://tracker.ceph.com/issues/56034
982
    3. https://tracker.ceph.com/issues/58585
983
    4. https://tracker.ceph.com/issues/59172
984
    5. https://tracker.ceph.com/issues/56192
985
    6. https://tracker.ceph.com/issues/49287
986
    7. https://tracker.ceph.com/issues/58758
987
    8. https://tracker.ceph.com/issues/58946
988
    9. https://tracker.ceph.com/issues/59057
989
    10. https://tracker.ceph.com/issues/59192
990
991
Details:
992
    1. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
993
    2. qa/standalone/osd/divergent-priors.sh fails in test TEST_divergent_3() - Ceph - RADOS
994
    3. rook: failed to pull kubelet image - Ceph - Orchestrator
995
    4. test_pool_min_size: AssertionError: wait_for_clean: failed before timeout expired due to down PGs - Ceph - RADOS
996
    5. crash: virtual Monitor::~Monitor(): assert(session_map.sessions.empty()) - Ceph - RADOS
997
    6. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
998
    7. qa: fix testcase 'test_cluster_set_user_config_with_non_existing_clusterid' - Ceph - CephFS
999
    8. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
1000
    9. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
1001
    10. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
1002
1003
h3. https://trello.com/c/Qa8vTuf8/1717-wip-yuri4-testing-2023-03-15-1418
1004
1005
https://pulpito.ceph.com/?branch=wip-yuri4-testing-2023-03-15-1418
1006
1007
Failures, unrelated:
1008
    1. https://tracker.ceph.com/issues/58946
1009
    2. https://tracker.ceph.com/issues/56393
1010
    3. https://tracker.ceph.com/issues/59123
1011
    4. https://tracker.ceph.com/issues/58585
1012
    5. https://tracker.ceph.com/issues/58560
1013
    6. https://tracker.ceph.com/issues/59127
1014
1015
Details:
1016
    1. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
1017
    2. thrash-erasure-code-big: failed to complete snap trimming before timeout - Ceph - RADOS
1018
    3. Timeout opening channel - Infrastructure
1019
    4. rook: failed to pull kubelet image - Ceph - Orchestrator
1020
    5. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
1021
    6. Jobs that normally complete much sooner last almost 12 hours - Infrastructure
1022
1023
h3. https://trello.com/c/fo5GZ0YC/1712-wip-yuri7-testing-2023-03-10-0830
1024
1025
https://pulpito.ceph.com/?branch=wip-yuri7-testing-2023-03-10-0830
1026
1027
Failures, unrelated:
1028
    1. https://tracker.ceph.com/issues/58946
1029
    2. https://tracker.ceph.com/issues/59079
1030
    3. https://tracker.ceph.com/issues/59080
1031
    4. https://tracker.ceph.com/issues/58585
1032
    5. https://tracker.ceph.com/issues/59057
1033
1034
Details:
1035
    1. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
1036
    2. AssertionError: timeout expired in wait_for_all_osds_up - Ceph - RADOS
1037
    3. mclock-config.sh: TEST_profile_disallow_builtin_params_modify fails when $res == $opt_val_new - Ceph - RADOS
1038
    4. rook: failed to pull kubelet image - Ceph - Orchestrator
1039
    5. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
1040
1041
h3. https://trello.com/c/EbLKJDPm/1685-wip-yuri11-testing-2023-03-08-1220-old-wip-yuri11-testing-2023-03-01-1424-old-wip-yuri11-testing-2023-02-20-1329-old-wip-yuri11
1042
1043
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri11-testing-2023-03-08-1220
1044
1045
Failures, unrelated:
1046
    1. https://tracker.ceph.com/issues/58585
1047
    2. https://tracker.ceph.com/issues/58560
1048
    3. https://tracker.ceph.com/issues/58946
1049
    4. https://tracker.ceph.com/issues/49287
1050
    5. https://tracker.ceph.com/issues/57755
1051
    6. https://tracker.ceph.com/issues/52316
1052
    7. https://tracker.ceph.com/issues/58496
1053
1054
Details:
1055
    1. rook: failed to pull kubelet image - Ceph - Orchestrator
1056
    2. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
1057
    3. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
1058
    4. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
1059
    5. task/test_orch_cli: test_cephfs_mirror times out - Ceph - Orchestrator
1060
    6. qa/tasks/mon_thrash.py: _do_thrash AssertionError len(s['quorum']) == len(mons) - Ceph - RADOS
1061
    7. osd/PeeringState: FAILED ceph_assert(!acting_recovery_backfill.empty()) - Ceph - RADOS
1062
1063
h3. https://trello.com/c/u5ydxGCS/1698-wip-yuri7-testing-2023-02-27-1105
1064
1065
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri7-testing-2023-02-27-1105
1066
1067
Failures, unrelated:
1068
    1. https://tracker.ceph.com/issues/58585
1069
    2. https://tracker.ceph.com/issues/58475
1070
    3. https://tracker.ceph.com/issues/57754
1071
    4. https://tracker.ceph.com/issues/50786
1072
    5. https://tracker.ceph.com/issues/49287
1073
1074
Details:
1075
    1. rook: failed to pull kubelet image - Ceph - Orchestrator
1076
    2. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard
1077
    3. test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Ceph - RADOS
1078
    4. UnicodeDecodeError: 'utf8' codec can't decode byte - Ceph - RADOS
1079
    5. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
1080
1081
h3. https://trello.com/c/hIlO2MJn/1706-wip-yuri8-testing-2023-03-07-1527
1082
1083
https://pulpito.ceph.com/?branch=wip-yuri8-testing-2023-03-07-1527
1084
1085
Failures, unrelated:
1086
    1. https://tracker.ceph.com/issues/49287
1087
    2. https://tracker.ceph.com/issues/58585
1088
    3. https://tracker.ceph.com/issues/58560
1089
    4. https://tracker.ceph.com/issues/58946
1090
    5. https://tracker.ceph.com/issues/51964
1091
1092
Details:
1093
    1. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
1094
    2. rook: failed to pull kubelet image - Ceph - Orchestrator
1095
    3. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
1096
    4. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
1097
    5. qa: test_cephfs_mirror_restart_sync_on_blocklist failure - Ceph - CephFS
1098
1099
h3. https://trello.com/c/bLUA7Wf5/1705-wip-yuri4-testing-2023-03-08-1234-old-wip-yuri4-testing-2023-03-07-1351
1100
1101
https://pulpito.ceph.com/?branch=wip-yuri4-testing-2023-03-08-1234
1102
1103
Failures, unrelated:
1104
    1. https://tracker.ceph.com/issues/58946
1105
    2. https://tracker.ceph.com/issues/58560
1106
    3. https://tracker.ceph.com/issues/58585
1107
    4. https://tracker.ceph.com/issues/55347
1108
    5. https://tracker.ceph.com/issues/49287
1109
1110
Details:
1111
    1. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
1112
    2. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
1113
    3. rook: failed to pull kubelet image - Ceph - Orchestrator
1114
    4. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
1115
    5. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
1116
1117
h3. Main baseline 2/24/23
1118
1119
https://pulpito.ceph.com/?sha1=f9d812a56231a14fafcdfb339f87d3d9a9e6e55f
1120
1121
Failures:
1122
    1. https://tracker.ceph.com/issues/58560
1123
    2. https://tracker.ceph.com/issues/57771
1124
    3. https://tracker.ceph.com/issues/58585
1125
    4. https://tracker.ceph.com/issues/58475
1126
    5. https://tracker.ceph.com/issues/58758
1127
    6. https://tracker.ceph.com/issues/58797
1128
    7. https://tracker.ceph.com/issues/58893 -- new tracker
1129
    8. https://tracker.ceph.com/issues/49428
1130
    9. https://tracker.ceph.com/issues/55347
1131
    10. https://tracker.ceph.com/issues/58800
1132
1133
Details:
1134
    1. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
1135
    2. orch/cephadm suite: 'TESTDIR=/home/ubuntu/cephtest bash -s' fails - Ceph - Orchestrator
1136
    3. rook: failed to pull kubelet image - Ceph - Orchestrator
1137
    4. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard
1138
    5. qa: fix testcase 'test_cluster_set_user_config_with_non_existing_clusterid' - Ceph - CephFS
1139
    6. scrub/osd-scrub-dump.sh: TEST_recover_unexpected fails from "ERROR: Unexpectedly low amount of scrub reservations seen during test" - Ceph - RADOS
1140
    7. test_map_discontinuity: AssertionError: wait_for_clean: failed before timeout expired - Ceph - RADOS
1141
    8. ceph_test_rados_api_snapshots fails with "rados_mon_command osd pool create failed with error -22" - Ceph - RADOS
1142
    9. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
1143
    10. ansible: Failed to update apt cache: unknown reason - Infrastructure - Sepia
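A baseline run like the one above is what lets the per-branch summaries call a failure "unrelated": if the same tracker also shows up against plain main, it is not pinned on the PRs under test. As a rough illustration of that comparison (not part of any QA tooling; the helper name and the hard-coded ID sets are just examples lifted from this baseline and the wip-yuri10 run that follows), a few lines of Python are enough:

<pre><code class="python">
# Sketch only: compare the tracker IDs hit by a test-branch run against the
# ones already hit by the main baseline, and flag which failures are "known"
# (seen in the baseline, hence unrelated to the PRs under test) and which need
# a closer look. The ID sets are sample data taken from the two summaries
# around this note, not live query results.
BASELINE_FAILURES = {58560, 57771, 58585, 58475, 58758, 58797, 58893, 49428, 55347, 58800}
BRANCH_FAILURES = {57754, 58475, 58585, 58797}  # wip-yuri10-testing-2023-02-22-0848

def triage(branch: set, baseline: set) -> None:
    known = sorted(branch & baseline)
    new = sorted(branch - baseline)
    print("seen in baseline, likely unrelated:")
    for issue in known:
        print(f"  https://tracker.ceph.com/issues/{issue}")
    print("not in this baseline, needs a closer look:")
    for issue in new:
        print(f"  https://tracker.ceph.com/issues/{issue}")

triage(BRANCH_FAILURES, BASELINE_FAILURES)
</code></pre>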
1144
1145
h3. https://trello.com/c/t4cQVOvQ/1695-wip-yuri10-testing-2023-02-22-0848
1146
1147
http://pulpito.front.sepia.ceph.com:80/yuriw-2023-02-22_21:31:50-rados-wip-yuri10-testing-2023-02-22-0848-distro-default-smithi
1148
http://pulpito.front.sepia.ceph.com:80/yuriw-2023-02-23_16:14:52-rados-wip-yuri10-testing-2023-02-22-0848-distro-default-smithi
1149
1150
Failures, unrelated:
1151
    1. https://tracker.ceph.com/issues/57754
1152
    2. https://tracker.ceph.com/issues/58475
1153
    3. https://tracker.ceph.com/issues/58585
1154
    4. https://tracker.ceph.com/issues/58797
1155
1156
Details:
1157
    1. test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Ceph - RADOS
1158
    2. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard
1159
    3. rook: failed to pull kubelet image - Ceph - Orchestrator
1160
    4. scrub/osd-scrub-dump.sh: TEST_recover_unexpected fails from "ERROR: Unexpectedly low amount of scrub reservations seen during test" - Ceph - RADOS
1161
1162
h3. https://trello.com/c/hrTt8qIn/1693-wip-yuri6-testing-2023-02-24-0805-old-wip-yuri6-testing-2023-02-21-1406
1163
1164
https://pulpito.ceph.com/?branch=wip-yuri6-testing-2023-02-24-0805
1165
1166
Failures, unrelated:
1167
    1. https://tracker.ceph.com/issues/58585
1168
    2. https://tracker.ceph.com/issues/58560
1169
    3. https://tracker.ceph.com/issues/58797
1170
    4. https://tracker.ceph.com/issues/58744
1171
    5. https://tracker.ceph.com/issues/58475
1172
1173
Details:
1174
    1. rook: failed to pull kubelet image - Ceph - Orchestrator
1175
    2. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
1176
    3. scrub/osd-scrub-dump.sh: TEST_recover_unexpected fails from "ERROR: Unexpectedly low amount of scrub reservations seen during test" - Ceph - RADOS
1177
    4. qa: intermittent nfs test failures at nfs cluster creation - Ceph - CephFS
1178
    5. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard
1179
1180
1181
1182
h3. https://trello.com/c/gleu2p6U/1689-wip-yuri-testing-2023-02-22-2037-old-wip-yuri-testing-2023-02-16-0839
1183
1184
https://pulpito.ceph.com/yuriw-2023-02-16_22:44:43-rados-wip-yuri-testing-2023-02-16-0839-distro-default-smithi
1185
https://pulpito.ceph.com/lflores-2023-02-20_21:22:20-rados-wip-yuri-testing-2023-02-16-0839-distro-default-smithi
1186
https://pulpito.ceph.com/yuriw-2023-02-23_16:42:54-rados-wip-yuri-testing-2023-02-22-2037-distro-default-smithi
1187
https://pulpito.ceph.com/lflores-2023-02-23_17:54:36-rados-wip-yuri-testing-2023-02-22-2037-distro-default-smithi
1188
1189
Failures, unrelated:
1190
    1. https://tracker.ceph.com/issues/58585
1191
    2. https://tracker.ceph.com/issues/58560
1192
    3. https://tracker.ceph.com/issues/58496
1193
    4. https://tracker.ceph.com/issues/49961
1194
    5. https://tracker.ceph.com/issues/58861
1195
    6. https://tracker.ceph.com/issues/58797
1196
    7. https://tracker.ceph.com/issues/49428
1197
1198
Details:
1199
    1. rook: failed to pull kubelet image - Ceph - Orchestrator
1200
    2. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
1201
    3. osd/PeeringState: FAILED ceph_assert(!acting_recovery_backfill.empty()) - Ceph - RADOS
1202
    4. scrub/osd-recovery-scrub.sh: TEST_recovery_scrub_1 failed - Ceph - RADOS
1203
    5. OSError: cephadm config file not found - Ceph - Orchestrator
1204
    6. scrub/osd-scrub-dump.sh: TEST_recover_unexpected fails from "ERROR: Unexpectedly low amount of scrub reservations seen during test" - Ceph - RADOS
1205
    7. ceph_test_rados_api_snapshots fails with "rados_mon_command osd pool create failed with error -22" - Ceph - RADOS
1206
1207
h3. https://trello.com/c/FzMz7O3S/1683-wip-yuri10-testing-2023-02-15-1245-old-wip-yuri10-testing-2023-02-06-0846-old-wip-yuri10-testing-2023-02-06-0809
1208
1209
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri10-testing-2023-02-15-1245
1210
1211
Failures, unrelated:
1212
    1. https://tracker.ceph.com/issues/58585
1213
    2. https://tracker.ceph.com/issues/58475
1214
    3. https://tracker.ceph.com/issues/58797 -- new tracker; seen in the main baseline, therefore unrelated to the PRs in this batch
1215
1216
Details:
1217
    1. rook: failed to pull kubelet image - Ceph - Orchestrator
1218
    2. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard
1219
    3. scrub/osd-scrub-dump.sh: TEST_recover_unexpected fails from "ERROR: Unexpectedly low amount of scrub reservations seen during test" - Ceph - RADOS
1220
1221
h3. https://trello.com/c/buZUPZx0/1680-wip-yuri2-testing-2023-02-08-1429-old-wip-yuri2-testing-2023-02-06-1140-old-wip-yuri2-testing-2023-01-26-1532
1222
1223
https://pulpito.ceph.com/?branch=wip-yuri2-testing-2023-01-26-1532
1224
1225
Failures, unrelated:
1226
    1. https://tracker.ceph.com/issues/58496 -- fix in progress
1227
    2. https://tracker.ceph.com/issues/58585
1228
    3. https://tracker.ceph.com/issues/58475
1229
    4. https://tracker.ceph.com/issues/57754
1230
    5. https://tracker.ceph.com/issues/49287
1231
    6. https://tracker.ceph.com/issues/57731
1232
    7. https://tracker.ceph.com/issues/54829
1233
    8. https://tracker.ceph.com/issues/52221
1234
1235
Details:
1236
    1. osd/PeeringState: FAILED ceph_assert(!acting_recovery_backfill.empty()) - Ceph - RADOS
1237
    2. rook: failed to pull kubelet image - Ceph - Orchestrator
1238
    3. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard
1239
    4. test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Infrastructure
1240
    5. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
1241
    6. Problem: package container-selinux conflicts with udica < 0.2.6-1 provided by udica-0.2.4-1 - Infrastructure
1242
    7. crash: void OSDMap::check_health(ceph::common::CephContext*, health_check_map_t*) const: assert(num_down_in_osds <= num_in_osds) - Ceph - RADOS
1243
    8. crash: void OSD::handle_osd_map(MOSDMap*): assert(p != added_maps_bl.end()) - Ceph - RADOS
1244
1245
h3. https://trello.com/c/GA6hud1j/1674-wip-yuri-testing-2023-01-23-0926-old-wip-yuri-testing-2023-01-12-0816-old-wip-yuri-testing-2023-01-11-0818-old-wip-yuri-testing
1246
1247
https://pulpito.ceph.com/?branch=wip-yuri-testing-2023-01-23-0926
1248
1249
Failures, unrelated:
1250
    1. https://tracker.ceph.com/issues/58587 -- new tracker
1251
    2. https://tracker.ceph.com/issues/58585
1252
    3. https://tracker.ceph.com/issues/58098 -- fix merged to latest main
1253
    4. https://tracker.ceph.com/issues/58256 -- fix merged to latest main
1254
    5. https://tracker.ceph.com/issues/57900
1255
    6. https://tracker.ceph.com/issues/58475
1256
    7. https://tracker.ceph.com/issues/58560
1257
1258
Details:
1259
    1. test_dedup_tool.sh: test_dedup_object fails when pool 'dedup_chunk_pool' does not exist - Ceph - RADOS
1260
    2. rook: failed to pull kubelet image - Ceph - Orchestrator
1261
    3. qa/workunits/rados/test_crash.sh: crashes are never posted - Ceph - RADOS
1262
    4. ObjectStore/StoreTestSpecificAUSize.SpilloverTest/2: Expected: (logger->get(l_bluefs_slow_used_bytes)) >= (16 * 1024 * 1024), actual: 0 vs 16777216 - Ceph - RADOS
1263
    5. mon/crush_ops.sh: mons out of quorum - Ceph - RADOS
1264
    6. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard
1265
    7. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Ceph
1266
1267
h3. https://trello.com/c/583LyrTc/1667-wip-yuri2-testing-2023-01-23-0928-old-wip-yuri2-testing-2023-01-12-0816-old-wip-yuri2-testing-2023-01-11-0819-old-wip-yuri2-test
1268
1269
https://pulpito.ceph.com/?branch=wip-yuri2-testing-2023-01-23-0928
1270
1271
Failures, unrelated:
1272
    1. https://tracker.ceph.com/issues/58585 -- new tracker
1273
    2. https://tracker.ceph.com/issues/58256 -- fix merged to latest main
1274
    3. https://tracker.ceph.com/issues/58475
1275
    4. https://tracker.ceph.com/issues/57754 -- closed
1276
    5. https://tracker.ceph.com/issues/57546 -- fix is in testing
1277
1278
Details:
1279
    1. rook: failed to pull kubelet image - Ceph - Orchestrator
1280
    2. ObjectStore/StoreTestSpecificAUSize.SpilloverTest/2: Expected: (logger->get(l_bluefs_slow_used_bytes)) >= (16 * 1024 * 1024), actual: 0 vs 16777216 - Ceph - RADOS
1281
    3. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard
1282
    4. test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Ceph - RADOS
1283
    5. rados/thrash-erasure-code: wait_for_recovery timeout due to "active+clean+remapped+laggy" pgs - Ceph - RADOS
1284
1285
h3. main baseline review -- https://pulpito.ceph.com/yuriw-2023-01-12_20:11:41-rados-main-distro-default-smithi/
1286
1287
Failures:
1288
    1. https://tracker.ceph.com/issues/58098 -- fix is in testing; held up by issues with the RHEL satellite
1289
    2. https://tracker.ceph.com/issues/58258
1290
    3. https://tracker.ceph.com/issues/56000
1291
    4. https://tracker.ceph.com/issues/57632 -- fix is awaiting a review from the core team
1292
    5. https://tracker.ceph.com/issues/58475 -- new tracker
1293
    6. https://tracker.ceph.com/issues/57731
1294
    7. https://tracker.ceph.com/issues/58476 -- new tracker
1295
    8. https://tracker.ceph.com/issues/57303
1296
    9. https://tracker.ceph.com/issues/58256 -- fix is in testing
1297
    10. https://tracker.ceph.com/issues/57546 -- fix is in testing
1298
    11. https://tracker.ceph.com/issues/58496 -- new tracker
1299
1300
Details:
1301
    1. qa/workunits/rados/test_crash.sh: crashes are never posted - Ceph - RADOS
1302
    2. rook: kubelet fails from connection refused - Ceph - Orchestrator
1303
    3. task/test_nfs: ERROR: Daemon not found: mds.a.smithi060.ujwxef. See `cephadm ls` - Ceph - Orchestrator
1304
    4. test_envlibrados_for_rocksdb: free(): invalid pointer - Ceph - RADOS
1305
    5. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard
1306
    6. Problem: package container-selinux conflicts with udica < 0.2.6-1 provided by udica-0.2.4-1 - Infrastructure
1307
    7. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator
1308
    8. qa/workunits/post-file.sh: postfile@drop.ceph.com: Permission denied - Ceph
1309
    9. ObjectStore/StoreTestSpecificAUSize.SpilloverTest/2: Expected: (logger->get(l_bluefs_slow_used_bytes)) >= (16 * 1024 * 1024), actual: 0 vs 16777216 - Ceph - RADOS
1310
    10. rados/thrash-erasure-code: wait_for_recovery timeout due to "active+clean+remapped+laggy" pgs - Ceph - RADOS
1311
    11. osd/PeeringState: FAILED ceph_assert(!acting_recovery_backfill.empty()) - Ceph - RADOS
1312
1313
h3. https://trello.com/c/Mi1gMNFu/1662-wip-yuri-testing-2022-12-06-1204
1314
1315
https://pulpito.ceph.com/?branch=wip-yuri-testing-2022-12-06-1204
1316
https://pulpito.ceph.com/?branch=wip-yuri-testing-2022-12-12-1136
1317
1318
Failures, unrelated:
1319
    1. https://tracker.ceph.com/issues/58096
1320
    2. https://tracker.ceph.com/issues/52321
1321
    3. https://tracker.ceph.com/issues/58173
1322
    4. https://tracker.ceph.com/issues/52129
1323
    5. https://tracker.ceph.com/issues/58097
1324
    6. https://tracker.ceph.com/issues/57546
1325
    7. https://tracker.ceph.com/issues/58098
1326
    8. https://tracker.ceph.com/issues/57731
1327
    9. https://tracker.ceph.com/issues/55606
1328
    10. https://tracker.ceph.com/issues/58256
1329
    11. https://tracker.ceph.com/issues/58258
1330
1331
Details:
1332
    1. test_cluster_set_reset_user_config: NFS mount fails due to missing ceph directory - Ceph - Orchestrator
1333
    2. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
1334
    3. api_aio_pp: failure on LibRadosAio.SimplePoolEIOFlag and LibRadosAio.PoolEIOFlag - Ceph - RADOS
1335
    4. LibRadosWatchNotify.AioWatchDelete failed - Ceph - RADOS
1336
    5. qa/workunits/post-file.sh: kex_exchange_identification: read: Connection reset by peer - Ceph - RADOS
1337
    6. rook: ensure CRDs are installed first - Ceph - Orchestrator
1338
    7. qa/workunits/rados/test_crash.sh: crashes are never posted - Ceph - RADOS
1339
    8. Problem: package container-selinux conflicts with udica < 0.2.6-1 provided by udica-0.2.4-1 - Infrastructure
1340
    9. [ERR] Unhandled exception from module 'devicehealth' while running on mgr.y: unknown - Ceph - CephSqlite
1341
    10. ObjectStore/StoreTestSpecificAUSize.SpilloverTest/2: Expected: (logger->get(l_bluefs_slow_used_bytes)) >= (16 * 1024 * 1024), actual: 0 vs 16777216 - Ceph - Bluestore
1342
    11. rook: kubelet fails from connection refused - Ceph - Orchestrator
1343
1344
h3. https://trello.com/c/8pqA5fF3/1663-wip-yuri3-testing-2022-12-06-1211
1345
1346
https://pulpito.ceph.com/?branch=wip-yuri3-testing-2022-12-06-1211
1347
1348
Failures, unrelated:
1349
    1. https://tracker.ceph.com/issues/57311
1350
    2. https://tracker.ceph.com/issues/58098
1351
    3. https://tracker.ceph.com/issues/58096
1352
    4. https://tracker.ceph.com/issues/52321
1353
    5. https://tracker.ceph.com/issues/57731
1354
    6. https://tracker.ceph.com/issues/57546
1355
1356
Details:
1357
    1. rook: ensure CRDs are installed first - Ceph - Orchestrator
1358
    2. qa/workunits/rados/test_crash.sh: crashes are never posted - Ceph - RADOS
1359
    3. test_cluster_set_reset_user_config: NFS mount fails due to missing ceph directory - Ceph - Orchestrator
1360
    4. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
1361
    5. Problem: package container-selinux conflicts with udica < 0.2.6-1 provided by udica-0.2.4-1 - Infrastructure
1362
    6. rados/thrash-erasure-code: wait_for_recovery timeout due to "active+clean+remapped+laggy" pgs - Ceph - RADOS
1363
1364
h3. https://trello.com/c/QrtToRWE/1643-wip-yuri6-testing-2022-11-23-1348-old-wip-yuri6-testing-2022-10-05-0912-old-wip-yuri6-testing-2022-09-29-0908
1365
1366
https://pulpito.ceph.com/?branch=wip-yuri6-testing-2022-11-23-1348
1367
1368
Failures, unrelated:
1369
    1. https://tracker.ceph.com/issues/58098
1370
    2. https://tracker.ceph.com/issues/58096
1371
    3. https://tracker.ceph.com/issues/57311
1372
    4. https://tracker.ceph.com/issues/58097
1373
    5. https://tracker.ceph.com/issues/57731
1374
    6. https://tracker.ceph.com/issues/52321
1375
    7. https://tracker.ceph.com/issues/51945
1376
1377
Details:
1378
    1. qa/workunits/rados/test_crash.sh: crashes are never posted - Ceph - RADOS
1379
    2. test_cluster_set_reset_user_config: NFS mount fails due to missing ceph directory - Ceph - Orchestrator
1380
    3. rook: ensure CRDs are installed first - Ceph - Orchestrator
1381
    4. qa/workunits/post-file.sh: kex_exchange_identification: read: Connection reset by peer - Infrastructure
1382
    5. Problem: package container-selinux conflicts with udica < 0.2.6-1 provided by udica-0.2.4-1 - Infrastructure
1383
    6. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
1384
    7. qa/workunits/mon/caps.sh: Error: Expected return 13, got 0 - Ceph - RADOS
1385
1386
h3. https://trello.com/c/hdiNA6Zq/1651-wip-yuri7-testing-2022-10-17-0814
1387
1388
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri7-testing-2022-10-17-0814
1389
1390
Failures, unrelated:
1391
    1. https://tracker.ceph.com/issues/57311
1392
    2. https://tracker.ceph.com/issues/52321
1393
    3. https://tracker.ceph.com/issues/52657
1394
    4. https://tracker.ceph.com/issues/57935
1395
    5. https://tracker.ceph.com/issues/58097
1396
    6. https://tracker.ceph.com/issues/58096
1397
    7. https://tracker.ceph.com/issues/58098
1398
    8. https://tracker.ceph.com/issues/57731
1399
    9. https://tracker.ceph.com/issues/58098
1400
1401
Details:
1402
    1. rook: ensure CRDs are installed first - Ceph - Orchestrator
1403
    2. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
1404
    2. rados/thrash-erasure-code: wait_for_recovery timeout due to "active+clean+remapped+laggy" pgs - Ceph - RADOS
1405
    3. MOSDPGLog::encode_payload(uint64_t): Assertion `HAVE_FEATURE(features, SERVER_NAUTILUS)' - Ceph - RADOS
1406
    4. all test jobs get stuck at "Running task ansible.cephlab..." - Infrastructure - Sepia
1407
    5. qa/workunits/post-file.sh: kex_exchange_identification: read: Connection reset by peer - Infrastructure
1408
    6. test_cluster_set_reset_user_config: NFS mount fails due to missing ceph directory - Ceph - Orchestrator
1409
    7. qa/workunits/rados/test_crash.sh: workunit checks for crashes too early - Ceph - RADOS
1410
    8. Problem: package container-selinux conflicts with udica < 0.2.6-1 provided by udica-0.2.4-1 - Infrastructure
1411
    9. qa/workunits/rados/test_crash.sh: workunit checks for crashes too early - Ceph - RADOS
1412
1413
h3. https://trello.com/c/h2f7yhfz/1657-wip-yuri4-testing-2022-11-10-1051
1414
1415
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2022-11-10-1051
1416
1417
Failures, unrelated:
1418
    1. https://tracker.ceph.com/issues/57311
1419
    2. https://tracker.ceph.com/issues/58097
1420
    3. https://tracker.ceph.com/issues/55347
1421
    4. https://tracker.ceph.com/issues/57731
1422
    5. https://tracker.ceph.com/issues/57790
1423
    6. https://tracker.ceph.com/issues/52321
1424
    7. https://tracker.ceph.com/issues/58046
1425
    8. https://tracker.ceph.com/issues/54372
1426
    9. https://tracker.ceph.com/issues/56000
1427
    10. https://tracker.ceph.com/issues/58098
1428
1429
Details:
1430
1431
    1. rook: ensure CRDs are installed first - Ceph - Orchestrator
1432
    2. qa/workunits/post-file.sh: kex_exchange_identification: read: Connection reset by peer - Infrastructure
1433
    3. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
1434
    4. Problem: package container-selinux conflicts with udica < 0.2.6-1 provided by udica-0.2.4-1 - Infrastructure
1435
    5. Unable to locate package libcephfs1 - Infrastructure
1436
    6. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
1437
    7. qa/workunits/rados/test_librados_build.sh: specify redirect in curl command - Ceph - RADOS
1438
    8. No module named 'tasks' - Infrastructure
1439
    9. task/test_nfs: ERROR: Daemon not found: mds.a.smithi060.ujwxef. See `cephadm ls` - Ceph - Orchestrator
1440
    10. qa/workunits/rados/test_crash.sh: workunit checks for crashes too early - Ceph - RADOS
1441
1442
h3. https://trello.com/c/zahAzjLl/1652-wip-yuri10-testing-2022-11-22-1711-old-wip-yuri10-testing-2022-11-10-1137-old-wip-yuri10-testing-2022-10-19-0810-old-wip-yuri10
1443
1444
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri10-testing-2022-10-19-0810
1445
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri10-testing-2022-11-22-1711
1446
1447
Failures:
1448
    1. https://tracker.ceph.com/issues/52321
1449
    2. https://tracker.ceph.com/issues/58096 -- new tracker; unrelated to PR in this test batch
1450
    3. https://tracker.ceph.com/issues/57311
1451
    4. https://tracker.ceph.com/issues/58097 -- new tracker; unrelated to PR in this test batch
1452
    5. https://tracker.ceph.com/issues/58098 -- new tracker; unrelated to PR in this test batch
1453
    6. https://tracker.ceph.com/issues/57731
1454
    7. https://tracker.ceph.com/issues/57546
1455
    8. https://tracker.ceph.com/issues/52129
1456
    9. https://tracker.ceph.com/issues/57754
1457
    10. https://tracker.ceph.com/issues/57755
1458
    11. https://tracker.ceph.com/issues/58099 -- new tracker; flagged, but ultimately deemed unrelated by PR author
1459
1460
Details:
1461
    1. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
1462
    2. test_cluster_set_reset_user_config: NFS mount fails due to missing ceph directory - Ceph - Orchestrator
1463
    3. rook: ensure CRDs are installed first - Ceph - Orchestrator
1464
    4. qa/workunits/post-file.sh: Connection reset by peer - Ceph - RADOS
1465
    5. qa/workunits/rados/test_crash.sh: workunit checks for crashes too early - Ceph - RADOS
1466
    6. Problem: package container-selinux conflicts with udica < 0.2.6-1 provided by udica-0.2.4-1 - Infrastructure
1467
    7. rados/thrash-erasure-code: wait_for_recovery timeout due to "active+clean+remapped+laggy" pgs - Ceph - RADOS
1468
    8. LibRadosWatchNotify.AioWatchDelete failed - Ceph - RADOS
1469
    9. test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Infrastructure
1470
    10. task/test_orch_cli: test_cephfs_mirror times out - Ceph - Orchestrator
1471
    11. ObjectStore/StoreTestSpecificAUSize.SyntheticMatrixPreferDeferred/2 fails - Ceph - Bluestore
1472 28 Laura Flores
1473
h3. https://trello.com/c/Jm1c0Z5d/1631-wip-yuri4-testing-2022-09-27-1405-old-wip-yuri4-testing-2022-09-20-0734-old-wip-yuri4-testing-2022-09-14-0617-old-wip-yuri4-test
1474
1475
http://pulpito.front.sepia.ceph.com/?branch=wip-all-kickoff-r
1476
1477
Failures, unrelated:
1478
    1. https://tracker.ceph.com/issues/57386
1479
    2. https://tracker.ceph.com/issues/52321
1480
    3. https://tracker.ceph.com/issues/57731
1481
    4. https://tracker.ceph.com/issues/57311
1482
    5. https://tracker.ceph.com/issues/50042
1483
    6. https://tracker.ceph.com/issues/57546
1484
1485
Details:
1486
    1. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
1487
    2. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
1488
    3. Problem: package container-selinux conflicts with udica < 0.2.6-1 provided by udica-0.2.4-1 - Infrastructure
1489
    4. rook: ensure CRDs are installed first - Ceph - Orchestrator
1490
    5. rados/test.sh: api_watch_notify failures - Ceph - RADOS
1491
    6. rados/thrash-erasure-code: wait_for_recovery timeout due to "active+clean+remapped+laggy" pgs - Ceph - RADOS
1492
1493
h3. https://trello.com/c/K7im36rK/1632-wip-yuri7-testing-2022-09-27-0743-old-wip-yuri7-testing-2022-09-26-0828-old-wip-yuri7-testing-2022-09-07-0820
1494
1495
http://pulpito.front.sepia.ceph.com/?branch=wip-lflores-testing
1496
1497
Failures, unrelated:
1498
    1. https://tracker.ceph.com/issues/57311
1499
    2. https://tracker.ceph.com/issues/57754 -- created a new Tracker; looks unrelated and was also found on a different test branch
1500
    3. https://tracker.ceph.com/issues/57386
1501
    4. https://tracker.ceph.com/issues/52321
1502
    5. https://tracker.ceph.com/issues/55142
1503
    6. https://tracker.ceph.com/issues/57731
1504
    7. https://tracker.ceph.com/issues/57755 -- created a new Tracker; unrelated to PR in this run
1505
    8. https://tracker.ceph.com/issues/57756 -- created a new Tracker; unrelated to PR in this run
1506
    9. https://tracker.ceph.com/issues/57757 -- created a new Tracker; seems unrelated since there was an instance tracked in Telemetry. Also, it is not from the area of code that was touched in this PR.
1507
    10. https://tracker.ceph.com/issues/57546
1508
    11. https://tracker.ceph.com/issues/53575
1509
1510
Details:
1511
    1. rook: ensure CRDs are installed first - Ceph - Orchestrator
1512
    2. test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Ceph - RADOS
1513
    3. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
1514
    4. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
1515
    5. [ERR] : Unhandled exception from module 'devicehealth' while running on mgr.gibba002.nzpbzu: disk I/O error - Ceph - Cephsqlite
1516
    6. Problem: package container-selinux conflicts with udica < 0.2.6-1 provided by udica-0.2.4-1 - Infrastructure
1517
    7. task/test_orch_cli: test_cephfs_mirror times out - Ceph - Orchestrator
1518
    8. upgrade: notify retry canceled due to unrecoverable error after 1 attempts: unexpected status code 404: https://172.21.15.74:8443//api/prometheus_receiver" - Ceph
1519
    9. ECUtil: terminate called after throwing an instance of 'ceph::buffer::v15_2_0::end_of_buffer' - Ceph - RADOS
1520
    10. rados/thrash-erasure-code: wait_for_recovery timeout due to "active+clean+remapped+laggy" pgs - Ceph - RADOS
1521
    11. Valgrind reports memory "Leak_PossiblyLost" errors concerning lib64 - Ceph - RADOS
1522
1523
h3. https://trello.com/c/YRh3jaSk/1636-wip-yuri3-testing-2022-09-21-0921
1524
1525
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri3-testing-2022-09-21-0921
1526
http://pulpito.front.sepia.ceph.com/yuriw-2022-09-26_23:41:33-rados-wip-yuri3-testing-2022-09-26-1342-distro-default-smithi/
1527
1528
Failures, unrelated:
1529
    1. https://tracker.ceph.com/issues/57311
1530
    2. https://tracker.ceph.com/issues/55853
1531
    3. https://tracker.ceph.com/issues/52321
1532
1533
Details:
1534
    1. rook: ensure CRDs are installed first - Ceph - Orchestrator
1535
    2. test_cls_rgw.sh: failures in 'cls_rgw.index_list' and 'cls_rgw.index_list_delimited' - Ceph - RGW
1536
    3. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
1537 27 Laura Flores
1538
h3. https://trello.com/c/6s76bhl0/1605-wip-yuri8-testing-2022-08-22-0646-old-wip-yuri8-testing-2022-08-19-0725-old-wip-yuri8-testing-2022-08-12-0833
1539
1540
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri8-testing-2022-08-22-0646
1541
1542
Failures:
1543
    1. https://tracker.ceph.com/issues/57269
1544
    2. https://tracker.ceph.com/issues/52321
1545
    3. https://tracker.ceph.com/issues/57270
1546
    4. https://tracker.ceph.com/issues/55853
1547
    5. https://tracker.ceph.com/issues/45721
1548
    6. https://tracker.ceph.com/issues/37660
1549
    7. https://tracker.ceph.com/issues/57122
1550
    8. https://tracker.ceph.com/issues/57165
1551
    9. https://tracker.ceph.com/issues/57303
1552
    10. https://tracker.ceph.com/issues/56574
1553
    11. https://tracker.ceph.com/issues/55986
1554
    12. https://tracker.ceph.com/issues/57332
1555
1556
Details:
1557
    1. rook: unable to read URL "https://docs.projectcalico.org/manifests/tigera-operator.yaml" - Ceph - Orchestrator
1558
    2. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
1559
    3. cephadm: RuntimeError: Failed command: apt-get update: E: The repository 'https://download.ceph.com/debian-octopus jammy Release' does not have a Release file. - Ceph - Orchestrator
1560
    4. test_cls_rgw.sh: failures in 'cls_rgw.index_list' and 'cls_rgw.index_list_delimited' - Ceph - RGW
1561
    5. CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_rados.TestWatchNotify.test - Ceph - RADOS
1562
    6. smithi195:'Failing rest of playbook due to missing NVMe card' - Infrastructure - Sepia
1563
    7. test failure: rados:singleton-nomsgr librados_hello_world - Ceph - RADOS
1564
    8. expected valgrind issues and found none - Ceph - RADOS
1565
    9. rados/cephadm: Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=b34ca7d1c2becd6090874ccda56ef4cd8dc64bf7  - Ceph - Orchestrator
1566
    10. rados/valgrind-leaks: cluster [WRN] Health check failed: 2 osds down (OSD_DOWN)" in cluster log - Ceph - RADOS
1567
    11. cephadm: Test failure: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS) - Ceph - Orchestrator
1568
    12. centos 8.stream and rhel 8.6 jobs fail to install ceph-test package due to xmlstarlet dependency - Ceph
1569 26 Laura Flores
1570
h3. https://trello.com/c/0Hp833bV/1613-wip-yuri11-testing-2022-08-24-0658-old-wip-yuri11-testing-2022-08-22-1005
1571
1572
https://pulpito.ceph.com/?branch=wip-yuri11-testing-2022-08-22-1005
1573
http://pulpito.front.sepia.ceph.com/lflores-2022-08-25_17:56:48-rados-wip-yuri11-testing-2022-08-24-0658-distro-default-smithi/
1574
1575
Failures, unrelated:
1576
    1. https://tracker.ceph.com/issues/57122
1577
    2. https://tracker.ceph.com/issues/55986
1578
    3. https://tracker.ceph.com/issues/57270
1579
    4. https://tracker.ceph.com/issues/57165
1580
    5. https://tracker.ceph.com/issues/57207
1581
    6. https://tracker.ceph.com/issues/57268
1582
    7. https://tracker.ceph.com/issues/52321
1583
    8. https://tracker.ceph.com/issues/56573
1584
    9. https://tracker.ceph.com/issues/57163
1585
    10. https://tracker.ceph.com/issues/51282
1586
    11. https://tracker.ceph.com/issues/57310 -- opened a new Tracker for this; first time this has appeared, but it doesn't seem related to the PR tested in this run.
1587
    12. https://tracker.ceph.com/issues/55853
1588
    13. https://tracker.ceph.com/issues/57311 -- opened a new Tracker for this; unrelated to PR tested in this run
1589
1590
Details:
1591
    1. test failure: rados:singleton-nomsgr librados_hello_world - Ceph - RADOS
1592
    2. cephadm: Test failure: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS) - Ceph - Orchestrator
1593
    3. cephadm: RuntimeError: Failed command: apt-get update: E: The repository 'https://download.ceph.com/debian-octopus jammy Release' does not have a Release file. - Ceph - Orchestrator
1594
    4. expected valgrind issues and found none - Ceph - RADOS
1595
    5. AssertionError: Expected to find element: `cd-modal .badge:not(script,style):cy-contains('/^foo$/'), cd-modal .badge[type='submit'][value~='/^foo$/']`, but never found it. - Ceph - Mgr - Dashboard
1596
    6. rook: The CustomResourceDefinition "installations.operator.tigera.io" is invalid - Ceph - Orchestrator
1597
    7. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Rook
1598
    8. test_cephadm.sh: KeyError: 'TYPE' - Ceph - Orchestrator
1599
    9. free(): invalid pointer - Ceph - RADOS
1600
    10. pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings - Ceph - Mgr
1601
    11. StriperTest: The futex facility returned an unexpected error code - Ceph - RADOS
1602
    12. test_cls_rgw.sh: failures in 'cls_rgw.index_list' and 'cls_rgw.index_list_delimited' - Ceph - RADOS
1603
    13. rook: ensure CRDs are installed first - Ceph - Orchestrator
1604 25 Laura Flores
1605
h3. https://trello.com/c/bTwMHBB1/1608-wip-yuri5-testing-2022-08-18-0812-old-wip-yuri5-testing-2022-08-16-0859
1606
1607
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri5-testing-2022-08-18-0812
1608
1609
Failures, unrelated:
1610
    1. https://tracker.ceph.com/issues/57207
1611
    2. https://tracker.ceph.com/issues/52321
1612
    3. https://tracker.ceph.com/issues/57270
1613
    4. https://tracker.ceph.com/issues/57122
1614
    5. https://tracker.ceph.com/issues/55986
1615
    6. https://tracker.ceph.com/issues/57302
1616
1617
Details:
1618
    1. AssertionError: Expected to find element: `cd-modal .badge:not(script,style):cy-contains('/^foo$/'), cd-modal .badge[type='submit'][value~='/^foo$/']`, but never found it. - Ceph - Mgr - Dashboard
1619
    2. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
1620
    3. cephadm: RuntimeError: Failed command: apt-get update: E: The repository 'https://download.ceph.com/debian-octopus jammy Release' does not have a Release file. - Ceph - Orchestrator
1621
    4. test failure: rados:singleton-nomsgr librados_hello_world - Ceph - RADOS
1622
    5. cephadm: Test failure: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS) - Ceph - Orchestrator
1623
    6. ERROR: test_get_status (tasks.mgr.dashboard.test_cluster.ClusterTest) mgr/dashboard: short_description - Ceph - Mgr - Dashboard
1624 24 Laura Flores
1625
h3. https://trello.com/c/TMFa8xSl/1581-wip-yuri8-testing-2022-07-18-0918-old-wip-yuri8-testing-2022-07-12-1008-old-wip-yuri8-testing-2022-07-11-0903
1626
1627
https://pulpito.ceph.com/?branch=wip-yuri8-testing-2022-07-18-0918
1628
1629
Failures, unrelated:
1630
    1. https://tracker.ceph.com/issues/56573
1631
    2. https://tracker.ceph.com/issues/56574
1632
    3. https://tracker.ceph.com/issues/52321
1633
    4. https://tracker.ceph.com/issues/55854
1634
    5. https://tracker.ceph.com/issues/53422
1635
    6. https://tracker.ceph.com/issues/55853
1636
    7. https://tracker.ceph.com/issues/52124
1637
1638
Details:
1639
    1. test_cephadm.sh: KeyError: 'TYPE' - Ceph - Orchestrator
1640
    2. rados/valgrind-leaks: cluster [WRN] Health check failed: 2 osds down (OSD_DOWN)" in cluster log - Ceph - RADOS
1641
    3. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Rook
1642
    4. Datetime AssertionError in test_health_history (tasks.mgr.test_insights.TestInsights) - Ceph - Mgr
1643
    5. tasks.cephfs.test_nfs.TestNFS.test_export_create_with_non_existing_fsname: AssertionError: NFS Ganesha cluster deployment failed - Ceph - Orchestrator
1644
    6. test_cls_rgw.sh: failures in 'cls_rgw.index_list' and 'cls_rgw.index_list_delimited' - Ceph - RGW
1645
    7. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
1646 23 Laura Flores
1647
h3. https://trello.com/c/8wxrTRRy/1558-wip-yuri5-testing-2022-06-16-0649
1648
1649
https://pulpito.ceph.com/yuriw-2022-06-16_18:33:18-rados-wip-yuri5-testing-2022-06-16-0649-distro-default-smithi/
1650
https://pulpito.ceph.com/yuriw-2022-06-17_13:52:49-rados-wip-yuri5-testing-2022-06-16-0649-distro-default-smithi/
1651
1652
Failures, unrelated:
1653
    1. https://tracker.ceph.com/issues/55853
1654
    2. https://tracker.ceph.com/issues/52321
1655
    3. https://tracker.ceph.com/issues/45721
1656
    4. https://tracker.ceph.com/issues/55986
1657
    5. https://tracker.ceph.com/issues/44595
1658
    6. https://tracker.ceph.com/issues/55854
1659
    7. https://tracker.ceph.com/issues/56097 -- opened a new Tracker for this; it has occurred previously on a Pacific test branch, so it does not seem related to this PR.
1660
    8. https://tracker.ceph.com/issues/56098 -- opened a new Tracker for this; this is the first sighting that I am aware of, but it does not seem related to the tested PR.
1661
1662
Details:
1663
    1. test_cls_rgw.sh: failures in 'cls_rgw.index_list' and 'cls_rgw.index_list_delimited' - Ceph - RGW
1664
    2. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
1665
    3. CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_rados.TestWatchNotify.test - Ceph - RADOS
1666
    4. cephadm: Test failure: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS) - Ceph - Cephadm
1667
    5. cache tiering: Error: oid 48 copy_from 493 returned error code -2 - Ceph - RADOS
1668
    6. Datetime AssertionError in test_health_history (tasks.mgr.test_insights.TestInsights) - Ceph - Mgr
1669
    7. Timeout on `sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.1 flush_pg_stats` - Ceph - RADOS
1670
    8. api_tier_pp: failure on LibRadosTwoPoolsPP.ManifestRefRead - Ceph - RADOS
1671 22 Laura Flores
1672
h3. https://trello.com/c/eGWSLHXA/1550-wip-yuri8-testing-2022-06-13-0701-old-wip-yuri8-testing-2022-06-07-1522
1673
1674
http://pulpito.front.sepia.ceph.com/yuriw-2022-06-13_16:37:57-rados-wip-yuri8-testing-2022-06-13-0701-distro-default-smithi/
1675
1676
Failures, unrelated:
1677
    1. https://tracker.ceph.com/issues/53575
1678
    2. https://tracker.ceph.com/issues/55986
1679
    3. https://tracker.ceph.com/issues/55853
1680
    4. https://tracker.ceph.com/issues/52321
1681
    5. https://tracker.ceph.com/issues/55741
1682
    6. https://tracker.ceph.com/issues/51835
1683
1684
Details:
1685
    1. Valgrind reports memory "Leak_PossiblyLost" errors concerning lib64 - Ceph - RADOS
1686
    2. cephadm: Test failure: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS) - Ceph - Orchestrator
1687
    3. test_cls_rgw.sh: failures in 'cls_rgw.index_list' and 'cls_rgw.index_list_delimited' - Ceph - RGW
1688
    4. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
1689
    5. cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal .custom-control-label` when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard
1690
    6. mgr/DaemonServer.cc: FAILED ceph_assert(pending_service_map.epoch > service_map.epoch) - Ceph - RADOS
1691 21 Laura Flores
1692
h3. https://trello.com/c/HGpb1F4j/1549-wip-yuri7-testing-2022-06-13-0706-old-wip-yuri7-testing-2022-06-07-1325
1693
1694
http://pulpito.front.sepia.ceph.com/yuriw-2022-06-13_16:36:31-rados-wip-yuri7-testing-2022-06-13-0706-distro-default-smithi/
1695
1696
Failures, unrelated:
1697
    1. https://tracker.ceph.com/issues/55986
1698
    2. https://tracker.ceph.com/issues/52321
1699
    3. https://tracker.ceph.com/issues/52124
1700
    4. https://tracker.ceph.com/issues/52316
1701
    5. https://tracker.ceph.com/issues/55322
1702
    6. https://tracker.ceph.com/issues/55741
1703
    7. https://tracker.ceph.com/issues/56034 --> new Tracker; unrelated to the PRs in this run.
1704
1705
Details:
1706
    1. cephadm: Test failure: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS) - Ceph - Orchestrator
1707
    2. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
1708
    3. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
1709
    4. qa/tasks/mon_thrash.py: _do_thrash AssertionError len(s['quorum']) == len(mons) - Ceph - RADOS
1710
    5. test-restful.sh: mon metadata unable to be retrieved - Ceph - Mgr
1711
    6. cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal .custom-control-label` when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard
1712
    7. qa/standalone/osd/divergent-priors.sh fails in test TEST_divergent_3() - Ceph - RADOS
1713 20 Laura Flores
1714
h3. https://trello.com/c/SUV9RgLi/1552-wip-yuri3-testing-2022-06-09-1314
1715
1716
http://pulpito.front.sepia.ceph.com/yuriw-2022-06-09_22:06:32-rados-wip-yuri3-testing-2022-06-09-1314-distro-default-smithi/
1717
1718
Failures, unrelated:
1719
    1. https://tracker.ceph.com/issues/52321
1720
    2. https://tracker.ceph.com/issues/55971
1721
    3. https://tracker.ceph.com/issues/55853
1722
    4. https://tracker.ceph.com/issues/56000 --> opened a new Tracker for this; unrelated to the PR tested in this run.
1723
    5. https://tracker.ceph.com/issues/55741
1724
    6. https://tracker.ceph.com/issues/55142
1725
1726
Details:
1727
    1. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
1728
    2. LibRadosMiscConnectFailure.ConnectFailure test failure - Ceph - CephFS
1729
    3. test_cls_rgw.sh: failures in 'cls_rgw.index_list' and 'cls_rgw.index_list_delimited' - Ceph - RGW
1730
    4. task/test_nfs: ERROR: Daemon not found: mds.a.smithi060.ujwxef. See `cephadm ls` - Ceph - CephFS
1731
    5. cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal .custom-control-label` when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard
1732
    6. [ERR] : Unhandled exception from module 'devicehealth' while running on mgr.gibba002.nzpbzu: disk I/O error - Ceph - cephsqlite
1733 19 Laura Flores
1734
h3. https://trello.com/c/MaWPkMXi/1544-wip-yuri7-testing-2022-06-02-1633
1735
1736
http://pulpito.front.sepia.ceph.com/yuriw-2022-06-03_14:09:08-rados-wip-yuri7-testing-2022-06-02-1633-distro-default-smithi/
1737
1738
Failures, unrelated:
1739
    1. https://tracker.ceph.com/issues/55741
1740
    2. https://tracker.ceph.com/issues/52321
1741
    3. https://tracker.ceph.com/issues/55808
1742
    4. https://tracker.ceph.com/issues/55853 --> opened a new Tracker for this; unrelated to the PR tested in this run.
1743
    5. https://tracker.ceph.com/issues/55854 --> opened a new Tracker for this; unrelated to the PR tested in this run.
1744
    6. https://tracker.ceph.com/issues/55856 --> opened a new Tracker for this; unrelated to the PR tested in this run.
1745
1746
Details:
1747
    1. cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal .custom-control-label` when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard
1748
    2. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
1749
    3. task/test_nfs: KeyError: 'events' - Ceph - CephFS
1750
    4. test_cls_rgw.sh: failures in 'cls_rgw.index_list' and 'cls_rgw.index_list_delimited' - Ceph - RGW
1751
    5. Datetime AssertionError in test_health_history (tasks.mgr.test_insights.TestInsights) - Ceph - Mgr
1752
    6. ObjectStore/StoreTest.CompressionTest/2 fails when a collection expects an object not to exist, but it does - Ceph - BlueStore
1753 18 Laura Flores
1754
h3. https://trello.com/c/BYYdvJNP/1536-wip-yuri-testing-2022-05-27-0934
1755
1756
http://pulpito.front.sepia.ceph.com/yuriw-2022-05-27_21:59:17-rados-wip-yuri-testing-2022-05-27-0934-distro-default-smithi/
1757
http://pulpito.front.sepia.ceph.com/yuriw-2022-05-28_13:38:20-rados-wip-yuri-testing-2022-05-27-0934-distro-default-smithi/
1758
1759
Failures, unrelated:
1760
    1. https://tracker.ceph.com/issues/51904
1761
    2. https://tracker.ceph.com/issues/51835
1762
    3. https://tracker.ceph.com/issues/52321
1763
    4. https://tracker.ceph.com/issues/55741
1764
    5. https://tracker.ceph.com/issues/52124
1765
    6. https://tracker.ceph.com/issues/55142
1766
    7. https://tracker.ceph.com/issues/55808 -- opened a new Tracker for this issue; it is unrelated to the PRs that were tested.
1767
    8. https://tracker.ceph.com/issues/55809 -- opened a new Tracker for this; it is unrelated to the PRs that were tested.
1768
1769
Details:
1770
    1. AssertionError: wait_for_clean: failed before timeout expired due to down PGs - Ceph - RADOS
1771
    2. mgr/DaemonServer.cc: FAILED ceph_assert(pending_service_map.epoch > service_map.epoch) - Ceph - Mgr
1772
    3. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
1773
    4. cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal .custom-control-label` when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard
1774
    5. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
1775
    6. [ERR] : Unhandled exception from module 'devicehealth' while running on mgr.gibba002.nzpbzu: disk I/O error - Ceph - CephSqlite
1776
    7. task/test_nfs: KeyError: 'events' - Ceph - CephFS
1777
    8. "Leak_IndirectlyLost" valgrind report on mon.c - Ceph - RADOS
1778 17 Laura Flores
1779
h3. https://trello.com/c/JWN6xaC5/1534-wip-yuri7-testing-2022-05-18-1636
1780
1781
http://pulpito.front.sepia.ceph.com/yuriw-2022-05-19_01:43:57-rados-wip-yuri7-testing-2022-05-18-1636-distro-default-smithi/
1782
1783
1784
Failures, unrelated:
1785
    1. https://tracker.ceph.com/issues/52124
1786
    2. https://tracker.ceph.com/issues/55741
1787
    3. https://tracker.ceph.com/issues/51835
1788
    4. https://tracker.ceph.com/issues/52321
1789
1790
Details:
1791
    1. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
1792
    2. cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal .custom-control-label` when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard
1793
    3. mgr/DaemonServer.cc: FAILED ceph_assert(pending_service_map.epoch > service_map.epoch) - Ceph - RADOS
1794
    4. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
1795 16 Laura Flores
1796
h3. https://trello.com/c/NXVtDT7z/1505-wip-yuri2-testing-2022-04-22-0500-old-yuri2-testing-2022-04-18-1150
1797
1798
http://pulpito.front.sepia.ceph.com/yuriw-2022-04-22_13:56:48-rados-wip-yuri2-testing-2022-04-22-0500-distro-default-smithi/
1799
http://pulpito.front.sepia.ceph.com/yuriw-2022-04-23_16:21:59-rados-wip-yuri2-testing-2022-04-22-0500-distro-default-smithi/
1800
http://pulpito.front.sepia.ceph.com/lflores-2022-04-25_16:23:25-rados-wip-yuri2-testing-2022-04-22-0500-distro-default-smithi/
1801
1802
Failures, unrelated:
1803
    https://tracker.ceph.com/issues/55419
1804
    https://tracker.ceph.com/issues/55429
1805
    https://tracker.ceph.com/issues/54458
1806
1807
Details:
1808
    1. cephtool/test.sh: failure on blocklist testing - Ceph - RADOS
1809
    2. mgr/dashboard: AttributeError: 'NoneType' object has no attribute 'group' - Ceph - Mgr - Dashboard
1810
    3. osd-scrub-snaps.sh: TEST_scrub_snaps failed due to malformed log message - Ceph - RADOS
1811 15 Laura Flores
1812
h3. https://trello.com/c/s7NuYSTa/1509-wip-yuri2-testing-2022-04-13-0703
1813
1814
https://pulpito.ceph.com/nojha-2022-04-13_16:47:41-rados-wip-yuri2-testing-2022-04-13-0703-distro-basic-smithi/
1815
1816
Failures, unrelated:
1817
    https://tracker.ceph.com/issues/53789
1818
    https://tracker.ceph.com/issues/55322
1819
    https://tracker.ceph.com/issues/55323
1820
1821
Details:
1822
    1. CommandFailedError (rados/test_python.sh): "RADOS object not found" causes test_rados.TestWatchNotify.test_aio_notify to fail - Ceph - RADOS
1823
    2. test-restful.sh: mon metadata unable to be retrieved - Ceph - RADOS
1824
    3. cephadm/test_dashboard_e2e.sh: cypress "500: Internal Server Error" caused by missing password - Ceph - Mgr - Dashboard
1825
1826
h3. https://trello.com/c/1yaPNXSG/1507-wip-yuri7-testing-2022-04-11-1139
1827
1828
http://pulpito.front.sepia.ceph.com/yuriw-2022-04-11_23:40:29-rados-wip-yuri7-testing-2022-04-11-1139-distro-default-smithi/
1829
http://pulpito.front.sepia.ceph.com/yuriw-2022-04-12_15:25:49-rados-wip-yuri7-testing-2022-04-11-1139-distro-default-smithi/
1830
1831
Failures, unrelated:
1832
    https://tracker.ceph.com/issues/55295
1833
    https://tracker.ceph.com/issues/54372
1834
1835
Details:
1836
    1. Dead job caused by "AttributeError: 'NoneType' object has no attribute '_fields'" on smithi055 - Infrastructure - Sepia
1837
    2. No module named 'tasks' - Infrastructure
1838 14 Laura Flores
1839
h3. https://trello.com/c/nJwB8bHf/1497-wip-yuri3-testing-2022-04-01-0659-old-wip-yuri3-testing-2022-03-31-1158
1840
1841
http://pulpito.front.sepia.ceph.com/yuriw-2022-04-01_17:44:32-rados-wip-yuri3-testing-2022-04-01-0659-distro-default-smithi/
1842
http://pulpito.front.sepia.ceph.com/yuriw-2022-04-02_01:57:28-rados-wip-yuri3-testing-2022-04-01-0659-distro-default-smithi/
1843
http://pulpito.front.sepia.ceph.com/yuriw-2022-04-02_14:56:39-rados-wip-yuri3-testing-2022-04-01-0659-distro-default-smithi/
1844
1845
Failures, unrelated:
1846
    https://tracker.ceph.com/issues/53422
1847
    https://tracker.ceph.com/issues/47838
1848
    https://tracker.ceph.com/issues/47025
1849
    https://tracker.ceph.com/issues/51076
1850
    https://tracker.ceph.com/issues/55178
1851
1852
Details:
1853
    1. tasks.cephfs.test_nfs.TestNFS.test_export_create_with_non_existing_fsname: AssertionError: NFS Ganesha cluster deployment failed - Ceph - Orchestrator
1854
    2. mon/test_mon_osdmap_prune.sh: first_pinned != trim_to - Ceph - RADOS
1855
    3. rados/test.sh: api_watch_notify_pp LibRadosWatchNotifyECPP.WatchNotify failed - Ceph - RADOS
1856
    4. "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend. - Ceph - RADOS
1857
    5. osd-scrub-test.sh: TEST_scrub_extended_sleep times out - Ceph - RADOS
1858 13 Laura Flores
1859
h3. https://trello.com/c/QxTQADSe/1487-wip-yuri-testing-2022-03-24-0726-old-wip-yuri-testing-2022-03-23-1337
1860
1861
http://pulpito.front.sepia.ceph.com/yuriw-2022-03-24_16:44:32-rados-wip-yuri-testing-2022-03-24-0726-distro-default-smithi/
1862
1863
Failures, unrelated:
1864
    https://tracker.ceph.com/issues/54990
1865
    https://tracker.ceph.com/issues/52124
1866
    https://tracker.ceph.com/issues/51904
1867 12 Laura Flores
1868
h3. https://trello.com/c/p6Ew1Pq4/1481-wip-yuri7-testing-2022-03-21-1529
1869
1870
http://pulpito.front.sepia.ceph.com:80/yuriw-2022-03-22_00:42:53-rados-wip-yuri7-testing-2022-03-21-1529-distro-default-smithi/
1871
1872
Failures, unrelated:
1873
    https://tracker.ceph.com/issues/53680
1874
    https://tracker.ceph.com/issues/52320
1875
    https://tracker.ceph.com/issues/52657
1876
1877
h3. https://trello.com/c/v331Ll3Y/1478-wip-yuri6-testing-2022-03-18-1104-old-wip-yuri6-testing-2022-03-17-1547
1878
1879
http://pulpito.front.sepia.ceph.com/yuriw-2022-03-19_14:37:23-rados-wip-yuri6-testing-2022-03-18-1104-distro-default-smithi/
1880
1881
Failures, unrelated:
1882
    https://tracker.ceph.com/issues/54990
1883
    https://tracker.ceph.com/issues/54329
1884
    https://tracker.ceph.com/issues/53680
1885
    https://tracker.ceph.com/issues/49888
1886
    https://tracker.ceph.com/issues/52124
1887
    https://tracker.ceph.com/issues/55001
1888
    https://tracker.ceph.com/issues/52320
1889
    https://tracker.ceph.com/issues/55009
1890
1891
h3. https://trello.com/c/hrDifkIO/1471-wip-yuri3-testing-2022-03-09-1350
1892
1893
http://pulpito.front.sepia.ceph.com/yuriw-2022-03-10_02:41:10-rados-wip-yuri3-testing-2022-03-09-1350-distro-default-smithi/
1894
1895
Failures, unrelated:
1896
    https://tracker.ceph.com/issues/54529
1897
    https://tracker.ceph.com/issues/54307
1898
    https://tracker.ceph.com/issues/51076
1899
    https://tracker.ceph.com/issues/53680
1900
1901
Details:
1902
    1. mon/mon-bind.sh: Failure due to cores found
1903
    2. test_cls_rgw.sh: 'index_list_delimited' test times out
1904
    3. "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend. - Ceph - RADOS
1905
    4. ERROR:tasks.rook:'waiting for service removal' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
1906
1907
h3. https://trello.com/c/6g22dJPJ/1469-wip-yuri5-testing-2022-03-07-0958
1908
1909
Failures, unrelated:
1910
    https://tracker.ceph.com/issues/48873
1911
    https://tracker.ceph.com/issues/53680
1912
1913
h3. https://trello.com/c/CcFET7cb/1470-wip-yuri-testing-2022-03-07-0958
1914
1915
http://pulpito.front.sepia.ceph.com/yuriw-2022-03-07_22:06:10-rados-wip-yuri-testing-2022-03-07-0958-distro-default-smithi/
1916
1917
Failures, unrelated:
1918
    https://tracker.ceph.com/issues/50280
1919
    https://tracker.ceph.com/issues/53680
1920
    https://tracker.ceph.com/issues/51076
1921
1922
Details:
1923
    1. cephadm: RuntimeError: uid/gid not found - Ceph
1924
    2. ERROR:tasks.rook:'waiting for service removal' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
1925
    3. "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend. - Ceph - RADOS
1926 11 Laura Flores
1927
h3. https://trello.com/c/IclLwlHA/1467-wip-yuri4-testing-2022-03-01-1206
1928
1929
http://pulpito.front.sepia.ceph.com/yuriw-2022-03-01_22:42:19-rados-wip-yuri4-testing-2022-03-01-1206-distro-default-smithi/
1930
http://pulpito.front.sepia.ceph.com/yuriw-2022-03-02_15:47:04-rados-wip-yuri4-testing-2022-03-01-1206-distro-default-smithi/
1931
1932
Failures, unrelated:
1933
    https://tracker.ceph.com/issues/52124
1934
    https://tracker.ceph.com/issues/52320
1935
    https://tracker.ceph.com/issues/53680
1936
1937
Details:
1938
    1. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
1939
    2. unable to get monitor info from DNS SRV with service name: ceph-mon - Ceph - Orchestrator
1940
    3. ERROR:tasks.rook:'waiting for service removal' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
1941 10 Laura Flores
1942 9 Laura Flores
h3. https://trello.com/c/81yzd6MX/1434-wip-yuri6-testing-2022-02-14-1456-old-wip-yuri6-testing-2022-01-26-1547
1943 10 Laura Flores
1944
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri6-testing-2022-02-14-1456
1945
1946
Failures, unrelated:
1947
    https://tracker.ceph.com/issues/52124
1948
    https://tracker.ceph.com/issues/51076
1949
    https://tracker.ceph.com/issues/54438
1950
1951
Details:
1952
    1. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
1953
    2. "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend. - Ceph - RADOS
1954 9 Laura Flores
    3. test/objectstore/store_test.cc: FAILED ceph_assert(bl_eq(state->contents[noid].data, r2)) in function 'virtual void SyntheticWorkloadState::C_SyntheticOnClone::finish(int)' - Ceph - RADOS
1955
1956
h3. https://trello.com/c/9GAwJxub/1450-wip-yuri4-testing-2022-02-18-0800-old-wip-yuri4-testing-2022-02-14-1512
1957
1958
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2022-02-18-0800
1959
1960
Failures, unrelated:
1961
    https://tracker.ceph.com/issues/45721
1962
    https://tracker.ceph.com/issues/53422
1963
    https://tracker.ceph.com/issues/51627
1964
    https://tracker.ceph.com/issues/53680
1965
    https://tracker.ceph.com/issues/52320
1966
    https://tracker.ceph.com/issues/52124
1967
1968
Details:
1969
    1. CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_rados.TestWatchNotify.test - Ceph - RADOS
1970
    2. tasks.cephfs.test_nfs.TestNFS.test_export_create_with_non_existing_fsname: AssertionError: NFS Ganesha cluster deployment failed - Ceph - Orchestrator
1971
    3. FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get_missing().is_missing(soid) || (it_objects != recovery_state.get_pg_log().get_log().objects.end() && it_objects->second->op == pg_log_entry_t::LOST_REVERT)) - Ceph - RADOS
1972
    4. ERROR:tasks.rook:'waiting for service removal' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
1973
    5. unable to get monitor info from DNS SRV with service name: ceph-mon - Ceph - Orchestrator
1974
    6. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
1975 8 Laura Flores
1976
h3. https://trello.com/c/ba4bDdJQ/1457-wip-yuri3-testing-2022-02-17-1256
1977
1978
http://pulpito.front.sepia.ceph.com/yuriw-2022-02-17_22:49:55-rados-wip-yuri3-testing-2022-02-17-1256-distro-default-smithi/
1979
http://pulpito.front.sepia.ceph.com/yuriw-2022-02-21_20:37:48-rados-wip-yuri3-testing-2022-02-17-1256-distro-default-smithi/
1980
1981
Failures, unrelated:
1982
    https://tracker.ceph.com/issues/49287
1983
    https://tracker.ceph.com/issues/54086
1984
    https://tracker.ceph.com/issues/51076
1985
    https://tracker.ceph.com/issues/54360
1986
1987
Details:
1988
    Bug_#49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
1989
    Bug_#54086: Permission denied when trying to unlink and open /var/log/ntpstats/... - Tools - Teuthology
1990
    Bug_#51076: "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend. - Ceph - RADOS
1991
    Bug_#54360: Dead job at "Finished running handlers" in rados/cephadm/osds/.../rm-zap-wait - Ceph
1992
1993
h3. https://trello.com/c/qSYwEWdA/1453-wip-yuri11-testing-2022-02-15-1643
1994
1995
http://pulpito.front.sepia.ceph.com/yuriw-2022-02-16_15:53:49-rados-wip-yuri11-testing-2022-02-15-1643-distro-default-smithi/
1996
1997
Failures, unrelated:
1998
    https://tracker.ceph.com/issues/54307
1999
    https://tracker.ceph.com/issues/54306
2000
    https://tracker.ceph.com/issues/52124
2001
2002
Details:
2003
    test_cls_rgw.sh: 'index_list_delimited' test times out - Ceph - RGW
2004
    tasks.cephfs.test_nfs.TestNFS.test_create_multiple_exports: AssertionError: NFS Ganesha cluster deployment failed - Ceph - Orchestrator
2005
    Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
2006
2007
h3. https://trello.com/c/ubP4w0OV/1438-wip-yuri5-testing-2022-02-09-1322-pacific-old-wip-yuri5-testing-2022-02-08-0733-pacific-old-wip-yuri5-testing-2022-02-02-0936-pa
2008
2009
http://pulpito.front.sepia.ceph.com/yuriw-2022-02-09_22:52:18-rados-wip-yuri5-testing-2022-02-09-1322-pacific-distro-default-smithi/
2010
http://pulpito.front.sepia.ceph.com/yuriw-2022-02-08_17:00:23-rados-wip-yuri5-testing-2022-02-08-0733-pacific-distro-default-smithi/
2011
2012
Failures, unrelated:
2013
    https://tracker.ceph.com/issues/53501
2014
    https://tracker.ceph.com/issues/51234
2015
    https://tracker.ceph.com/issues/52124
2016
    https://tracker.ceph.com/issues/48997
2017
    https://tracker.ceph.com/issues/45702
2018
    https://tracker.ceph.com/issues/50222
2019
    https://tracker.ceph.com/issues/51904
2020
2021
Details:
2022
    Bug_#53501: Exception when running 'rook' task. - Ceph - Orchestrator
2023
    Bug_#51234: LibRadosService.StatusFormat failed, Expected: (0) != (retry), actual: 0 vs 0 - Ceph - RADOS
2024
    Bug_#52124: Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
2025
    Bug_#48997: rados/singleton/all/recovery-preemption: defer backfill|defer recovery not found in logs - Ceph - RADOS
2026
    Bug_#45702: PGLog::read_log_and_missing: ceph_assert(miter == missing.get_items().end() || (miter->second.need == i->version && miter->second.have == eversion_t())) - Ceph - RADOS
2027
    Bug_#50222: osd: 5.2s0 deep-scrub : stat mismatch - Ceph - RADOS
2028
    Bug_#51904: AssertionError: wait_for_clean: failed before timeout expired due to down PGs - Ceph - RADOS
2029
2030
h3. https://trello.com/c/djEk6FIL/1441-wip-yuri2-testing-2022-02-04-1646-pacific-old-wip-yuri2-testing-2022-02-04-1646-pacific-old-wip-yuri2-testing-2022-02-04-1559-pa
2031
2032
Failures:
2033
    https://tracker.ceph.com/issues/54086
2034
    https://tracker.ceph.com/issues/54071
2035
    https://tracker.ceph.com/issues/53501
2036
    https://tracker.ceph.com/issues/51904
2037
    https://tracker.ceph.com/issues/54210
2038
    https://tracker.ceph.com/issues/54211
2039
    https://tracker.ceph.com/issues/54212
2040
2041
Details:
2042
    Bug_#54086: pacific: tasks/dashboard: Permission denied when trying to unlink and open /var/log/ntpstats/... - Tools - Teuthology
2043
    Bug_#54071: rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
2044
    Bug_#53501: Exception when running 'rook' task. - Ceph - Orchestrator
2045
    Bug_#51904: AssertionError: wait_for_clean: failed before timeout expired due to down PGs - Ceph - RADOS
2046
    
2047
    Bug_#54218: mon/pg_autoscaler.sh: echo failed on "bash -c 'ceph osd pool get a pg_num | grep 256'" - Ceph - RADOS (see the sketch after this list)
2048
    Bug_#54211: pacific: test_devicehealth failure due to RADOS object not found (error opening pool 'device_health_metrics') - Ceph - Mgr
2049
    Bug_#54212: pacific: test_pool_configuration fails due to "AssertionError: 400 != 200" - Ceph - Mgr
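As an aside, the pg_autoscaler check quoted above greps human-readable output for an exact pg_num; a less fragile JSON-based variant might look like the sketch below (illustrative only, assuming the JSON output exposes a pg_num field; not the test's actual code):

<pre>
import json
import subprocess

def get_pg_num(pool: str) -> int:
    # Assumes 'ceph osd pool get <pool> pg_num -f json' prints JSON with a "pg_num" field.
    out = subprocess.check_output(
        ["ceph", "osd", "pool", "get", pool, "pg_num", "-f", "json"])
    return int(json.loads(out)["pg_num"])

# JSON-based equivalent of the quoted check: ceph osd pool get a pg_num | grep 256
assert get_pg_num("a") == 256, "pool 'a' has not reached pg_num 256 yet"
</pre>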
2050
2051
h3. yuriw-2022-01-27_15:09:25-rados-wip-yuri6-testing-2022-01-26-1547-distro-default-smithi
2052
2053
https://trello.com/c/81yzd6MX/1434-wip-yuri6-testing-2022-01-26-1547
2054
http://pulpito.front.sepia.ceph.com/yuriw-2022-01-27_15:09:25-rados-wip-yuri6-testing-2022-01-26-1547-distro-default-smithi/
2055
2056
Failures:
2057
    https://tracker.ceph.com/issues/53767
2058
    https://tracker.ceph.com/issues/50192
2059
2060
Details:
2061
    Bug_#53767: qa/workunits/cls/test_cls_2pc_queue.sh: killing an osd during thrashing causes timeout - Ceph - RADOS
2062
    Bug_#50192: FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get_missing().is_missing(soid) || (it_objects != recovery_state.get_pg_log().get_log().objects.end() && it_objects->second->op == pg_log_entry_t::LOST_REVERT)) - Ceph - RADOS
2063
2064
h3. yuriw-2022-01-27_14:57:16-rados-wip-yuri-testing-2022-01-26-1810-pacific-distro-default-smithi
2065
2066
http://pulpito.front.sepia.ceph.com/yuriw-2022-01-27_14:57:16-rados-wip-yuri-testing-2022-01-26-1810-pacific-distro-default-smithi/
2067
https://trello.com/c/qoIF7T3R/1416-wip-yuri-testing-2022-01-26-1810-pacific-old-wip-yuri-testing-2022-01-07-0928-pacific
2068
2069
Failures, unrelated:
2070
    https://tracker.ceph.com/issues/54071
2071
    https://tracker.ceph.com/issues/53501
2072
    https://tracker.ceph.com/issues/50280
2073
    https://tracker.ceph.com/issues/45318
2074
    https://tracker.ceph.com/issues/54086
2075
    https://tracker.ceph.com/issues/51076
2076
2077
Details:
2078
    Bug_#54071: rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
2079
    Bug_#53501: Exception when running 'rook' task. - Ceph - Orchestrator
2080
    Bug_#50280: cephadm: RuntimeError: uid/gid not found - Ceph
2081
    Bug_#45318: octopus: "Health check failed: 2/6 mons down, quorum b,a,c,e (MON_DOWN)" in cluster log running tasks/mon_clock_no_skews.yaml - Ceph - RADOS
2082
    Bug_#54086: pacific: tasks/dashboard: Permission denied when trying to unlink and open /var/log/ntpstats/... - Tools - Teuthology
2083
    Bug_#51076: "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend. - Ceph - RADOS
2084
2085
h3. yuriw-2022-01-24_17:43:02-rados-wip-yuri2-testing-2022-01-21-0949-pacific-distro-default-smithi
2086
2087
http://pulpito.front.sepia.ceph.com/yuriw-2022-01-24_17:43:02-rados-wip-yuri2-testing-2022-01-21-0949-pacific-distro-default-smithi/
2088
2089
Failures:
2090
    https://tracker.ceph.com/issues/53857
2091
    https://tracker.ceph.com/issues/53501
2092
    https://tracker.ceph.com/issues/54071
2093
2094
Details:
2095
    Bug_#53857: qa: fs:upgrade test fails mds count check - Ceph - CephFS
2096
    Bug_#53501: Exception when running 'rook' task. - Ceph - Orchestrator
2097
    Bug_#54071: rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
2098
2099
h3. yuriw-2022-01-24_18:01:47-rados-wip-yuri10-testing-2022-01-24-0810-octopus-distro-default-smithi
2100
2101
http://pulpito.front.sepia.ceph.com/yuriw-2022-01-24_18:01:47-rados-wip-yuri10-testing-2022-01-24-0810-octopus-distro-default-smithi/
2102
2103
Failures, unrelated:
2104
    https://tracker.ceph.com/issues/50280
2105
    https://tracker.ceph.com/issues/45318
2106
2107
Details:
2108
    Bug_#50280: cephadm: RuntimeError: uid/gid not found - Ceph
2109
    Bug_#45318: octopus: "Health check failed: 2/6 mons down, quorum b,a,c,e (MON_DOWN)" in cluster log running tasks/mon_clock_no_skews.yaml - Ceph - RADOS
2110
2111
h3. yuriw-2022-01-21_15:22:24-rados-wip-yuri7-testing-2022-01-20-1609-distro-default-smithi
2112
2113
http://pulpito.front.sepia.ceph.com/yuriw-2022-01-21_15:22:24-rados-wip-yuri7-testing-2022-01-20-1609-distro-default-smithi/
2114
2115
Failures, unrelated:
2116
    https://tracker.ceph.com/issues/53843
2117
    https://tracker.ceph.com/issues/53827
2118
    https://tracker.ceph.com/issues/49287
2119
    https://tracker.ceph.com/issues/53807
2120
    
2121
Details:
2122
    Bug_#53843: mgr/dashboard: Error - yargs parser supports a minimum Node.js version of 12. - Ceph - Mgr - Dashboard
2123
    Bug_#53827: cephadm exited with error code when creating osd: Input/Output error. Faulty NVME? - Infrastructure - Sepia
2124
    Bug_#49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
2125
    Bug_#53807: Dead jobs in rados/cephadm/smoke-roleless{...} - Ceph - Orchestrator
2126
2127
h3. yuriw-2022-01-15_05:47:18-rados-wip-yuri8-testing-2022-01-14-1551-distro-default-smithi
2128
2129
http://pulpito.front.sepia.ceph.com/yuriw-2022-01-15_05:47:18-rados-wip-yuri8-testing-2022-01-14-1551-distro-default-smithi/
2130
http://pulpito.front.sepia.ceph.com/yuriw-2022-01-17_17:14:22-rados-wip-yuri8-testing-2022-01-14-1551-distro-default-smithi/
2131
2132
2133
Failures, unrelated:
2134
    https://tracker.ceph.com/issues/45721
2135
    https://tracker.ceph.com/issues/50280
2136
    https://tracker.ceph.com/issues/53827
2137
    https://tracker.ceph.com/issues/51076
2138
    https://tracker.ceph.com/issues/53807
2139
    https://tracker.ceph.com/issues/53842
2140
2141
Details:
2142
    Bug_#45721: CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_rados.TestWatchNotify.test
2143
    Bug_#50280: cephadm: RuntimeError: uid/gid not found - Ceph
2144
    Bug_#53827: cephadm exited with error code when creating osd. - Ceph - Orchestrator
2145
    Bug_#51076: "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2146
    Bug_#53807: Dead jobs in rados/cephadm/smoke-roleless{...} - Ceph - Orchestrator
2147
    Bug_#53842: cephadm/mds_upgrade_sequence: KeyError: 'en***'
2148
2149
h3. yuriw-2022-01-17_17:05:17-rados-wip-yuri6-testing-2022-01-14-1207-distro-default-smithi
2150
2151
http://pulpito.front.sepia.ceph.com/yuriw-2022-01-14_23:22:09-rados-wip-yuri6-testing-2022-01-14-1207-distro-default-smithi/
2152
http://pulpito.front.sepia.ceph.com/yuriw-2022-01-17_17:05:17-rados-wip-yuri6-testing-2022-01-14-1207-distro-default-smithi/
2153
2154
Failures, unrelated:
2155
    https://tracker.ceph.com/issues/53843
2156
    https://tracker.ceph.com/issues/53872
2157
    https://tracker.ceph.com/issues/45721
2158
    https://tracker.ceph.com/issues/53807
2159
2160
Details:
2161
    Bug_#53843: mgr/dashboard: Error - yargs parser supports a minimum Node.js version of 12. - Ceph - Mgr - Dashboard
2162
    Bug_#53872: Errors detected in generated GRUB config file
2163
    Bug_#45721: CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_rados.TestWatchNotify.test
2164
    Bug_#53807: Dead jobs in rados/cephadm/smoke-roleless{...} - Ceph - Orchestrator
2165
2166
h3. yuriw-2022-01-13_14:57:55-rados-wip-yuri5-testing-2022-01-12-1534-distro-default-smithi
2167
2168
http://pulpito.front.sepia.ceph.com/yuriw-2022-01-13_14:57:55-rados-wip-yuri5-testing-2022-01-12-1534-distro-default-smithi/
2169
2170
Failures:
2171
    https://tracker.ceph.com/issues/45721
2172
    https://tracker.ceph.com/issues/53843
2173
    https://tracker.ceph.com/issues/49483
2174
    https://tracker.ceph.com/issues/50280
2175
    https://tracker.ceph.com/issues/53807
2176
    https://tracker.ceph.com/issues/51904
2177
    https://tracker.ceph.com/issues/53680
2178
2179
2180
Details:
2181
    Bug_#45721: CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_rados.TestWatchNotify.test - Ceph - RADOS
2182
    Bug_#53843: mgr/dashboard: Error - yargs parser supports a minimum Node.js version of 12. - Ceph - Mgr - Dashboard
2183
    Bug_#49483: CommandFailedError: Command failed on smithi104 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/... - Ceph - Orchestrator
2184
    Bug_#50280: cephadm: RuntimeError: uid/gid not found - Ceph
2185
    Bug_#53807: Dead jobs in rados/cephadm/smoke-roleless{...} - Ceph - Orchestrator
2186
    Bug_#51904: AssertionError: wait_for_clean: failed before timeout expired due to down PGs - Ceph - RADOS
2187
    Bug_#53680: ERROR:tasks.rook:'waiting for service removal' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
2188
2189
h3. yuriw-2022-01-12_21:37:22-rados-wip-yuri6-testing-2022-01-12-1131-distro-default-smithi
2190
2191
http://pulpito.front.sepia.ceph.com/yuriw-2022-01-12_21:37:22-rados-wip-yuri6-testing-2022-01-12-1131-distro-default-smithi/
2192
2193
Failures, unrelated:
2194
    https://tracker.ceph.com/issues/53843
2195
    https://tracker.ceph.com/issues/51904
2196
    https://tracker.ceph.com/issues/53807
2197
    https://tracker.ceph.com/issues/53767
2198
    https://tracker.ceph.com/issues/51307
2199
2200
Details:
2201
    Bug_#53843: mgr/dashboard: Error - yargs parser supports a minimum Node.js version of 12. - Ceph - Mgr - Dashboard
2202
    Bug_#51904: AssertionError: wait_for_clean: failed before timeout expired due to down PGs - Ceph - RADOS
2203
    Bug_#53807: Dead jobs in rados/cephadm/smoke-roleless{...} - Ceph - Orchestrator
2204
    Bug_#53767: qa/workunits/cls/test_cls_2pc_queue.sh: killing an osd during thrashing causes timeout - Ceph - RADOS
2205
    Bug_#51307: LibRadosWatchNotify.Watch2Delete fails - Ceph - RADOS
2206
2207
h3. yuriw-2022-01-11_19:17:55-rados-wip-yuri5-testing-2022-01-11-0843-distro-default-smithi
2208
2209
http://pulpito.front.sepia.ceph.com/yuriw-2022-01-11_19:17:55-rados-wip-yuri5-testing-2022-01-11-0843-distro-default-smithi/
2210
2211
Failures:
2212
    https://tracker.ceph.com/issues/53843
2213
    https://tracker.ceph.com/issues/52124
2214
    https://tracker.ceph.com/issues/53827
2215
    https://tracker.ceph.com/issues/53855
2216
    https://tracker.ceph.com/issues/53807
2217
    https://tracker.ceph.com/issues/51076
2218
2219
Details:
2220
    Bug_#53843: mgr/dashboard: Error - yargs parser supports a minimum Node.js version of 12. - Ceph - Mgr - Dashboard
2221
    Bug_#52124: Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
2222
    Bug_#53827: cephadm exited with error code when creating osd. - Ceph - Orchestrator
2223
    Bug_#53855: rados/test.sh hangs while running LibRadosTwoPoolsPP.ManifestFlushDupCount - Ceph - RADOS
2224
    Bug_#53424: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/ - Ceph - Orchestrator
2225
    Bug_#53807: Dead jobs in rados/cephadm/smoke-roleless{...} - Ceph - Orchestrator
2226
    Bug_#51076: "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend. - Ceph - RADOS
2227
2228
h3. yuriw-2022-01-06_15:50:38-rados-wip-yuri8-testing-2022-01-05-1411-distro-default-smithi
2229
2230
http://pulpito.front.sepia.ceph.com/yuriw-2022-01-06_15:50:38-rados-wip-yuri8-testing-2022-01-05-1411-distro-default-smithi/
2231
2232
Failures, unrelated:
2233
    https://tracker.ceph.com/issues/53789
2234
    https://tracker.ceph.com/issues/53422
2235
    https://tracker.ceph.com/issues/50192
2236
    https://tracker.ceph.com/issues/53807
2237
    https://tracker.ceph.com/issues/53424
2238
    
2239
Details:
2240
    Bug_#53789: CommandFailedError (rados/test_python.sh): "RADOS object not found" causes test_rados.TestWatchNotify.test_aio_notify to fail - Ceph - RADOS
2241
    Bug_#53422: tasks.cephfs.test_nfs.TestNFS.test_export_create_with_non_existing_fsname: AssertionError: NFS Ganesha cluster deployment failed - Ceph - Orchestrator
2242
    Bug_#50192: FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get_missing().is_missing(soid) || (it_objects != recovery_state.get_pg_log().get_log().objects.end() && it_objects->second->op == pg_log_entry_t::LOST_REVERT)) - Ceph - RADOS
2243
    Bug_#53807: Hidden ansible output and offline filesystem failures lead to dead jobs - Ceph - CephFS
2244
    Bug_#53424: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/ - Ceph - Orchestrator
2245 7 Laura Flores
2246
h3. lflores-2022-01-05_19:04:35-rados-wip-lflores-mgr-rocksdb-distro-default-smithi
2247
2248
http://pulpito.front.sepia.ceph.com/lflores-2022-01-05_19:04:35-rados-wip-lflores-mgr-rocksdb-distro-default-smithi/
2249
2250
Failures, unrelated:
2251
2252
    https://tracker.ceph.com/issues/53781
2253
    https://tracker.ceph.com/issues/53499
2254
    https://tracker.ceph.com/issues/49287
2255
    https://tracker.ceph.com/issues/53789
2256
    https://tracker.ceph.com/issues/53424
2257
    https://tracker.ceph.com/issues/49483
2258
    https://tracker.ceph.com/issues/53842
2259
2260
Details:
2261
    Bug_#53781: cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal` when testing on orchestrator/03-inventory.e2e-spec.ts - Ceph - Mgr - Dashboard
2262
    Bug_#53499: test_dashboard_e2e.sh Failure: orchestrator/02-hosts-inventory.e2e failed. - Ceph - Mgr - Dashboard
2263
    Bug_#49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
2264
    Bug_#53789: CommandFailedError (rados/test_python.sh): "RADOS object not found" causes test_rados.TestWatchNotify.test_aio_notify to fail - Ceph - RADOS
2265
    Bug_#53424: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/ - Ceph - Orchestrator
2266
    Bug_#49483: CommandFailedError: Command failed on smithi104 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/... - Ceph - Orchestrator
2267
    Bug_#53842: cephadm/mds_upgrade_sequence: KeyError: 'en***' - Ceph - Orchestrator
2268
2269
h3. yuriw-2022-01-04_21:52:15-rados-wip-yuri7-testing-2022-01-04-1159-distro-default-smithi
2270
2271
http://pulpito.front.sepia.ceph.com/yuriw-2022-01-04_21:52:15-rados-wip-yuri7-testing-2022-01-04-1159-distro-default-smithi/
2272
2273
2274
Failures, unrelated:
2275
    https://tracker.ceph.com/issues/53723
2276
    https://tracker.ceph.com/issues/38357
2277
    https://tracker.ceph.com/issues/53294
2278
    https://tracker.ceph.com/issues/53424
2279
    https://tracker.ceph.com/issues/53680
2280
    https://tracker.ceph.com/issues/53782
2281
    https://tracker.ceph.com/issues/53781
2282
    
2283
Details:
2284
    Bug_#53723: Cephadm agent fails to report and causes a health timeout - Ceph - Orchestrator
2285
    Bug_#38357: ClsLock.TestExclusiveEphemeralStealEphemeral failed - Ceph - RADOS
2286
    Bug_#53294: rados/test.sh hangs while running LibRadosTwoPoolsPP.TierFlushDuringFlush - Ceph - RADOS
2287
    Bug_#53424: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/ - Ceph - Orchestrator
2288
    Bug_#53680: ERROR:tasks.rook:'waiting for service removal' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
2289
    Bug_#53782: site-packages/paramiko/transport.py: Invalid packet blocking causes unexpected end of data - Infrastructure
2290
    Bug_#53781: cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal` when testing on orchestrator/03-inventory.e2e-spec.ts - Ceph - Mgr - Dashboard
2291
2292
h3. yuriw-2021-12-23_16:50:03-rados-wip-yuri6-testing-2021-12-22-1410-distro-default-smithi
2293
2294
http://pulpito.front.sepia.ceph.com/yuriw-2021-12-23_16:50:03-rados-wip-yuri6-testing-2021-12-22-1410-distro-default-smithi/
2295
2296
Failures related to #43865:
2297
2298
    6582615 -- Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/archive/ostest && cd $TESTDIR/archive/ostest && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-bluestore 20" ceph_test_objectstore --gtest_filter=*/2:-*SyntheticMatrixC* --gtest_catch_exceptions=0\''
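For local reproduction, the crashed command boils down to running ceph_test_objectstore with a raised open-file limit and bluestore debugging; a rough Python equivalent is sketched below (the working directory and the binary being on PATH are assumptions; the gtest filter and debug flags are copied from the command above):

<pre>
import os
import resource
import subprocess

# Mirror 'ulimit -Sn 16384' from the quoted command (fails if the hard limit is lower).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (16384, hard))

testdir = os.path.expanduser("~/ostest")  # stand-in for $TESTDIR/archive/ostest
os.makedirs(testdir, exist_ok=True)

env = dict(os.environ)
env["CEPH_ARGS"] = ("--no-log-to-stderr "
                    "--log-file {}/ceph_test_objectstore.log "
                    "--debug-bluestore 20").format(testdir)

# Same gtest filter and flags as the failed job.
subprocess.run(["ceph_test_objectstore",
                "--gtest_filter=*/2:-*SyntheticMatrixC*",
                "--gtest_catch_exceptions=0"],
               cwd=testdir, env=env, check=True)
</pre>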
2299
2300
2301
Failures, unrelated:
2302
    https://tracker.ceph.com/issues/53499
2303
    https://tracker.ceph.com/issues/52124
2304
    https://tracker.ceph.com/issues/52652
2305
    https://tracker.ceph.com/issues/53422
2306
    https://tracker.ceph.com/issues/51945
2307
    https://tracker.ceph.com/issues/53424
2308
    https://tracker.ceph.com/issues/53394
2309
    https://tracker.ceph.com/issues/53766
2310
    https://tracker.ceph.com/issues/53767
2311
2312
Details:
2313
    Bug_#53499: test_dashboard_e2e.sh Failure: orchestrator/02-hosts-inventory.e2e failed. - Ceph - Mgr - Dashboard
2314
    Bug_#52124: Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
2315
    Bug_#52652: ERROR: test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest) - Ceph - Mgr
2316
    Bug_#53422: tasks.cephfs.test_nfs.TestNFS.test_export_create_with_non_existing_fsname: AssertionError: NFS Ganesha cluster deployment failed - Ceph - Orchestrator
2317
    Bug_#51945: qa/workunits/mon/caps.sh: Error: Expected return 13, got 0 - Ceph - RADOS
2318
    Bug_#53424: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/ - Ceph - Orchestrator
2319
    Bug_#53394: cephadm: can infer config from mon from different cluster causing file not found error - Ceph - Orchestrator
2320
    Bug_#53766: ceph orch ls: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
2321
    Bug_#53767: qa/workunits/cls/test_cls_2pc_queue.sh: killing an osd during thrashing causes timeout - Ceph - RADOS
2322
2323
h3. yuriw-2021-12-22_22:11:35-rados-wip-yuri3-testing-2021-12-22-1047-distro-default-smithi
2324
2325
http://pulpito.front.sepia.ceph.com/yuriw-2021-12-22_22:11:35-rados-wip-yuri3-testing-2021-12-22-1047-distro-default-smithi/
2326
2327
6580187, 6580436 -- https://tracker.ceph.com/issues/52124
2328
Command failed (workunit test rados/test.sh) on smithi037 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1121b3c9661a85cfbc852d654ea7d22c1d1be751 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' 
2329
2330
6580226, 6580440 -- https://tracker.ceph.com/issues/38455
2331
Test failure: test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest) 
2332
2333
6580242 -- https://tracker.ceph.com/issues/53499
2334
Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi016 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1121b3c9661a85cfbc852d654ea7d22c1d1be751 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh' 
2335
2336
6580330 -- https://tracker.ceph.com/issues/53681
2337
Command failed on smithi185 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:1121b3c9661a85cfbc852d654ea7d22c1d1be751 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2c10ca7c-63a8-11ec-8c31-001a4aab830c -- ceph mon dump -f json' 
2338
2339
6580439 -- https://tracker.ceph.com/issues/53723
2340
timeout expired in wait_until_healthy 
2341
2342
2343
6580078, 6580296 -- https://tracker.ceph.com/issues/53424
2344
hit max job timeout 
2345
2346
6580192
2347
hit max job timeout
2348
2349
Failures, unrelated:
2350
2351
6580187, 6580436 -- https://tracker.ceph.com/issues/52124
2352
6580226, 6580440 -- https://tracker.ceph.com/issues/38455
2353
6580242 -- https://tracker.ceph.com/issues/53499
2354
6580330 -- https://tracker.ceph.com/issues/53681
2355
6580439 -- https://tracker.ceph.com/issues/53723
2356
6580078, 6580296 -- https://tracker.ceph.com/issues/53424
2357
6580192 -- https://tracker.ceph.com/issues/51076
2358
2359
Details:
2360
2361
Bug_#52124: Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
2362
Bug_#38455: Test failure: test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest): RuntimeError: Synthetic exception in serve - Ceph - Mgr
2363
Bug_#53499: test_dashboard_e2e.sh Failure: orchestrator/02-hosts-inventory.e2e failed. - Ceph - Mgr - Dashboard
2364
Bug_#53681: Failed to extract uid/gid for path /var/lib/ceph - Ceph - Orchestrator
2365
Bug_#53723: Cephadm agent fails to report and causes a health timeout - Ceph - Orchestrator
2366
Bug_#53424: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/ - Ceph - Orchestrator
2367
Bug_#51076: "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend. - Ceph - RADOS
2368
2369
h3. yuriw-2021-12-21_15:47:03-rados-wip-yuri10-testing-2021-12-17-1119-distro-default-smithi
2370
2371
http://pulpito.front.sepia.ceph.com/yuriw-2021-12-21_15:47:03-rados-wip-yuri10-testing-2021-12-17-1119-distro-default-smithi/
2372
2373
Failures, unrelated:
2374
2375
6576068 -- https://tracker.ceph.com/issues/53499
2376
6576071 -- https://tracker.ceph.com/issues/53615 -- timeout after healthy
2377
2378
Details:
2379
2380
Bug_#53499: Ceph - Mgr - Dashboard: test_dashboard_e2e.sh Failure: orchestrator/02-hosts-inventory.e2e failed.
2381
Bug_#53448: Ceph - Orchestrator: cephadm: agent failures double reported by two health checks
2382
2383
h3. yuriw-2021-12-17_22:45:37-rados-wip-yuri10-testing-2021-12-17-1119-distro-default-smithi
2384
2385
http://pulpito.front.sepia.ceph.com/yuriw-2021-12-17_22:45:37-rados-wip-yuri10-testing-2021-12-17-1119-distro-default-smithi/
2386
2387
YES
2388
6569383 -- ceph_objectstore_tool test
2389
"2021-12-18T01:25:23.848389+0000 osd.5 (osd.5) 1 : cluster [ERR] map e73 had wrong heartbeat front addr ([v2:0.0.0.0:6844/122637,v1:0.0.0.0:6845/122637] != my [v2:172.21.15.2:6844/122637,v1:172.21.15.2:6845/122637])" in cluster log 
2390
2391
YES
2392
6569399: -- https://tracker.ceph.com/issues/53681
2393
Failed to extract uid/gid
2394
2021-12-18T01:38:21.360 INFO:teuthology.orchestra.run.smithi049.stderr:ERROR: Failed to extract uid/gid for path /var/lib/ceph: Failed command: /bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint stat --init -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph:91fdab49fed87aa0a3dbbceccc27e84ab4f80130 -e NODE_NAME=smithi049 -e CEPH_USE_RANDOM_NONCE=1 quay.ceph.io/ceph-ci/ceph:91fdab49fed87aa0a3dbbceccc27e84ab4f80130 -c %u %g /var/lib/ceph: Error: OCI runtime error: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: process_linux.go:508: setting cgroup config for procHooks process caused: Unit libpod-2b9797e9757bd79dbc4b77f0751f4bf7a30b0618828534759fcebba7819e72f7.scope not found.
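For context, this error comes from cephadm probing the ownership of /var/lib/ceph by running stat inside the target image; a simplified reconstruction of that probe is sketched below (the image tag is a placeholder and this is not cephadm's actual code):

<pre>
import subprocess

IMAGE = "quay.ceph.io/ceph-ci/ceph:<sha1>"  # placeholder; the run used a specific CI image

def extract_uid_gid(image: str = IMAGE, path: str = "/var/lib/ceph"):
    # Run 'stat -c "%u %g" <path>' inside the image, as in the quoted podman command,
    # and parse the owner uid/gid of that path.
    out = subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         image, "-c", "%u %g", path])
    uid, gid = (int(x) for x in out.split())
    return uid, gid
</pre>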
2395
2396
YES
2397
6569450 -- https://tracker.ceph.com/issues/53499
2398
Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi017 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=91fdab49fed87aa0a3dbbceccc27e84ab4f80130 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh' 
2399
2400
YES
2401
6569647 -- https://tracker.ceph.com/issues/53615
2402
2021-12-18T04:14:40.526 INFO:teuthology.orchestra.run.smithi190.stdout:{"status":"HEALTH_WARN","checks":{"CEPHADM_AGENT_DOWN":{"severity":"HEALTH_WARN","summary":{"message":"1 Cephadm Agent(s) are not reporting. Hosts may be offline","count":1},"muted":false},"CEPHADM_FAILED_DAEMON":{"severity":"HEALTH_WARN","summary":{"message":"1 failed cephadm daemon(s)","count":1},"muted":false}},"mutes":[]} 
2403
2021-12-18T04:14:40.929 INFO:journalctl@ceph.mon.a.smithi190.stdout:Dec 18 04:14:40 smithi190 bash[14624]: cluster 2021-12-18T04:14:39.122970+0000 mgr.a (mgr.14152) 343 : cluster [DBG] pgmap v329: 1 pgs: 1 active+clean; 577 KiB data, 18 MiB used, 268 GiB / 268 GiB avail 
2404
2021-12-18T04:14:40.930 INFO:journalctl@ceph.mon.a.smithi190.stdout:Dec 18 04:14:40 smithi190 bash[14624]: audit 2021-12-18T04:14:40.524789+0000 mon.a (mon.0) 349 : audit [DBG] from='client.? 172.21.15.190:0/570196741' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 
2405
2021-12-18T04:14:41.209 INFO:tasks.cephadm:Teardown begin 
2406
2021-12-18T04:14:41.209 ERROR:teuthology.contextutil:Saw exception from nested tasks 
2407
Traceback (most recent call last): 
2408
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/teuthology/contextutil.py", line 33, in nested 
2409
    yield vars 
2410
  File "/home/teuthworker/src/github.com_ceph_ceph-c_91fdab49fed87aa0a3dbbceccc27e84ab4f80130/qa/tasks/cephadm.py", line 1548, in task 
2411
    healthy(ctx=ctx, config=config) 
2412
  File "/home/teuthworker/src/github.com_ceph_ceph-c_91fdab49fed87aa0a3dbbceccc27e84ab4f80130/qa/tasks/ceph.py", line 1469, in healthy 
2413
    manager.wait_until_healthy(timeout=300) 
2414
  File "/home/teuthworker/src/github.com_ceph_ceph-c_91fdab49fed87aa0a3dbbceccc27e84ab4f80130/qa/tasks/ceph_manager.py", line 3146, in wait_until_healthy 
2415
    'timeout expired in wait_until_healthy' 
2416
AssertionError: timeout expired in wait_until_healthy 
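The traceback ends in ceph_manager.wait_until_healthy(); as a rough illustration (not teuthology's actual implementation), the underlying pattern is a bounded poll of ceph health:

<pre>
import json
import subprocess
import time

def wait_until_healthy(timeout: int = 300, interval: int = 5) -> None:
    # Poll 'ceph health --format json' until status is HEALTH_OK or the timeout expires.
    deadline = time.time() + timeout
    while time.time() < deadline:
        out = subprocess.check_output(["ceph", "health", "--format", "json"])
        if json.loads(out).get("status") == "HEALTH_OK":
            return
        # Otherwise HEALTH_WARN/HEALTH_ERR, e.g. CEPHADM_AGENT_DOWN as in the log above.
        time.sleep(interval)
    raise AssertionError("timeout expired in wait_until_healthy")
</pre>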
2417
2418
YES
2419
6569286 -- https://tracker.ceph.com/issues/53424
2420
hit max job timeout
2421
cluster [WRN] Health check failed: Failed to place 2 daemon(s) (CEPHADM_DAEMON_PLACE_FAIL)
2422
2423
YES
2424
6569344 -- https://tracker.ceph.com/issues/53680
2425
ERROR:tasks.rook:'waiting for service removal' reached maximum tries (90) after waiting for 900 seconds
2426
2427
YES
2428
6569400 -- https://tracker.ceph.com/issues/51847
2429
AssertionError: wait_for_recovery: failed before timeout expired
2430
2431
YES
2432
6569567
2433
[ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
2434
2435
2436
Failures to watch:
2437
    6569383 -- ceph_objectstore_tool test
2438
    
2439
Failures unrelated:
2440
    6569399: -- https://tracker.ceph.com/issues/53681
2441
    6569450 -- https://tracker.ceph.com/issues/53499
2442
    6569647 -- might be related to https://tracker.ceph.com/issues/53448
2443
    6569286 -- https://tracker.ceph.com/issues/53424
2444
    6569344 -- https://tracker.ceph.com/issues/53680
2445
    6569400 -- might be related to https://tracker.ceph.com/issues/51847
2446
    6569567 -- Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
2447
    
2448
Details:
2449
    Bug_#53681: Ceph - Orchestrator: Failed to extract uid/gid for path /var/lib/ceph
2450
    Bug_#53499: Ceph - Mgr - Dashboard: test_dashboard_e2e.sh Failure: orchestrator/02-hosts-inventory.e2e failed.
2451
    Bug_#53448: Ceph - Orchestrator: cephadm: agent failures double reported by two health checks
2452
    Bug_#53424: Ceph - Orchestrator: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/
2453
    Bug_#53680: Ceph - Orchestrator: ERROR:tasks.rook:'waiting for service removal' reached maximum tries (90) after waiting for 900 seconds
2454
    Bug_#51847: Ceph - RADOS: A PG in "incomplete" state may end up in a backfill loop.
2455
2456
h3. lflores-2021-12-19_03:36:08-rados-wip-bluestore-zero-detection-distro-default-smithi
2457
2458
http://pulpito.front.sepia.ceph.com/lflores-2021-12-19_03:36:08-rados-wip-bluestore-zero-detection-distro-default-smithi/
2459
http://pulpito.front.sepia.ceph.com/lflores-2021-12-19_18:26:29-rados-wip-bluestore-zero-detection-distro-default-smithi/
2460
2461
Failures, unrelated:
2462
    6572638 -- timeout expired in wait_until_healthy -- https://tracker.ceph.com/issues/53448
2463
    6572650, 6572644 -- failed in master: http://pulpito.front.sepia.ceph.com/yuriw-2021-12-18_18:14:24-rados-wip-yuriw-master-12.18.21-distro-default-smithi/6569986/
2464
    6572643, 6572648 -- https://tracker.ceph.com/issues/53499
2465
2466
Details:
2467
    Bug_#53448: cephadm: agent failures double reported by two health checks - Ceph - Orchestrator
2468
    Bug_#53499: test_dashboard_e2e.sh Failure: orchestrator/02-hosts-inventory.e2e failed. - Ceph - Mgr - Dashboard
2469
2470
h3. lflores-2021-12-10_05:30:11-rados-wip-primary-balancer-distro-default-smithi
2471
2472
http://pulpito.front.sepia.ceph.com/lflores-2021-12-10_05:30:11-rados-wip-primary-balancer-distro-default-smithi
2473
2474
2475
Failures:
2476
2477
http://pulpito.front.sepia.ceph.com/lflores-2021-12-10_05:30:11-rados-wip-primary-balancer-distro-default-smithi/6556651/ -- src/osd/OSDMap.cc: 5835: FAILED ceph_assert(num_down_in_osds <= num_in_osds)
2478
http://pulpito.front.sepia.ceph.com/lflores-2021-12-10_05:30:11-rados-wip-primary-balancer-distro-default-smithi/6556696/ -- Invalid argument Failed to validate Drive Group: OSD spec needs a `placement` key.
2479
http://pulpito.front.sepia.ceph.com/lflores-2021-12-10_05:30:11-rados-wip-primary-balancer-distro-default-smithi/6556710/ -- Invalid argument Failed to validate Drive Group: OSD spec needs a `placement` key.
2480
http://pulpito.front.sepia.ceph.com/lflores-2021-12-10_05:30:11-rados-wip-primary-balancer-distro-default-smithi/6556544/ -- Invalid argument Failed to validate Drive Group: OSD spec needs a `placement` key. (see the spec sketch after this list)
2481
http://pulpito.front.sepia.ceph.com/lflores-2021-12-10_05:30:11-rados-wip-primary-balancer-distro-default-smithi/6556501/ -- osd.3 420 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.5218.0:7856 216.13 216:c84c9e4f:test-rados-api-smithi012-38462-88::foo:head [tier-flush] snapc 0=[] ondisk+read+ignore_cache+known_if_redirected+supports_pool_eio e419)
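Three of the failures above reject the Drive Group because the OSD service spec has no placement section; a minimal spec that would satisfy that particular check might be generated as sketched below (names are illustrative and the device selection is just an example, not a recommendation):

<pre>
import yaml  # PyYAML

spec = {
    "service_type": "osd",
    "service_id": "example_drive_group",   # illustrative name
    "placement": {"host_pattern": "*"},    # the section the validator complained about
    "spec": {"data_devices": {"all": True}},
}

with open("osd_spec.yaml", "w") as f:
    yaml.safe_dump(spec, f)

# Then apply it with the orchestrator, e.g.: ceph orch apply -i osd_spec.yaml
</pre>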
2482
2483
h3. yuriw-2021-12-09_00:18:57-rados-wip-yuri-testing-2021-12-08-1336-distro-default-smithi
2484
2485
http://pulpito.front.sepia.ceph.com/yuriw-2021-12-09_00:18:57-rados-wip-yuri-testing-2021-12-08-1336-distro-default-smithi 
2486
 
2487
Failures, unrelated: 
2488
 
2489
    6553716, 6553740, 6553788, 6553822, 6553844, 6553876, 6553930, 6553953, 6553982, 6554000, 6554035, 6554063, 6554085 -- https://tracker.ceph.com/issues/53487 
2490
    6553768, 6553897 -- failed in recent master baseline: http://pulpito.front.sepia.ceph.com/yuriw-2021-12-07_00:28:11-rados-wip-master_12.6.21-distro-default-smithi/6549263/ 
2491
    6553774 -- https://tracker.ceph.com/issues/50280 
2492
    6553780, 6553993 -- https://tracker.ceph.com/issues/53499 
2493
    6553781, 6553994 -- https://tracker.ceph.com/issues/53501 
2494
    6554077 -- https://tracker.ceph.com/issues/51904 
2495
    6553724 -- https://tracker.ceph.com/issues/52657 
2496
    6553853 -- infrastructure failure 
2497
 
2498
Details: 
2499
 
2500
Bug_#53487: qa: mount error 22 = Invalid argument - Ceph - CephFS 
2501
Bug_#50280: cephadm: RuntimeError: uid/gid not found - Ceph 
2502
Bug_#53499: test_dashboard_e2e.sh Failure: orchestrator/02-hosts-inventory.e2e failed. - Ceph - Mgr - Dashboard
2503
Bug_#53501: Exception when running 'rook' task. - Ceph - Orchestrator 
2504
Bug_#51904: AssertionError: wait_for_clean: failed before timeout expired due to down PGs - Ceph - RADOS 
2505
Bug_#52657: MOSDPGLog::encode_payload(uint64_t): Assertion `HAVE_FEATURE(features, SERVER_NAUTILUS)' - Ceph - RADOS
2506
2507
h3. yuriw-2021-11-20_18:00:22-rados-wip-yuri6-testing-2021-11-20-0807-distro-basic-smithi
2508
2509
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-20_18:00:22-rados-wip-yuri6-testing-2021-11-20-0807-distro-basic-smithi/
2510
2511
2512
6516255, 6516370, 6516487, 6516611, 6516729, 6516851, 6516967 -- not this exact Tracker, but similar: https://tracker.ceph.com/issues/46398 -- Command failed on smithi117 with status 5: 'sudo systemctl stop ceph-5f34df08-4a33-11ec-8c2c-001a4aab830c@mon.a'
2513
2514
6516264, 6516643 -- https://tracker.ceph.com/issues/50280 -- Command failed on smithi124 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:aaf7014b1112f4ac5ff8a7d19040937c76cc3c26 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9eae1762-4a33-11ec-8c2c-001a4aab830c -- ceph mon dump -f json'
2515
2516
6516453, 6516879 -- https://tracker.ceph.com/issues/53287 -- Test failure: test_standby (tasks.mgr.test_prometheus.TestPrometheus)
2517
2518
6516751 -- seen in the recent master baseline: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-25_15:23:56-rados-wip-yuriw-master-11.24.21-distro-basic-smithi/6526537/ -- Command failed (workunit test rados/test.sh) on smithi017 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=aaf7014b1112f4ac5ff8a7d19040937c76cc3c26 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
2519
2520
6516753 -- https://tracker.ceph.com/issues/51945 -- Command failed (workunit test mon/caps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=aaf7014b1112f4ac5ff8a7d19040937c76cc3c26 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/caps.sh'
2521
2522
6516755 -- https://tracker.ceph.com/issues/53345 -- Test failure: test_daemon_restart (tasks.cephadm_cases.test_cli.TestCephadmCLI)
2523
2524
6516787, 6516362 -- https://tracker.ceph.com/issues/53353 -- Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi123 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=aaf7014b1112f4ac5ff8a7d19040937c76cc3c26 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
2525
2526
6516903 -- Command failed (workunit test scrub/osd-scrub-repair.sh) on smithi179 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=aaf7014b1112f4ac5ff8a7d19040937c76cc3c26 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh'
2527
2528
2529
"nofallback" failure: https://tracker.ceph.com/issues/53487
2530
New "e2e" failure: https://tracker.ceph.com/issues/53499
2531
2532
h3. sage-2021-11-29_14:24:46-rados-master-distro-basic-smithi
2533
2534
https://pulpito.ceph.com/sage-2021-11-29_14:24:46-rados-master-distro-basic-smithi/
2535
2536
Failures tracked by:
2537
2538
    [6533605] -- https://tracker.ceph.com/issues/50280 -- Command failed on smithi019 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 18561d74-5125-11ec-8c2d-001a4aab830c -- ceph osd crush tunables default'
2539
2540
2541
    [6533603, 6533628] -- https://tracker.ceph.com/issues/53287 -- Test failure: test_standby (tasks.mgr.test_prometheus.TestPrometheus)
2542
2543
2544
    [6533608, 6533616, 6533622, 6533627, 6533637, 6533642] -- Command failed on smithi042 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:69b04de2932d00fc7fcaa14c718595ec42f18e67 pull'
2545
2546
2547
    [6533614] -- https://tracker.ceph.com/issues/53345 -- Test failure: test_daemon_restart (tasks.cephadm_cases.test_cli.TestCephadmCLI)
2548
2549
2550
    [6533623, 6533641] -- https://tracker.ceph.com/issues/53353 -- Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi067 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=76db81421d171ab44a5bc7e9572f870733e5c8e3 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
2551
2552
2553
    [6533606] -- https://tracker.ceph.com/issues/50106 -- Command failed (workunit test scrub/osd-scrub-repair.sh) on smithi082 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=76db81421d171ab44a5bc7e9572f870733e5c8e3 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh'
2554
2555
2556
Details:
2557
    Bug_#50280: cephadm: RuntimeError: uid/gid not found - Ceph
2558
    Bug_#53287: test_standby (tasks.mgr.test_prometheus.TestPrometheus) fails - Ceph - Mgr
2559
    Bug_#53345: Test failure: test_daemon_restart (tasks.cephadm_cases.test_cli.TestCephadmCLI) - Ceph - Orchestrator
2560
    Bug_#53353: mgr/dashboard: orchestrator/03-inventory.e2e-spec.ts failure - Ceph - Mgr - Dashboard
2561
    Bug_#50106: scrub/osd-scrub-repair.sh: corrupt_scrub_erasure: return 1 - Ceph - RADOS
2562
2563
h3. yuriw-2021-11-16_13:07:14-rados-wip-yuri4-testing-2021-11-15-1306-distro-basic-smithi
2564
2565
    Failures unrelated:
2566
2567
    6506499, 6506504 -- Command failed on smithi059 with status 5: 'sudo systemctl stop ceph-2a24c9ac-46f2-11ec-8c2c-001a4aab830c@mon.a' -- tracked by https://tracker.ceph.com/issues/46035
2568
2569
Details:
2570
2571
Bug_#44824: cephadm: adding osd device is not idempotent - Ceph - Orchestrator
2572
Bug_#52890: lsblk: vg_nvme/lv_4: not a block device - Tools - Teuthology
2573
Bug_#50280: cephadm: RuntimeError: uid/gid not found - Ceph
2574
Bug_#53123: mgr/dashboard: ModuleNotFoundError: No module named 'tasks.mgr.dashboard.test_ganesha' - Ceph - Mgr - Dashboard
2575
Bug_#38048: Teuthology error: mgr/prometheus fails with NewConnectionError - Ceph - Mgr
2576
Bug_#46035: Report the correct error when quay fails - Tools - Teuthology
2577 6 Laura Flores
2578
h3. https://trello.com/c/acNvAaS3/1380-wip-yuri4-testing-2021-11-15-1306
2579
2580
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-16_00:15:25-rados-wip-yuri4-testing-2021-11-15-1306-distro-basic-smithi
2581
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-16_13:07:14-rados-wip-yuri4-testing-2021-11-15-1306-distro-basic-smithi
2582
2583
Master baseline; Nov. 16th: http://pulpito.front.sepia.ceph.com/?branch=wip-yuriw-master-11.12.21
2584
2585
yuriw-2021-11-16_00:15:25-rados-wip-yuri4-testing-2021-11-15-1306-distro-basic-smithi
2586
2587
    Failures to watch:
2588
2589
    [6505076] -- Command failed on smithi073 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7153db66-4692-11ec-8c2c-001a4aab830c -- ceph orch daemon add osd smithi073:vg_nvme/lv_3' -- could be related to https://tracker.ceph.com/issues/44824 or https://tracker.ceph.com/issues/52890
2590
2591
2592
    [6505401, 6505416] -- HTTPSConnectionPool(host='shaman.ceph.com', port=443): Max retries exceeded with url: /api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F20.04%2Fx86_64&sha1=f188280b31ba4dafe6a9cbafd87bae7a4fc52a64 (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fac788801d0>: Failed to establish a new connection: [Errno 110] Connection timed out',)) -- seen in a fairly recent master run according to Sentry
2593
2594
2595
    Failures unrelated:
2596
2597
    [6505055, 6505202] -- Command failed on smithi183 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:f188280b31ba4dafe6a9cbafd87bae7a4fc52a64 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b5fd1072-4690-11ec-8c2c-001a4aab830c -- ceph mon dump -f json' -- tracked by https://tracker.ceph.com/issues/50280
2598
2599
2600
    [6505067, 6505272, 6506500, 6506510] -- Test failure: test_ganesha (unittest.loader._FailedTest) -- tracked by https://tracker.ceph.com/issues/53123
    6505172, 6505376, 6506503, 6506514 -- Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi063 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f188280b31ba4dafe6a9cbafd87bae7a4fc52a64 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh' -- seen in recent master baseline run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-13_15:31:06-rados-wip-yuriw-master-11.12.21-distro-basic-smithi/6501542/
    [6505216, 6505420, 6506507, 6506518] -- Test failure: test_standby (tasks.mgr.test_prometheus.TestPrometheus) -- could be related to this https://tracker.ceph.com/issues/38048; seen in recent master baseline run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-13_15:31:06-rados-wip-yuriw-master-11.12.21-distro-basic-smithi/6501586/
h3. wip-yuri7-testing-2021-11-01-1748

http://pulpito.front.sepia.ceph.com/yuriw-2021-11-03_15:48:02-rados-wip-yuri7-testing-2021-11-01-1748-distro-basic-smithi/

Failures related:

    6481640 -- Command failed (workunit test rados/test_dedup_tool.sh) on smithi090 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c073f2a96ead6e06491abf2e0a39845606181f34 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_dedup_tool.sh' -- related to https://tracker.ceph.com/issues/43481; also seen in a previous run: http://pulpito.front.sepia.ceph.com/yuriw-2021-10-29_17:42:41-rados-wip-yuri7-testing-2021-10-28-1307-distro-basic-smithi/6467499/
Failures unrelated, tracked in:
    [6481465, 6481610, 6481724, 6481690] -- Command failed on smithi188 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c073f2a96ead6e06491abf2e0a39845606181f34 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f3718dce-3da4-11ec-8c28-001a4aab830c -- ceph mon dump -f json' -- tracked in https://tracker.ceph.com/issues/50280; also seen in master: http://pulpito.front.sepia.ceph.com/yuriw-2021-10-29_23:44:51-rados-wip-yuri-master-10.29.21-distro-basic-smithi/6468420/
    6481477 -- Test failure: test_ganesha (unittest.loader._FailedTest) -- seen in master: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488501/
    [6481503, 6481528] -- Command failed (workunit test rados/test.sh) on smithi043 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c073f2a96ead6e06491abf2e0a39845606181f34 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' -- tracked in https://tracker.ceph.com/issues/40926; seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488551/
    6481538 -- Command failed on smithi063 with status 1: "sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c073f2a96ead6e06491abf2e0a39845606181f34 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 29bac0d4-3f81-11ec-8c28-001a4aab830c -- ceph orch apply mon '2;smithi029:172.21.15.29=smithi029;smithi063:172.21.15.63=smithi063'" -- tracked by https://tracker.ceph.com/issues/50280
    6481583 -- reached maximum tries (800) after waiting for 4800 seconds (i.e., one check every 6 seconds) -- tracked by https://tracker.ceph.com/issues/51576; see the wait-loop sketch after this list
    6481608 -- Command failed on smithi174 with status 1: 'sudo fuser -v /var/lib/dpkg/lock-frontend' -- seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-07_14:27:05-upgrade-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6490149/
    6481624 -- Found coredumps on ubuntu@smithi080.front.sepia.ceph.com -- tracked by https://tracker.ceph.com/issues/53206; also seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488712/
    6481677 -- Test failure: test_access_permissions (tasks.mgr.dashboard.test_cephfs.CephfsTest) -- tracked by https://tracker.ceph.com/issues/41949
    6481727 -- Test failure: test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest) -- tracked by https://tracker.ceph.com/issues/52652
    6481755 -- Command failed on smithi096 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c073f2a96ead6e06491abf2e0a39845606181f34 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 704120ac-3f9f-11ec-8c28-001a4aab830c -- ceph osd stat -f json' -- seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488543/
    [6481580, 6481777] -- Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi043 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c073f2a96ead6e06491abf2e0a39845606181f34 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh' -- seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488604/
    [6481449, 6481541, 6481589, 6481639, 6481686, 6481741, 6481785] hit max job timeout -- seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488412/
    [6481819] -- similar to https://tracker.ceph.com/issues/46063
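
About the "reached maximum tries (800) after waiting for 4800 seconds" failure in 6481583: 4800 s over 800 tries works out to one check every 6 seconds. Below is a generic sketch of that polling pattern, only to show where the numbers come from; it is not the actual qa/tasks/radosbench.py or teuthology code, and the names (wait_until, MaxWhileTries, the health predicate) are illustrative.

<pre>
import time

class MaxWhileTries(Exception):
    """Raised when the condition never becomes true within the allowed tries."""

def wait_until(check, tries=800, interval=6):
    # Generic polling loop: 800 tries x 6 s = 4800 s, matching the numbers
    # reported for job 6481583. `check` is any zero-argument callable that
    # returns True once the cluster reaches the desired state.
    for attempt in range(1, tries + 1):
        if check():
            return attempt
        time.sleep(interval)
    raise MaxWhileTries(
        "reached maximum tries (%d) after waiting for %d seconds"
        % (tries, tries * interval)
    )

# Usage sketch (the health predicate is hypothetical):
# wait_until(lambda: cluster_is_healthy(), tries=800, interval=6)
</pre>
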

Details:

    Bug_#50280: cephadm: RuntimeError: uid/gid not found - Ceph
    Bug_#40926: "Command failed (workunit test rados/test.sh)" in rados - Ceph
    Bug_#51576: qa/tasks/radosbench.py times out - Ceph - RADOS
    Bug_#53206: Found coredumps on ubuntu@smithi115.front.sepia.ceph.com | IndexError: list index out of range - Tools - Teuthology
    Bug_#41949: test_access_permissions fails in tasks.mgr.dashboard.test_cephfs.CephfsTest - Ceph - Mgr - Dashboard
    Bug_#52652: ERROR: test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest) - Ceph - Mgr
    Bug_#46063: Could not find the requested service nrpe - Tools - Teuthology

h3. wip-yuri-testing-2021-11-04-0731

http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_15:14:19-rados-wip-yuri-testing-2021-11-04-0731-distro-basic-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-04_20:25:46-rados-wip-yuri-testing-2021-11-04-0731-distro-basic-smithi/

Failures unrelated, tracked in:

    [6485385, 6485585, 6491076, 6491095] Test failure: test_ganesha (unittest.loader._FailedTest)
    -- seen in master: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488501/
    [6485484] Could not reconnect to ubuntu@smithi072.front.sepia.ceph.com
    -- potentially related to https://tracker.ceph.com/issues/21317, but also seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488467/
    [6485488] Command failed on smithi072 with status 1: 'sudo yum install -y kernel'
    -- https://tracker.ceph.com/issues/37657
    [6485616] timeout expired in wait_until_healthy
    -- potentially related to https://tracker.ceph.com/issues/45701; also seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488669/
    [6485669] Command failed (workunit test rados/test.sh) on smithi072 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7dfe919ae09055d470f758696db643d04ca0f304 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
    -- seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488551/
    [6491087, 6491102, 6485685] Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi013 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7dfe919ae09055d470f758696db643d04ca0f304 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
    -- seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488803/
    [6485698] Command failed (workunit test osd/osd-rep-recov-eio.sh) on smithi090 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7dfe919ae09055d470f758696db643d04ca0f304 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-rep-recov-eio.sh'
    -- seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/
    [6485733] Found coredumps on ubuntu@smithi038.front.sepia.ceph.com
    -- seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488712/
    [6485487, 6485492] SSH connection to smithi072 was lost: 'rpm -q kernel --last | head -n 1'
    -- seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488521/
    [6485497, 6485547, 6485594, 6485649, 6485693] hit max job timeout
    -- seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488412/
Failures untracked, but likely not related:
    [6485465, 6485471] 'get_status smithi050.front.sepia.ceph.com' reached maximum tries (10) after waiting for 32.5 seconds
    -- similar test passed in master: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488581/; http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488587/
    [6485513] machine smithi072.front.sepia.ceph.com is locked by scheduled_teuthology@teuthology, not scheduled_yuriw@teuthology
    -- similar test passed in master: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488630/
    [6485589] Command failed (workunit test cls/test_cls_lock.sh) on smithi038 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7dfe919ae09055d470f758696db643d04ca0f304 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_lock.sh'
    -- same test passed in recent master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488706/; however, failed in a past master run: http://pulpito.front.sepia.ceph.com/teuthology-2021-09-26_07:01:03-rados-master-distro-basic-gibba/6408301/
    [6485485] Error reimaging machines: 500 Server Error: Internal Server Error for url: http://fog.front.sepia.ceph.com/fog/host/191/task
    -- similar test passed in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488601/
    [6485495] {'smithi072.front.sepia.ceph.com': {'changed': False, 'msg': 'Data could not be sent to remote host "smithi072.front.sepia.ceph.com". Make sure this host can be reached over ssh: Warning: Permanently added \'smithi072.front.sepia.ceph.com,172.21.15.72\' (ECDSA) to the list of known hosts.\r\nubuntu@smithi072.front.sepia.ceph.com: Permission denied (publickey,password,keyboard-interactive).\r\n', 'unreachable': True}}
    -- similar test passed in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488611/
    [6485498] {'Failure object was': {'smithi072.front.sepia.ceph.com': {'msg': 'non-zero return code', 'cmd': ['semodule', '-i', '/tmp/nrpe.pp'], 'stdout': '', 'stderr': 'libsemanage.semanage_get_lock: Could not get direct transaction lock at /var/lib/selinux/targeted/semanage.trans.LOCK. (Resource temporarily unavailable).\\nsemodule:  Failed on /tmp/nrpe.pp!', 'rc': 1, 'start': '2021-11-05 14:21:58.104549', 'end': '2021-11-05 14:22:03.111651', 'delta': '0:00:05.007102', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'semodule -i /tmp/nrpe.pp', 'warn': True, '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': 'None', 'chdir': 'None', 'executable': 'None', 'creates': 'None', 'removes': 'None', 'stdin': 'None'}}, 'stdout_lines': [], 'stderr_lines': ['libsemanage.semanage_get_lock: Could not get direct transaction lock at /var/lib/selinux/targeted/semanage.trans.LOCK. (Resource temporarily unavailable).', 'semodule:  Failed on /tmp/nrpe.pp!'], '_ansible_no_log': False}}, 'Traceback (most recent call last)': 'File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure log.error(yaml.safe_dump(failure)) File "/home/teuthworker/src/git.ceph.com_git_teuthology_27954452159076fda5642b6c0eb0c4998b99d2e4/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 306, in safe_dump return dump_all([data], stream, Dumper=SafeDumper, **kwds) File "/home/teuthworker/src/git.ceph.com_git_teuthology_27954452159076fda5642b6c0eb0c4998b99d2e4/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 278, in dump_all dumper.represent(data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_27954452159076fda5642b6c0eb0c4998b99d2e4/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 27, in represent node = self.represent_data(data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_27954452159076fda5642b6c0eb0c4998b99d2e4/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_27954452159076fda5642b6c0eb0c4998b99d2e4/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping(\'tag:yaml.org,2002:map\', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_27954452159076fda5642b6c0eb0c4998b99d2e4/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping node_value = self.represent_data(item_value) File "/home/teuthworker/src/git.ceph.com_git_teuthology_27954452159076fda5642b6c0eb0c4998b99d2e4/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_27954452159076fda5642b6c0eb0c4998b99d2e4/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping(\'tag:yaml.org,2002:map\', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_27954452159076fda5642b6c0eb0c4998b99d2e4/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 117, in represent_mapping node_key = self.represent_data(item_key) File "/home/teuthworker/src/git.ceph.com_git_teuthology_27954452159076fda5642b6c0eb0c4998b99d2e4/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 58, in represent_data node = self.yaml_representers[None](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_27954452159076fda5642b6c0eb0c4998b99d2e4/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 231, in represent_undefined raise RepresenterError("cannot represent an object", data)', 'yaml.representer.RepresenterError': "('cannot represent an object', 'changed')"}

    -- similar test passed in master: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488614/ (a minimal reproduction of the RepresenterError is sketched after this list)
    [6485665] Error reimaging machines: Failed to power on smithi038
    -- similar test passed in master: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488783/
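
Note on the [6485498] failure above: there are two problems stacked there. The real one is semodule -i /tmp/nrpe.pp failing because the SELinux transaction lock was unavailable; the second is a logging bug, where failure_log.py passes the Ansible failure object to yaml.safe_dump(), which cannot represent a non-plain string type and raises RepresenterError. A minimal sketch of that second problem, assuming PyYAML is installed; the AnsibleUnsafeText stand-in below is a local dummy, not imported from Ansible.

<pre>
import yaml

class AnsibleUnsafeText(str):
    """Local stand-in for an Ansible str subclass used as a dict key."""

failure = {AnsibleUnsafeText("changed"): True, "msg": "non-zero return code"}

try:
    yaml.safe_dump(failure)
except yaml.representer.RepresenterError as err:
    # SafeDumper only knows plain built-in types, so a str subclass used as a
    # key cannot be represented -- matching "('cannot represent an object', 'changed')"
    print(err)
</pre>
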

Details:

    Bug_#21317: Update VPS with latest distro: RuntimeError: Could not reconnect to ubuntu@vpm129.front.sepia.ceph.com - Infrastructure - Sepia
    Bug_#37657: Command failed on smithi075 with status 1: 'sudo yum install -y kernel' - Ceph
    Bug_#45701: rados/cephadm/smoke-roleless fails due to CEPHADM_REFRESH_FAILED health check - Ceph - Orchestrator

h3. wip-pg-stats

http://pulpito.front.sepia.ceph.com/lflores-2021-11-08_21:48:32-rados-wip-pg-stats-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/lflores-2021-11-08_19:58:04-rados-wip-pg-stats-distro-default-smithi/

Failures unrelated, tracked in:
    [6492777, 6492791, 6491519, 6491534] -- seen in master: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488501/
    [6492779, 6492789, 6491522] -- tracked by https://tracker.ceph.com/issues/53206; seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488712/
    [6492785, 6492798, 6491527, 6491540] -- seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488803/

Details:

    Bug_#53206: Found coredumps on ubuntu@smithi115.front.sepia.ceph.com | IndexError: list index out of range - Tools - Teuthology