REEF

Summaries are ordered latest --> oldest.

https://tracker.ceph.com/issues/65159

Failures, unrelated:

1. https://tracker.ceph.com/issues/58917
2. https://tracker.ceph.com/issues/56500
3. https://tracker.ceph.com/issues/58523
4. https://tracker.ceph.com/issues/62136

Details:

1. Package 'python-dev' has no installation candidate
2. assert False, 'failed meta checkpoint for zone=%s' % zone.name (see the sketch after this list)
3. bucket checkpoint timed out waiting to reach incremental sync
4. rgw:verify tests are failing with valgrind error
5. FAIL: test pushing kafka s3 notification on master
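Detail 2 above is the generic rgw multisite checkpoint timeout. As a rough illustration only (not the actual test_multi.py code; `Zone`, `meta_caught_up`, and the retry budget are hypothetical stand-ins), the failure is the terminal assert of a polling loop shaped like this:

```python
import time

class Zone:
    """Hypothetical stand-in for an rgw multisite zone handle."""
    def __init__(self, name):
        self.name = name

def meta_caught_up(zone):
    # Hypothetical probe: in the real suite this would compare the zone's
    # metadata sync status against the master zone's metadata log position.
    return False

def zone_meta_checkpoint(zone, retries=60, delay=5):
    """Poll until metadata sync catches up; otherwise fail with the assert quoted above."""
    for _ in range(retries):
        if meta_caught_up(zone):
            return
        time.sleep(delay)
    # This is the assertion quoted in detail 2:
    assert False, 'failed meta checkpoint for zone=%s' % zone.name
```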

https://tracker.ceph.com/issues/65202

Failures, unrelated:
1. https://tracker.ceph.com/issues/61774
2. https://tracker.ceph.com/issues/63121
3. https://tracker.ceph.com/issues/64208

Details:
1. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
2. KeyValueDB/KVTest.RocksDB_estimate_size tests failing - Ceph - RADOS
3. test_cephadm.sh: Container version mismatch causes job to fail. - Ceph - Orchestrator

https://tracker.ceph.com/issues/65048

Failures, unrelated:
1. https://tracker.ceph.com/issues/61774
2. https://tracker.ceph.com/issues/64208
3. https://tracker.ceph.com/issues/59196
4. https://tracker.ceph.com/issues/63121
5. https://tracker.ceph.com/issues/65128

Details:
1. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
2. test_cephadm.sh: Container version mismatch causes job to fail. - Ceph - Orchestrator
3. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
4. KeyValueDB/KVTest.RocksDB_estimate_size tests failing - Ceph - RADOS
5. Node-exporter deployment fails due to missing container - Ceph - Orchestrator

https://trello.com/c/xEeVJoco/1978-wip-yuri-testing-2024-03-12-1240-reef

Failures, unrelated:
1. https://tracker.ceph.com/issues/62776
2. https://tracker.ceph.com/issues/61774
3. https://tracker.ceph.com/issues/63559
4. https://tracker.ceph.com/issues/64208
5. https://tracker.ceph.com/issues/63121
6. https://tracker.ceph.com/issues/58907
7. https://tracker.ceph.com/issues/62653

Details:
1. rados: cluster [WRN] overall HEALTH_WARN - do not have an application enabled
2. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons
3. reef: Heartbeat crash in osd
4. test_cephadm.sh: Container version mismatch causes job to fail.
5. KeyValueDB/KVTest.RocksDB_estimate_size tests failing - /a/yuriw-2024-03-13_19:26:09-rados-wip-yuri-testing-2024-03-12-1240-reef-distro-default-smithi/7598354
6. OCI runtime error: runc: runc create failed: unable to start container process
7. qa: unimplemented fcntl command: 1036 with fsstress

https://trello.com/c/Na1Iamb0/1968-wip-yuri11-testing-2024-02-28-0950-reef

Failures, unrelated:
1. https://tracker.ceph.com/issues/61774
2. https://tracker.ceph.com/issues/62992
3. https://tracker.ceph.com/issues/64208
4. https://tracker.ceph.com/issues/64695 -- new tracker
5. https://tracker.ceph.com/issues/59196

Details:
1. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
2. Heartbeat crash in reset_timeout and clear_timeout - Ceph - RADOS
3. test_cephadm.sh: Container version mismatch causes job to fail. - Ceph - Orchestrator
4. Aborted signal starting in AsyncConnection::send_message() - Ceph - RADOS
5. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS

https://trello.com/c/9W0yekx0/1959-wip-yuri2-testing-2024-02-16-0829-reef

Failures, unrelated:
1. https://tracker.ceph.com/issues/62992
2. https://tracker.ceph.com/issues/64208
3. https://tracker.ceph.com/issues/64670
4. https://tracker.ceph.com/issues/59196
5. https://tracker.ceph.com/issues/61774

Details:
1. Heartbeat crash in reset_timeout and clear_timeout - Ceph - RADOS
2. test_cephadm.sh: Container version mismatch causes job to fail. - Ceph - Orchestrator
3. LibRadosAioEC.RoundTrip2 hang and pkill - Ceph - RADOS
4. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
5. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS

https://trello.com/c/VQmIA4Tu/1964-wip-yuri8-testing-2024-02-22-0734-reef

Failures, unrelated:
1. https://tracker.ceph.com/issues/61774
2. https://tracker.ceph.com/issues/64208
3. https://tracker.ceph.com/issues/64637 -- new tracker
4. https://tracker.ceph.com/issues/62992
5. https://tracker.ceph.com/issues/52657

Details:
1. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
2. test_cephadm.sh: Container version mismatch causes job to fail. - Ceph - Orchestrator
3. LeakPossiblyLost in BlueStore::_do_write_small() in osd - Ceph - RADOS
4. Heartbeat crash in reset_timeout and clear_timeout - Ceph - RADOS
5. MOSDPGLog::encode_payload(uint64_t): Assertion `HAVE_FEATURE(features, SERVER_NAUTILUS)' - Ceph - RADOS

https://trello.com/c/UYlh0KYN/1953-wip-yuri2-testing-2024-02-12-0808-reef

Failures, unrelated:

1. https://tracker.ceph.com/issues/61774
2. https://tracker.ceph.com/issues/63941
3. https://tracker.ceph.com/issues/64208
4. https://tracker.ceph.com/issues/62992
5. https://tracker.ceph.com/issues/59196

Details:
1. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons
2. rbd/test_librbd_python.sh test failures
3. test_cephadm.sh: Container version mismatch causes job to fail
4. Heartbeat crash in reset_timeout and clear_timeout
5. ceph_test_lazy_omap_stats segfault while waiting for active+clean

https://trello.com/c/pc17d4LG/1940-wip-yuri2-testing-2024-01-25-1327-reef

Failures, unrelated:

1. https://tracker.ceph.com/issues/63941
2. https://tracker.ceph.com/issues/61774
3. https://tracker.ceph.com/issues/64208 -- New tracker (Infra)
4. https://tracker.ceph.com/issues/64214 -- New tracker (Duplicate of https://tracker.ceph.com/issues/61774)
5. https://tracker.ceph.com/issues/52562
6. https://tracker.ceph.com/issues/62776
7. https://tracker.ceph.com/issues/58800

Details:
1. rbd/test_librbd_python.sh test failures
2. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons
3. test_cephadm.sh: Container version mismatch causes job to fail.
4. Health check failed: 1 osds down (OSD_DOWN) in cluster logs.
5. Thrashosds read error injection failed with error ENXIO
6. rados: cluster [WRN] overall HEALTH_WARN - do not have an application enabled
7. ansible: Failed to update apt cache: unknown reason

https://trello.com/c/LwEqRMyO/1932-wip-yuri-testing-2024-01-21-0805-reef-old-wip-yuri-testing-2024-01-18-0746-reef

Failures, unrelated:

7528677, 7528748, 7528951 - Heartbeat crash - Backport #63559: reef: Heartbeat crash in osd - RADOS - Ceph
7528714 - rbd/test_librbd_python.sh test failures - Bug #63941: rbd/test_librbd_python.sh test failures - rbd - Ceph
7528755 - do not have an application enabled - Bug #62776: rados: cluster [WRN] overall HEALTH_WARN - do not have an application enabled - RADOS - Ceph
7528813, 7528814, 7528882 - centos 9 "Leak_StillReachable" - Bug #61774: centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - RADOS - Ceph

Infra:
7528792, 7528935 - Error: Container release squid != cephadm release reef;
7528866 - Error authenticating with smithi
7528752, 7528754, 7528775 - Error reimaging machines
7528777 - SSH connection to smithi196 was lost

https://trello.com/c/VWxiFmmG/1928-wip-yuri11-testing-2024-01-12-1402-reef-old-wip-yuri11-testing-2024-01-12-0739-reef

Failures, unrelated:

1. https://tracker.ceph.com/issues/62992
2. https://tracker.ceph.com/issues/59196
3. https://tracker.ceph.com/issues/63941
4. https://tracker.ceph.com/issues/62119
5. https://tracker.ceph.com/issues/61774

Details:
1. Heartbeat crash in reset_timeout and clear_timeout
2. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
3. rbd/test_librbd_python.sh test failures
4. timeout on reserving replicas - Ceph - RADOS
5. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS

https://trello.com/c/SsUeosPm/1899-wip-yuri4-testing-2023-12-04-1129-reef

https://pulpito.ceph.com/?branch=wip-yuri4-testing-2023-12-04-1129-reef

Failures, unrelated:
1. https://tracker.ceph.com/issues/62777
2. https://tracker.ceph.com/issues/61774
3. https://tracker.ceph.com/issues/59142
4. https://tracker.ceph.com/issues/52155
5. https://tracker.ceph.com/issues/63748
6. https://tracker.ceph.com/issues/44595
7. https://tracker.ceph.com/issues/49287
8. https://tracker.ceph.com/issues/59196
9. https://tracker.ceph.com/issues/62992

Details:
1. rados/valgrind-leaks: expected valgrind issues and found none - Ceph - RADOS
2. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
3. mgr/dashboard: fix e2e for dashboard v3 - Ceph - Mgr - Dashboard
4. crash: pthread_rwlock_rdlock() in queue_want_up_thru - Ceph - RADOS
5. qa/workunits/post-file.sh: Couldn't create directory - Ceph
6. cache tiering: Error: oid 48 copy_from 493 returned error code 2 - Ceph - RADOS
7. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
8. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
9. Heartbeat crash in osd - Ceph - RADOS

18.2.1 Review

Also relevant to this release is https://tracker.ceph.com/issues/63389. At this time, we are evaluating a fix, but have not deemed this bug a blocker.

Tracked in https://tracker.ceph.com/issues/63443#note-1

https://pulpito.ceph.com/yuriw-2023-11-05_15:32:58-rados-reef-release-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2023-11-06_15:47:31-rados-reef-release-distro-default-smithi

Failures:
1. https://tracker.ceph.com/issues/62992
2. https://tracker.ceph.com/issues/61774
3. https://tracker.ceph.com/issues/59196
4. https://tracker.ceph.com/issues/59142
5. https://tracker.ceph.com/issues/62119
6. https://tracker.ceph.com/issues/47589
7. https://tracker.ceph.com/issues/63501

Details:
1. Heartbeat crash in osd - Ceph - RADOS
2. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
3. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
4. mgr/dashboard: fix e2e for dashboard v3 - Ceph - Mgr - Dashboard
5. timeout on reserving replicas - Ceph - RADOS
6. radosbench times out "reached maximum tries (800) after waiting for 4800 seconds" - Ceph - RADOS
7. ceph::common::leak_some_memory() got interpreted as an actual leak - Ceph - RADOS

https://trello.com/c/FCaRadfR/1881-wip-yuri6-testing-2023-11-01-0745-reef

Failures, unrelated:
1. https://tracker.ceph.com/issues/58130
2. https://tracker.ceph.com/issues/63433
3. https://tracker.ceph.com/issues/59196
4. https://tracker.ceph.com/issues/62992
5. https://tracker.ceph.com/issues/61774
6. https://tracker.ceph.com/issues/59142
7. https://tracker.ceph.com/issues/62776
8. https://tracker.ceph.com/issues/44510

Details:
1. LibRadosAio.SimpleWrite hang and pkill - Ceph - RADOS
2. sqlite3.IntegrityError: UNIQUE constraint failed: DeviceHealthMetrics.time, DeviceHealthMetrics.devid - Ceph - Cephsqlite (see the sketch after this list)
3. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
4. Heartbeat crash in osd - Ceph - RADOS
5. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
6. mgr/dashboard: fix e2e for dashboard v3 - Ceph - Mgr - Dashboard
7. rados: cluster [WRN] overall HEALTH_WARN - do not have an application enabled - Ceph - RADOS
8. osd/osd-recovery-space.sh TEST_recovery_test_simple failure - Ceph - RADOS
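Detail 2 above (tracker 63433) is a composite-key collision in the devicehealth SQLite store. A minimal standalone reproduction of that error class, using an in-memory database and a made-up schema that mirrors the (time, devid) key implied by the message:

```python
import sqlite3

# In-memory database; the schema here is illustrative, mirroring only the
# composite (time, devid) key implied by the error message.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE DeviceHealthMetrics (
        time  INTEGER NOT NULL,
        devid TEXT    NOT NULL,
        raw   TEXT,
        PRIMARY KEY (time, devid)
    )
""")
conn.execute("INSERT INTO DeviceHealthMetrics VALUES (1700000000, 'ata-DISK1', '{}')")
try:
    # Re-inserting the same (time, devid) pair trips the constraint:
    conn.execute("INSERT INTO DeviceHealthMetrics VALUES (1700000000, 'ata-DISK1', '{}')")
except sqlite3.IntegrityError as e:
    print(e)  # UNIQUE constraint failed: DeviceHealthMetrics.time, DeviceHealthMetrics.devid
```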

https://trello.com/c/BtScVoyn/1879-wip-yuri-testing-2023-11-01-1538-reef-old-wip-yuri-testing-2023-11-01-0928-reef-old-wip-yuri-testing-2023-10-31-1117-reef-old-wi

https://pulpito.ceph.com/yuriw-2023-11-02_14:18:00-rados-wip-yuri-testing-2023-11-01-1538-reef-distro-default-smithi/

Failures, unrelated:

1. https://tracker.ceph.com/issues/61774
2. https://tracker.ceph.com/issues/63198
3. https://tracker.ceph.com/issues/62713 (Infra)
4. https://tracker.ceph.com/issues/59142

Details:
1. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
2. ceph_manager.ceph:waiting for clean timed out
3. sync && sudo umount -f /var/lib/ceph/osd/ceph
4. mgr/dashboard: fix e2e for dashboard v3 - Ceph - Mgr - Dashboard

https://trello.com/c/T9bt0Ezq/1871-wip-yuri4-testing-2023-10-31-1447-old-wip-yuri4-testing-2023-10-30-1117-old-wip-yuri4-testing-2023-10-27-1200-old-wip-yuri4-test

https://pulpito.ceph.com/?branch=wip-yuri4-testing-2023-10-30-1117
https://pulpito.ceph.com/?branch=wip-yuri4-testing-2023-10-27-1200

Failures, unrelated:
1. https://tracker.ceph.com/issues/50245
2. https://tracker.ceph.com/issues/61774
3. https://tracker.ceph.com/issues/59380
4. https://tracker.ceph.com/issues/62449
5. https://tracker.ceph.com/issues/59142
6. https://tracker.ceph.com/issues/59196
7. https://tracker.ceph.com/issues/62535

Details:
1. TEST_recovery_scrub_2: Not enough recovery started simultaneously - Ceph - RADOS
2. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
3. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RADOS
4. test/cls_2pc_queue: TestCls2PCQueue.MultiProducer and TestCls2PCQueue.AsyncConsumer failure - Ceph - RGW
5. mgr/dashboard: fix e2e for dashboard v3 - Ceph - Mgr - Dashboard
6. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
7. cephadm: wait for healthy state times out because cephadm agent is down - Ceph - Orchestrator

https://trello.com/c/aYu9acRS/1867-wip-yuri-testing-2023-10-18-0812-reef-old-wip-yuri-testing-2023-10-16-1247-reef

https://pulpito.ceph.com/yuriw-2023-10-16_21:58:41-rados-wip-yuri-testing-2023-10-16-1247-reef-distro-default-smithi/

Failures, unrelated:
1. https://tracker.ceph.com/issues/53420
2. https://tracker.ceph.com/issues/62992
3. https://tracker.ceph.com/issues/61774
4. https://tracker.ceph.com/issues/61786
5. https://tracker.ceph.com/issues/62557

Details:
1. ansible: Unable to acquire the dpkg frontend
2. Heartbeat crash in osd
3. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons
4. test_dashboard_e2e.sh: Can't run because no spec files were found; couldn't determine Mocha version
5. Teuthology test failure due to "MDS_CLIENTS_LAGGY" warning

https://trello.com/c/i7Du8QOa/1853-wip-yuri7-testing-2023-10-04-1350-reef

https://pulpito.ceph.com/yuriw-2023-10-06_22:29:11-rados-wip-yuri7-testing-2023-10-04-1350-reef-distro-default-smithi/

Failures, unrelated:
1. https://tracker.ceph.com/issues/59196
2. https://tracker.ceph.com/issues/61774
3. https://tracker.ceph.com/issues/61786
4. https://tracker.ceph.com/issues/62992

Details:
1. ceph_test_lazy_omap_stats segfault while waiting for active+clean
2. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons
3. test_dashboard_e2e.sh: Can't run because no spec files were found; couldn't determine Mocha version
4. Heartbeat crash in osd

Reef v18.2.0

Failures, unrelated:
1. https://tracker.ceph.com/issues/61161
2. https://tracker.ceph.com/issues/61774
3. https://tracker.ceph.com/issues/58946
4. https://tracker.ceph.com/issues/59196
5. https://tracker.ceph.com/issues/52657
6. https://tracker.ceph.com/issues/59172
7. https://tracker.ceph.com/issues/55347

Details:
1. Creating volume group 'vg_nvme' failed - Ceph - Ceph-Ansible
2. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
3. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
4. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
5. MOSDPGLog::encode_payload(uint64_t): Assertion `HAVE_FEATURE(features, SERVER_NAUTILUS)' - Ceph - RADOS
6. test_pool_min_size: AssertionError: wait_for_clean: failed before timeout expired due to down PGs - Ceph - RADOS
7. SELinux Denials during cephadm/workunits/test_cephadm - Infrastructure

Reef RC v18.1.3

https://pulpito.ceph.com/?branch=reef-release

Failures, unrelated:
1. https://tracker.ceph.com/issues/58946
2. https://tracker.ceph.com/issues/61774
3. https://tracker.ceph.com/issues/59196
4. https://tracker.ceph.com/issues/55347

Details:
1. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
2. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
3. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
4. SELinux Denials during cephadm/workunits/test_cephadm - Infrastructure

https://trello.com/c/yN77W25X/1808-wip-yuri2-testing-2023-07-19-1312-reef

https://pulpito.ceph.com/yuriw-2023-07-20_16:07:02-rados-wip-yuri2-testing-2023-07-19-1312-reef-distro-default-smithi

Failures:

7345102 -> https://tracker.ceph.com/issues/58946 -> cephadm: KeyError: 'osdspec_affinity'
7345134 -> https://tracker.ceph.com/issues/58223 -> failure on `sudo fuser -v /var/lib/dpkg/lock-frontend`
7345233 -> https://tracker.ceph.com/issues/55347 -> SELinux Denials during cephadm/workunits/test_cephadm
7345255 -> https://tracker.ceph.com/issues/58946 -> cephadm: KeyError: 'osdspec_affinity'

Dead Jobs:

7345005 -> https://tracker.ceph.com/issues/58497 -> paramiko error (Infra issue)
7345277 -> https://tracker.ceph.com/issues/37660 -> {'Failing rest of playbook due to missing NVMe card'}

https://trello.com/c/SHIyJJdq/1806-wip-yuri6-testing-2023-07-17-0838-reef

https://pulpito.ceph.com/?branch=wip-yuri6-testing-2023-07-17-0838-reef

Failures, unrelated:
7342910 - https://tracker.ceph.com/issues/58946
7342927 - https://tracker.ceph.com/issues/59172
7342958 - https://tracker.ceph.com/issues/55347
7342961 - https://tracker.ceph.com/issues/58946
7342962 - https://tracker.ceph.com/issues/56034 (fixed in main, needs reef backport)

https://trello.com/c/VwWMUkId/1794-wip-yuri11-testing-2023-06-27-0812-reef

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri11-testing-2023-06-27-0812-reef

Failures, unrelated:
1. https://tracker.ceph.com/issues/58946
2. https://tracker.ceph.com/issues/58560

Details:
1. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
2. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure

https://trello.com/c/h3UadXdd/1795-wip-yuri6-testing-2023-06-28-0739-reef

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri6-testing-2023-06-28-0739-reef

Failures, unrelated:
1. https://tracker.ceph.com/issues/59057
2. https://tracker.ceph.com/issues/58946
3. https://tracker.ceph.com/issues/55347

Details:
1. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' - Ceph - RADOS
2. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
3. SELinux Denials during cephadm/workunits/test_cephadm - Infrastructure

https://trello.com/c/3Ltfnbqb/1790-wip-yuri3-testing-2023-06-22-0812-reef

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri3-testing-2023-06-22-0812-reef

Failures, unrelated:
1. https://tracker.ceph.com/issues/59196
2. https://tracker.ceph.com/issues/59057
3. https://tracker.ceph.com/issues/55347
4. https://tracker.ceph.com/issues/61850 -- new tracker
5. https://tracker.ceph.com/issues/57755
6. https://tracker.ceph.com/issues/58946

Details:
1. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
2. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' - Ceph - RADOS
3. SELinux Denials during cephadm/workunits/test_cephadm - Infrastructure
4. LibRadosWatchNotify.AioNotify: malloc(): unaligned tcache chunk detected - Ceph - RADOS
5. task/test_orch_cli: test_cephfs_mirror times out - Ceph - Orchestrator
6. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator

https://trello.com/c/M6wwslJh/1784-wip-yuri8-testing-2023-06-12-1236-reef

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri8-testing-2023-06-12-1236-reef

Failures, unrelated:
1. https://tracker.ceph.com/issues/58946
2. https://tracker.ceph.com/issues/61225
3. https://tracker.ceph.com/issues/59192
4. https://tracker.ceph.com/issues/59196
5. https://tracker.ceph.com/issues/59057

Details:
1. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
2. TestClsRbd.mirror_snapshot failure - Ceph - RBD
3. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RBD
4. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
5. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS

https://trello.com/c/GBYZSCgE/1776-wip-yuri8-testing-2023-06-06-0830-reef-old-wip-yuri8-testing-2023-06-05-1505-reef-old-wip-yuri8-testing-2023-06-04-0746-reef

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri8-testing-2023-06-06-0830-reef

Failures, unrelated:
1. https://tracker.ceph.com/issues/50222
2. https://tracker.ceph.com/issues/59057
3. https://tracker.ceph.com/issues/61578
4. https://tracker.ceph.com/issues/58224
5. https://tracker.ceph.com/issues/61225

Details:
1. osd: 5.2s0 deep-scrub : stat mismatch - Ceph - RADOS
2. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
3. test_dashboard_e2e.sh: Can't run because no spec files were found - Ceph - Mgr - Dashboard
4. cephadm/test_repos.sh: urllib.error.HTTPError: HTTP Error 504: Gateway Timeout - Ceph - Orchestrator
5. TestClsRbd.mirror_snapshot failure - Ceph - RBD

https://trello.com/c/Gr98ykFy/1768-wip-yuri4-testing-2023-05-30-0825-reef

https://pulpito.ceph.com/?branch=wip-yuri4-testing-2023-05-30-0825-reef

Failures, unrelated:
1. https://tracker.ceph.com/issues/61519
2. https://tracker.ceph.com/issues/57754
3. https://tracker.ceph.com/issues/61225
4. https://tracker.ceph.com/issues/59192

Details:
1. mgr/dashboard: fix test_dashboard_e2e.sh failure - Ceph - Mgr - Dashboard
2. test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Infrastructure
3. TestClsRbd.mirror_snapshot failure - Ceph - RBD
4. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS

https://trello.com/c/peDC5BgN/1772-wip-yuri3-testing-2023-05-31-0931-reef

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2023-05-31-0931-reef

Failures, unrelated:
1. https://tracker.ceph.com/issues/57892
2. https://tracker.ceph.com/issues/61225
3. https://tracker.ceph.com/issues/59057
4. https://tracker.ceph.com/issues/55347
5. https://tracker.ceph.com/issues/43863
6. https://tracker.ceph.com/issues/61519
7. https://tracker.ceph.com/issues/59284

Details:
1. sudo dd of=/home/ubuntu/cephtest/valgrind.supp failure during node setup phase - Tools - Teuthology
2. TestClsRbd.mirror_snapshot failure - Ceph - RBD
3. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
4. SELinux Denials during cephadm/workunits/test_cephadm - Infrastructure
5. mkdir: cannot create directory ‘/home/ubuntu/cephtest/archive/audit’: File exists - Tools - Teuthology
6. mgr/dashboard: fix test_dashboard_e2e.sh failure - Ceph - Mgr - Dashboard
7. Missing `/home/ubuntu/cephtest/archive/coredump` file or directory - Tools - Teuthology

Reef RC0

https://pulpito.ceph.com/?sha1=be098f4642e7d4bbdc3f418c5ad703e23d1e9fe0

Failures, unrelated:
1. https://tracker.ceph.com/issues/61402
2. https://tracker.ceph.com/issues/58969
3. https://tracker.ceph.com/issues/55347
4. https://tracker.ceph.com/issues/59057
5. https://tracker.ceph.com/issues/59333
6. https://tracker.ceph.com/issues/61225

Details:
1. test_dashboard_e2e.sh: AssertionError: Timed out retrying after 120000ms: Expected to find content: '/^smithi160$/' within the selector: 'datatable-body-row datatable-body-cell:nth-child(2)' but never did. - Ceph - Mgr - Dashboard
2. test_full_health: _ValError: In `input['fs_map']['filesystems'][0]['mdsmap']`: missing keys: {'max_xattr_size'} - Ceph - Mgr - Dashboard
3. SELinux Denials during cephadm/workunits/test_cephadm - Infrastructure
4. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
5. PgScrubber: timeout on reserving replicas - Ceph - RADOS
6. TestClsRbd.mirror_snapshot failure - Ceph - RBD

https://trello.com/c/EugGvm8i/1767-wip-yuri3-testing-2023-05-26-1329-reef

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2023-05-26-1329-reef

Failures, unrelated:
1. https://tracker.ceph.com/issues/57755
2. https://tracker.ceph.com/issues/58946
3. https://tracker.ceph.com/issues/61225
4. https://tracker.ceph.com/issues/57754
5. https://tracker.ceph.com/issues/59196

Details:
1. task/test_orch_cli: test_cephfs_mirror times out - Ceph - Orchestrator
2. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
3. TestClsRbd.mirror_snapshot failure - Ceph - RBD
4. test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Ceph - RADOS
5. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS

Remaining failures are due to:
- apt-get could not get lock /var/lib/apt/lists/lock
- The repository 'https://apt.kubernetes.io kubernetes-xenial InRelease' is not signed

https://trello.com/c/02a3DzrI/1760-wip-yuri6-testing-2023-05-23-0757-reef

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri6-testing-2023-05-23-0757-reef

Failures, unrelated:
1. https://tracker.ceph.com/issues/59196
2. https://tracker.ceph.com/issues/61225
3. https://tracker.ceph.com/issues/61401 -- new tracker
4. https://tracker.ceph.com/issues/58946
5. https://tracker.ceph.com/issues/58585
6. https://tracker.ceph.com/issues/58560
7. https://tracker.ceph.com/issues/53345
8. https://tracker.ceph.com/issues/57755
9. https://tracker.ceph.com/issues/49888

Details:
1. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
2. TestClsRbd.mirror_snapshot failure - Ceph - RBD
3. AssertionError: machine smithixxx.front.sepia.ceph.com is not locked - Tools - Teuthology
4. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
5. rook: failed to pull kubelet image - Ceph - Orchestrator
6. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
7. Test failure: test_daemon_restart (tasks.cephadm_cases.test_cli.TestCephadmCLI) - Ceph - Orchestrator
8. task/test_orch_cli: test_cephfs_mirror times out - Ceph - Orchestrator
9. rados/singleton: radosbench.py: teuthology.exceptions.MaxWhileTries: reached maximum tries (3650) after waiting for 21900 seconds - Ceph - RADOS
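The MaxWhileTries numbers in detail 9 are simply the retry budget times the poll interval: 3650 tries at a 6-second sleep is 3650 × 6 = 21900 seconds (the same arithmetic explains the "reached maximum tries (800) after waiting for 4800 seconds" radosbench timeout quoted earlier on this page). A minimal sketch of that loop, modeled on teuthology's safe_while helper (names and defaults here are stand-ins, not the real implementation):

```python
import time

class MaxWhileTries(Exception):
    """Stand-in for teuthology.exceptions.MaxWhileTries."""

def wait_until(predicate, tries=3650, sleep=6):
    """Retry `predicate` until it returns True or the try budget runs out."""
    for _ in range(tries):
        if predicate():
            return
        time.sleep(sleep)
    # tries * sleep gives the "waiting for N seconds" figure seen in the logs.
    raise MaxWhileTries(
        'reached maximum tries (%d) after waiting for %d seconds'
        % (tries, tries * sleep)
    )
```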

https://trello.com/c/3wFHWku3/1758-wip-yuri-testing-2023-05-23-0909-reef-old-wip-yuri-testing-2023-05-22-0845-reef

https://pulpito.ceph.com/?branch=wip-yuri-testing-2023-05-22-0845-reef

Failures, unrelated:
1. https://tracker.ceph.com/issues/57755
2. https://tracker.ceph.com/issues/49287
3. https://tracker.ceph.com/issues/58585
4. https://tracker.ceph.com/issues/61384 -- new tracker
5. https://tracker.ceph.com/issues/58223
6. https://tracker.ceph.com/issues/57754
7. https://tracker.ceph.com/issues/61385 -- new tracker
8. https://tracker.ceph.com/issues/58946
9. https://tracker.ceph.com/issues/59196
10. https://tracker.ceph.com/issues/61225
11. https://tracker.ceph.com/issues/61386 -- new tracker

Details:
1. task/test_orch_cli: test_cephfs_mirror times out - Ceph - Orchestrator
2. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
3. rook: failed to pull kubelet image - Ceph - Orchestrator
4. EOF during negotiation - Infrastructure
5. failure on `sudo fuser -v /var/lib/dpkg/lock-frontend` - Infrastructure
6. test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Ceph - RADOS
7. TEST_dump_scrub_schedule fails from "key is query_active: negation:0 # expected: true # in actual: false" - Ceph - RADOS
8. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
9. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
10. TestClsRbd.mirror_snapshot failure - Ceph - RBD
11. TEST_recovery_scrub_2: TEST FAILED WITH 1 ERRORS - Ceph - RADOS

https://trello.com/c/SQrPoAfN/1757-wip-yuri6-testing-2023-05-19-1351-reef

https://pulpito.ceph.com/?branch=wip-yuri6-testing-2023-05-19-1351-reef

Failures, unrelated:
1. https://tracker.ceph.com/issues/58946
2. https://tracker.ceph.com/issues/58585
3. https://tracker.ceph.com/issues/61225
4. https://tracker.ceph.com/issues/57755
5. https://tracker.ceph.com/issues/59678
6. https://tracker.ceph.com/issues/59196

Details:
1. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
2. rook: failed to pull kubelet image - Ceph - Orchestrator
3. TestClsRbd.mirror_snapshot failure - Ceph - RBD
4. task/test_orch_cli: test_cephfs_mirror times out - Ceph - Orchestrator
5. rados/test_envlibrados_for_rocksdb.sh: Error: Unable to find a match: snappy-devel - Ceph - RADOS
6. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS

https://trello.com/c/A5LEquRG/1735-wip-yuri6-testing-2023-04-26-1247-reef-old-wip-yuri6-testing-2023-04-12-0732-reef

https://pulpito.ceph.com/?branch=wip-yuri6-testing-2023-04-26-1247-reef

Failures, unrelated:
1. https://tracker.ceph.com/issues/59049
2. https://tracker.ceph.com/issues/59057
3. https://tracker.ceph.com/issues/59142
4. https://tracker.ceph.com/issues/58797

Details:
1. WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate event - Ceph - RADOS
2. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
3. mgr/dashboard: fix e2e for dashboard v3 - Ceph - Mgr - Dashboard
4. scrub/osd-scrub-dump.sh: TEST_recover_unexpected fails from "ERROR: Unexpectedly low amount of scrub reservations seen during test" - Ceph - RADOS

https://trello.com/c/3CBilFA1/1736-wip-yuri10-testing-2023-04-18-0735-reef-old-wip-yuri10-testing-2023-04-12-1024-reef

https://pulpito.ceph.com/?branch=wip-yuri10-testing-2023-04-18-0735-reef

Failures, unrelated:
1. https://tracker.ceph.com/issues/59049
2. https://tracker.ceph.com/issues/58585
3. https://tracker.ceph.com/issues/59142
4. https://tracker.ceph.com/issues/58560
5. https://tracker.ceph.com/issues/59057
6. https://tracker.ceph.com/issues/57771
7. https://tracker.ceph.com/issues/57755

Details:
1. WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate event - Ceph - RADOS
2. rook: failed to pull kubelet image - Ceph - Orchestrator
3. mgr/dashboard: fix e2e for dashboard v3 - Ceph - Mgr - Dashboard
4. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
5. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
6. orch/cephadm suite: 'TESTDIR=/home/ubuntu/cephtest bash -s' fails - Ceph - Orchestrator
7. task/test_orch_cli: test_cephfs_mirror times out - Ceph - Orchestrator

https://trello.com/c/hOrKi91l/1713-wip-yuri-testing-2023-03-14-0714-reef-old-wip-yuri-testing-2023-03-13-1318-reef

https://pulpito.ceph.com/?branch=wip-yuri-testing-2023-03-14-0714-reef

Failures, unrelated:
1. https://tracker.ceph.com/issues/59123
2. https://tracker.ceph.com/issues/56393
3. https://tracker.ceph.com/issues/59281
4. https://tracker.ceph.com/issues/59282
5. https://tracker.ceph.com/issues/51964
6. https://tracker.ceph.com/issues/59284
7. https://tracker.ceph.com/issues/56445
8. https://tracker.ceph.com/issues/58130
9. https://tracker.ceph.com/issues/59057
10. https://tracker.ceph.com/issues/59285
11. https://tracker.ceph.com/issues/59286
12. https://tracker.ceph.com/issues/58560

Details:
1. Timeout opening channel - Infrastructure
2. failed to complete snap trimming before timeout - Ceph - RADOS
3. JSONDecodeError: Expecting property name enclosed in double quotes - Ceph - Mgr
4. OSError: [Errno 107] Transport endpoint is not connected - Infrastructure
5. qa: test_cephfs_mirror_restart_sync_on_blocklist failure - Ceph - CephFS
6. Missing `/home/ubuntu/cephtest/archive/coredump` file or directory - Tools - Teuthology
7. Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --" - Tools - Teuthology
8. LibRadosAio.SimpleWrite hang and pkill - Ceph - RADOS
9. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
10. mon/mon-last-epoch-clean.sh: TEST_mon_last_clean_epoch failure due to stuck pgs - Ceph - RADOS
11. mon/test_mon_osdmap_prune.sh: test times out after 5+ hours - Ceph - RADOS
12. rook: failed to pull kubelet image - Ceph - Orchestrator

https://trello.com/c/V54yv1s0/1709-wip-yuri4-testing-2023-03-09-1458-reef

https://pulpito.ceph.com/?branch=wip-yuri4-testing-2023-03-09-1458-reef

Failures, unrelated:
1. https://tracker.ceph.com/issues/57771
2. https://tracker.ceph.com/issues/58560
3. https://tracker.ceph.com/issues/58744
4. https://tracker.ceph.com/issues/59057
5. https://tracker.ceph.com/issues/59049
6. https://tracker.ceph.com/issues/58969
7. https://tracker.ceph.com/issues/57755
8. https://tracker.ceph.com/issues/55347

Details:
1. orch/cephadm suite: 'TESTDIR=/home/ubuntu/cephtest bash -s' fails - Ceph - Orchestrator
2. rook: failed to pull kubelet image - Ceph - Orchestrator
3. qa: intermittent nfs test failures at nfs cluster creation - Ceph - CephFS
4. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
5. WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate event - Ceph - RADOS
6. test_full_health: _ValError: In `input['fs_map']['filesystems'][0]['mdsmap']`: missing keys: {'max_xattr_size'} - Ceph - Mgr - Dashboard
7. task/test_orch_cli: test_cephfs_mirror times out - Ceph - Orchestrator
8. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator

Reef baseline

https://shaman.ceph.com/builds/ceph/reef/33b4b31b1b99e4ff173679b906b23158701d6c0b/
https://pulpito.ceph.com/?sha1=33b4b31b1b99e4ff173679b906b23158701d6c0b

Failures:
1. https://tracker.ceph.com/issues/58496
2. https://tracker.ceph.com/issues/47838
3. https://tracker.ceph.com/issues/57755
4. https://tracker.ceph.com/issues/55347
5. https://tracker.ceph.com/issues/58560
6. https://tracker.ceph.com/issues/57771
7. https://tracker.ceph.com/issues/58925
8. https://tracker.ceph.com/issues/58585
9. https://tracker.ceph.com/issues/58926

Details:
1. osd/PeeringState: FAILED ceph_assert(!acting_recovery_backfill.empty()) - Ceph - RADOS
2. mon/test_mon_osdmap_prune.sh: first_pinned != trim_to - Ceph - RADOS
3. task/test_orch_cli: test_cephfs_mirror times out - Ceph - Orchestrator
4. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
5. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
6. orch/cephadm suite: 'TESTDIR=/home/ubuntu/cephtest bash -s' fails - Ceph - Orchestrator
7. rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
8. rook: failed to pull kubelet image - Ceph - Orchestrator
9. _rpm.error: package not installed - Infrastructure
