QUINCY
Summaries are ordered from latest to oldest.
https://trello.com/c/ZjPC9CcN/1820-wip-yuri5-testing-2023-08-08-0807-quincy
https://pulpito-ng.ceph.com/?branch=wip-yuri5-testing-2023-08-08-0807-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/58560
2. https://tracker.ceph.com/issues/61786
3. https://tracker.ceph.com/issues/48502
Details:
1. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
2. test_dashboard_e2e.sh: Can't run because no spec files were found; couldn't determine Mocha version - Ceph - Mgr - Dashboard
3. ERROR: test_exports_on_mgr_restart (tasks.cephfs.test_nfs.TestNFS) - Ceph - CephFS
https://trello.com/c/w1wxAcJO/1814-wip-yuri8-testing-2023-07-24-0819-quincy
https://pulpito.ceph.com/?branch=wip-yuri8-testing-2023-07-24-0819-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/58476
2. https://tracker.ceph.com/issues/61786
3. https://tracker.ceph.com/issues/49287
4. https://tracker.ceph.com/issues/62401 -- new tracker
5. https://tracker.ceph.com/issues/58560
6. https://tracker.ceph.com/issues/61570
Details:
1. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator
2. test_dashboard_e2e.sh: Can't run because no spec files were found; couldn't determine Mocha version - Ceph - Mgr - Dashboard
3. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
4. ObjectStore/StoreTestSpecificAUSize.SpilloverFixedTest/2 fails from l_bluefs_slow_used_bytes not matching the expected value - Ceph - Bluestore
5. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
6. pg_autoscaler warns that a pool has too many pgs when it has the exact right amount - Ceph - Mgr
https://trello.com/c/UFKABPAT/1817-wip-yuri7-testing-2023-07-27-1336-quincy
https://pulpito.ceph.com/?branch=wip-yuri7-testing-2023-07-27-1336-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/58560
2. https://tracker.ceph.com/issues/61786
3. https://tracker.ceph.com/issues/58476
4. https://tracker.ceph.com/issues/61897
Details:
1. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
2. test_dashboard_e2e.sh: Can't run because no spec files were found; couldn't determine Mocha version - Ceph - Mgr - Dashboard
3. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator
4. qa: rados:mgr fails with MDS_CLIENTS_LAGGY
https://trello.com/c/iQLrAz6r/1791-wip-yuri6-testing-2023-06-22-1005-quincy-old-wip-yuri6-testing-2023-06-22-0827-quincy
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri6-testing-2023-06-22-1005-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/58560
2. https://tracker.ceph.com/issues/61786 -- new tracker
3. https://tracker.ceph.com/issues/58476
4. https://tracker.ceph.com/issues/54603
5. https://tracker.ceph.com/issues/53575
6. https://tracker.ceph.com/issues/61787 -- new tracker
Details:
1. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
2. test_dashboard_e2e.sh: Can't run because no spec files were found; couldn't determine Mocha version - Ceph - Mgr - Dashboard
3. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator
4. Valgrind reports memory "Leak_IndirectlyLost" errors on ceph-mon in "buffer::ptr_node::create_hypercombined". - Ceph - RADOS
5. Valgrind reports memory "Leak_PossiblyLost" errors concerning lib64 - Ceph - RADOS
6. Command "ceph --cluster ceph osd dump --format=json" times out when killing OSD - Ceph - RADOS
https://trello.com/c/ja6hN7bU/1769-wip-yuri5-testing-2023-05-30-0828-quincy
https://pulpito.ceph.com/?branch=wip-yuri5-testing-2023-05-30-0828-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/58476
2. https://tracker.ceph.com/issues/59678
3. https://tracker.ceph.com/issues/61225
4. https://tracker.ceph.com/issues/58475
5. https://tracker.ceph.com/issues/61570 -- new tracker
6. https://tracker.ceph.com/issues/49287
Details:
1. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator
2. rados/test_envlibrados_for_rocksdb.sh: Error: Unable to find a match: snappy-devel - Ceph - RADOS
3. TestClsRbd.mirror_snapshot failure - Ceph - RBD
4. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard
5. pg_autoscaler warns that a pool has too many pgs when it has the exact right amount - Ceph - Mgr
6. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
https://trello.com/c/pkxrazzW/1763-wip-yuri3-testing-2023-05-24-1136-quincy-old-wip-yuri3-testing-2023-05-24-0845-quincy
https://pulpito.ceph.com/?branch=wip-yuri3-testing-2023-05-24-1136-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/58476
2. https://tracker.ceph.com/issues/58585
3. https://tracker.ceph.com/issues/58560
4. https://tracker.ceph.com/issues/58587
5. https://tracker.ceph.com/issues/59599
6. https://tracker.ceph.com/issues/58351
7. https://tracker.ceph.com/issues/58475
8. https://tracker.ceph.com/issues/61457 -- new tracker
Details:
1. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator
2. rook: failed to pull kubelet image - Ceph - Orchestrator
3. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
4. test_dedup_tool.sh: test_dedup_object fails when pool 'dedup_chunk_pool' does not exist - Ceph - RADOS
5. osd: cls_refcount unit test failures during upgrade sequence - Ceph - RADOS
6. Module 'devicehealth' has failed: unknown operation - Ceph - Sqlite
7. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard
8. PgScrubber: shard blocked on an object for too long - Ceph - RADOS
https://trello.com/c/llcQOKAa/1755-wip-yuri10-testing-2023-05-18-0815-quincy
https://pulpito.ceph.com/?branch=wip-yuri10-testing-2023-05-18-0815-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/59599
2. https://tracker.ceph.com/issues/58585
3. https://tracker.ceph.com/issues/58476
4. https://tracker.ceph.com/issues/49287
5. https://tracker.ceph.com/issues/58475
6. https://tracker.ceph.com/issues/59678
Details:
1. osd: cls_refcount unit test failures during upgrade sequence - Ceph - RGW
2. rook: failed to pull kubelet image - Ceph - Orchestrator
3. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator
4. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
5. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard
6. rados/test_envlibrados_for_rocksdb.sh: Error: Unable to find a match: snappy-devel - Ceph - RADOS
https://trello.com/c/P52gGcRz/1723-wip-aclamk-bs-elastic-shared-blob-quincy
http://pulpito.front.sepia.ceph.com/?branch=wip-aclamk-bs-elastic-shared-blob-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/58475 -- pending Quincy backport
2. https://tracker.ceph.com/issues/58560
3. https://tracker.ceph.com/issues/55142
4. https://tracker.ceph.com/issues/58476
5. https://tracker.ceph.com/issues/58585
6. https://tracker.ceph.com/issues/48502
Details:
1. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard
2. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
3. [ERR] : Unhandled exception from module 'devicehealth' while running on mgr.gibba002.nzpbzu: disk I/O error - Ceph - Cephsqlite
4. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator
5. rook: failed to pull kubelet image - Ceph - Orchestrator
6. ERROR: test_exports_on_mgr_restart (tasks.cephfs.test_nfs.TestNFS) - Ceph - CephFS
https://pulpito.ceph.com/yuriw-2023-03-24_13:25:17-rados-quincy-release-distro-default-smithi/
Failures:
1. https://tracker.ceph.com/issues/58560
2. https://tracker.ceph.com/issues/58476
3. https://tracker.ceph.com/issues/58475 -- pending Quincy backport
4. https://tracker.ceph.com/issues/49287
5. https://tracker.ceph.com/issues/58585
Details:
1. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
2. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator
3. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard
4. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
5. rook: failed to pull kubelet image - Ceph - Orchestrator
https://trello.com/c/pWVAglAx/1718-wip-yuri3-testing-2023-03-22-1123-quincy-old-wip-yuri3-testing-2023-03-17-1235-quincy
https://pulpito.ceph.com/?branch=wip-yuri3-testing-2023-03-22-1123-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/58585
2. https://tracker.ceph.com/issues/56000
3. https://tracker.ceph.com/issues/58560
4. https://tracker.ceph.com/issues/58475
5. https://tracker.ceph.com/issues/59080
Details:
1. rook: failed to pull kubelet image - Ceph - Orchestrator
2. task/test_nfs: ERROR: Daemon not found: mds.a.smithi060.ujwxef. See `cephadm ls` - Ceph - Orchestrator
3. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
4. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard
5. mclock-config.sh: TEST_profile_disallow_builtin_params_modify fails when $res == $opt_val_new - Ceph - RADOS
https://trello.com/c/gZIwMQTv/1714-wip-yuri10-testing-2023-03-13-1318-quincy
https://pulpito.ceph.com/?branch=wip-yuri10-testing-2023-03-13-1318-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/58585
2. https://tracker.ceph.com/issues/58475
3. https://tracker.ceph.com/issues/49287
4. https://tracker.ceph.com/issues/56000
5. https://tracker.ceph.com/issues/58476
Details:
1. rook: failed to pull kubelet image - Ceph - Orchestrator
2. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard
3. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
4. task/test_nfs: ERROR: Daemon not found: mds.a.smithi060.ujwxef. See `cephadm ls` - Ceph - Orchestrator
5. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator
https://trello.com/c/h2Ci11or/1711-wip-yuri8-testing-2023-03-10-0833-quincy
https://pulpito.ceph.com/?branch=wip-yuri8-testing-2023-03-10-0833-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/58585
2. https://tracker.ceph.com/issues/58475
3. https://tracker.ceph.com/issues/58476
4. https://tracker.ceph.com/issues/58560
Details:
1. rook: failed to pull kubelet image - Ceph - Orchestrator
2. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard
3. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator
4. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
https://trello.com/c/kZNe5IOq/1708-wip-yuri5-testing-2023-03-09-0941-quincy
https://pulpito.ceph.com/?branch=wip-yuri5-testing-2023-03-09-0941-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/58476
2. https://tracker.ceph.com/issues/58475
3. https://tracker.ceph.com/issues/58560
4. https://tracker.ceph.com/issues/58744
5. https://tracker.ceph.com/issues/54369
Details:
1. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator
2. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard
3. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
4. qa: intermittent nfs test failures at nfs cluster creation - Ceph - CephFS
5. mon/test_mon_osdmap_prune.sh: jq .osdmap_first_committed 11 -eq 20 - Ceph - RADOS
https://trello.com/c/nuDSMSOR/1703-wip-yuri6-testing-2023-03-07-1336-quincy-old-wip-yuri6-testing-2023-03-06-1200-quincy
https://pulpito.ceph.com/?branch=wip-yuri6-testing-2023-03-07-1336-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/58744
2. https://tracker.ceph.com/issues/49287
3. https://tracker.ceph.com/issues/50042
4. https://tracker.ceph.com/issues/58585
Details:
1. qa: intermittent nfs test failures at nfs cluster creation - Ceph - CephFS
2. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
3. rados/test.sh: api_watch_notify failures - Ceph - RADOS
4. rook: failed to pull kubelet image - Ceph - Orchestrator
https://trello.com/c/CnzKAewc/1700-wip-yuri3-testing-2023-03-01-0812-quincy
https://pulpito.ceph.com/?branch=wip-yuri3-testing-2023-03-01-0812-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/49287
2. https://tracker.ceph.com/issues/58146
3. https://tracker.ceph.com/issues/49961
Details:
1. failed to write <pid> to cgroup.procs - Ceph - Orchestrator
2. test_cephadm.sh: Error: Error initializing source docker://quay.ceph.io/ceph-ci/ceph:master - Ceph - Orchestrator
3. scrub/osd-recovery-scrub.sh: TEST_recovery_scrub_1 failed - Ceph - RADOS
https://trello.com/c/KUfgqSfy/1694-wip-yuri4-testing-2023-02-22-0817-quincy
https://pulpito.ceph.com/?branch=wip-yuri4-testing-2023-02-22-0817-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/58585
2. https://tracker.ceph.com/issues/58146
3. https://tracker.ceph.com/issues/58837 -- new tracker
4. https://tracker.ceph.com/issues/58915 -- new tracker
5. https://tracker.ceph.com/issues/54750
Details:
1. rook: failed to pull kubelet image - Ceph - Orchestrator
2. test_cephadm.sh: Error: Error initializing source docker://quay.ceph.io/ceph-ci/ceph:master - Ceph - Orchestrator
3. mgr/test_progress.py: test_osd_healthy_recovery fails after timeout - Ceph - RADOS
4. map eXX had wrong heartbeat front addr - Ceph - RADOS
5. crash: PeeringState::Crashed::Crashed(boost::statechart::state<PeeringState::Crashed, PeeringState::PeeringMachine>::my_context): abort - Ceph - RADOS
https://trello.com/c/Ou37fSaW/1690-wip-yuri5-testing-2023-02-17-1400-quincy
https://pulpito.ceph.com/?branch=wip-yuri5-testing-2023-02-17-1400-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/44587
2. https://tracker.ceph.com/issues/58585
3. https://tracker.ceph.com/issues/58146
4. https://tracker.ceph.com/issues/58744
Details:
1. failed to write <pid> to cgroup.procs - Ceph - Orchestrator
2. rook: failed to pull kubelet image - Ceph - Orchestrator
3. test_cephadm.sh: Error: Error initializing source docker://quay.ceph.io/ceph-ci/ceph:master - Ceph - Orchestrator
4. qa: intermittent nfs test failures at nfs cluster creation - Ceph - CephFS
https://trello.com/c/gP0lPHtn/1687-wip-yuri2-testing-2023-02-09-0842-quincy
https://pulpito.ceph.com/?branch=wip-yuri2-testing-2023-02-09-0842-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/58146
2. https://tracker.ceph.com/issues/58585
Details:
1. test_cephadm.sh: Error: Error initializing source docker://quay.ceph.io/ceph-ci/ceph:master - Ceph - Orchestrator
2. rook: failed to pull kubelet image - Ceph - Orchestrator
https://trello.com/c/HXwVRMzB/1684-wip-yuri3-testing-2023-02-16-0752-quincy-old-wip-yuri3-testing-2023-02-07-0852-quincy-old-wip-yuri3-testing-2023-02-06-1147-quin
https://pulpito.ceph.com/?branch=wip-yuri3-testing-2023-02-16-0752-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/58146
2. https://tracker.ceph.com/issues/58585
Details:
1. test_cephadm.sh: Error: Error initializing source docker://quay.ceph.io/ceph-ci/ceph:master - Ceph - Orchestrator
2. rook: failed to pull kubelet image - Ceph - Orchestrator
https://trello.com/c/mU9vQKer/1681-wip-yuri-testing-2023-02-06-1155-quincy-old-wip-yuri-testing-2023-02-04-1345-quincy-old-wip-yuri-testing-2023-02-02-0918-quincy
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri-testing-2023-02-06-1155-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/58146
2. https://tracker.ceph.com/issues/58046 -- pending Quincy backport
3. https://tracker.ceph.com/issues/56788
4. https://tracker.ceph.com/issues/58739
5. https://tracker.ceph.com/issues/58560
6. https://tracker.ceph.com/issues/58585
Details:
1. test_cephadm.sh: Error: Error initializing source docker://quay.ceph.io/ceph-ci/ceph:master - Ceph - Orchestrator
2. qa/workunits/rados/test_librados_build.sh: specify redirect in curl command - Ceph - RADOS
3. crash: void KernelDevice::_aio_thread(): abort - Ceph - Bluestore
4. "Leak_IndirectlyLost" valgrind report on mon.a - Ceph - RADOS
5. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Ceph - RADOS
6. rook: failed to pull kubelet image - Ceph - Orchestrator
https://trello.com/c/6jh0HcBM/1678-wip-yuri7-testing-2023-01-30-1510-quincy-old-wip-yuri7-testing-2023-01-23-1532-quincy
https://pulpito.ceph.com/?branch=wip-yuri7-testing-2023-01-30-1510-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/58585
2. https://tracker.ceph.com/issues/58146
3. https://tracker.ceph.com/issues/58560
4. https://tracker.ceph.com/issues/58265
Details:
1. rook: failed to pull kubelet image - Ceph - Orchestrator
2. test_cephadm.sh: Error: Error initializing source docker://quay.ceph.io/ceph-ci/ceph:master - Ceph - Orchestrator
3. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Ceph
4. TestClsRbd.group_snap_list_max_read failure during upgrade/parallel tests - Ceph - RBD
https://trello.com/c/a9Pfks0y/1666-wip-yuri7-testing-2022-12-09-1107-quincy
https://pulpito.ceph.com/?branch=wip-yuri7-testing-2022-12-09-1107-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/58258
2. https://tracker.ceph.com/issues/58046
3. https://tracker.ceph.com/issues/58265 -- new tracker; unrelated to the PRs in this batch
4. https://tracker.ceph.com/issues/58140
5. https://tracker.ceph.com/issues/58097
6. https://tracker.ceph.com/issues/56000
7. https://tracker.ceph.com/issues/56785
Details:
1. rook: kubelet fails from connection refused - Ceph - Orchestrator
2. qa/workunits/rados/test_librados_build.sh: specify redirect in curl command - Ceph - RADOS
3. TestClsRbd.group_snap_list_max_read failure - Ceph - RBD
4. quay.ceph.io/ceph-ci/ceph: manifest unknown - Ceph - Orchestrator
5. qa/workunits/post-file.sh: kex_exchange_identification: read: Connection reset by peer - Ceph - RADOS
6. task/test_nfs: ERROR: Daemon not found: mds.a.smithi060.ujwxef. See `cephadm ls` - Ceph - Orchestrator
7. crash: void OSDShard::register_and_wake_split_child(PG*): assert(!slot->waiting_for_split.empty()) - Ceph - RADOS
https://trello.com/c/envBR2ox/1654-wip-yuri5-testing-2022-11-18-1554-quincy-old-wip-yuri5-testing-2022-10-19-1308-quincy
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri5-testing-2022-11-18-1554-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/57311
2. https://tracker.ceph.com/issues/58146
3. https://tracker.ceph.com/issues/58046
4. https://tracker.ceph.com/issues/58097
5. https://tracker.ceph.com/issues/56000
6. https://tracker.ceph.com/issues/52321
7. https://tracker.ceph.com/issues/57754
Details:
1. rook: ensure CRDs are installed first - Ceph - Orchestrator
2. test_cephadm.sh: Error: Error initializing source docker://quay.ceph.io/ceph-ci/ceph:master - Ceph - Orchestrator
3. qa/workunits/rados/test_librados_build.sh: specify redirect in curl command - Ceph - RADOS
4. qa/workunits/post-file.sh: kex_exchange_identification: read: Connection reset by peer - Infrastructure
5. task/test_nfs: ERROR: Daemon not found: mds.a.smithi060.ujwxef. See `cephadm ls` - Ceph - Orchestrator
6. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
7. test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Ceph - RADOS
https://trello.com/c/iEU3xOhe/1638-wip-yuri6-testing-2022-09-23-1008-quincy
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri6-testing-2022-09-23-1008-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/57386
2. https://tracker.ceph.com/issues/57311
3. https://tracker.ceph.com/issues/56951
4. https://tracker.ceph.com/issues/57165 -- pending Quincy backport
Details:
1. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
2. rook: ensure CRDs are installed first - Ceph - Orchestrator
3. rook/smoke: Updating cephclusters/rook-ceph is forbidden - Ceph - Orchestrator
4. expected valgrind issues and found none - Ceph - RADOS
https://trello.com/c/CWbOkqWR/1626-wip-yuri10-testing-2022-09-04-0811-quincy
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri10-testing-2022-09-04-0811-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/56951
2. https://tracker.ceph.com/issues/57165
3. https://tracker.ceph.com/issues/57368
4. https://tracker.ceph.com/issues/57290 -- pending Quincy backport
5. https://tracker.ceph.com/issues/57386
6. https://tracker.ceph.com/issues/52124 -- pending Quincy backport
7. https://tracker.ceph.com/issues/49524
Details:
1. rook/smoke: Updating cephclusters/rook-ceph is forbidden - Ceph - Orchestrator
2. expected valgrind issues and found none - Ceph - RADOS
3. The CustomResourceDefinition "installations.operator.tigera.io" is invalid: metadata.annotations: Too long: must have at most 262144 bytes - Ceph - RADOS
4. orch/cephadm: task/test_cephadm failure due to: ERROR: A cluster with the same fsid '00000000-0000-0000-0000-0000deadbeef' already exists. - Ceph - Orchestrator
5. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
6. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
7. ceph_test_rados_delete_pools_parallel didn't start - Ceph - RADOS
https://trello.com/c/s9pGC2JL/1630-wip-yuri6-testing-2022-09-08-0859-quincy-old-wip-yuri6-testing-2022-09-06-1353-quincy
https://pulpito.ceph.com/?branch=wip-yuri6-testing-2022-09-06-1353-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/57165 -- pending Quincy backport
2. https://tracker.ceph.com/issues/57290 -- pending Quincy backport
3. https://tracker.ceph.com/issues/57269
4. https://tracker.ceph.com/issues/49287
5. https://tracker.ceph.com/issues/56951
Details:
1. expected valgrind issues and found none - Ceph - RADOS
2. orch/cephadm: task/test_cephadm failure due to: ERROR: A cluster with the same fsid '00000000-0000-0000-0000-0000deadbeef' already exists. - Ceph - Orchestrator
3. rook: unable to read URL "https://docs.projectcalico.org/manifests/tigera-operator.yaml" - Ceph - Orchestrator
4. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
5. rook/smoke: Updating cephclusters/rook-ceph is forbidden - Ceph - Orchestrator
https://trello.com/c/ZiFOEFfI/1614-wip-yuri-testing-2022-08-23-1120-quincy
https://pulpito.ceph.com/?branch=wip-yuri-testing-2022-08-23-1120-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/57303 -- Pending Quincy backport
2. https://tracker.ceph.com/issues/57269
3. https://tracker.ceph.com/issues/57165 -- Fix under review
4. https://tracker.ceph.com/issues/57270
5. https://tracker.ceph.com/issues/57311
6. https://tracker.ceph.com/issues/49287
Details:
1. rados/cephadm: Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=b34ca7d1c2becd6090874ccda56ef4cd8dc64bf7 - Ceph - Orchestrator
2. rook: unable to read URL "https://docs.projectcalico.org/manifests/tigera-operator.yaml" - Ceph - Orchestrator
3. expected valgrind issues and found none - Ceph - RADOS
4. cephadm: RuntimeError: Failed command: apt-get update: E: The repository 'https://download.ceph.com/debian-octopus jammy Release' does not have a Release file. - Ceph - Orchestrator
5. rook: ensure CRDs are installed first - Ceph - Orchestrator
6. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
https://trello.com/c/cU355Dso/1617-wip-yuri3-testing-2022-08-24-0820-quincy
https://pulpito.ceph.com/?branch=wip-yuri3-testing-2022-08-24-0820-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/57303 -- Pending Quincy backport
2. https://tracker.ceph.com/issues/57269
3. https://tracker.ceph.com/issues/57165 -- Fix under review
4. https://tracker.ceph.com/issues/57270 -- Pending Quincy backport
5. https://tracker.ceph.com/issues/57311
Details:
1. rados/cephadm: Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=b34ca7d1c2becd6090874ccda56ef4cd8dc64bf7 - Ceph - Orchestrator
2. rook: unable to read URL "https://docs.projectcalico.org/manifests/tigera-operator.yaml" - Ceph - Orchestrator
3. expected valgrind issues and found none - Ceph - RADOS
4. cephadm: RuntimeError: Failed command: apt-get update: E: The repository 'https://download.ceph.com/debian-octopus jammy Release' does not have a Release file. - Ceph - Orchestrator
5. rook: ensure CRDs are installed first - Ceph - Orchestrator
https://trello.com/c/4ng3KVzi/1609-wip-yuri7-testing-2022-08-17-0943-quincy
https://pulpito.ceph.com/?branch=wip-yuri7-testing-2022-08-17-0943-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/57303 -- new Tracker; unrelated to PRs in this run
2. https://tracker.ceph.com/issues/52321
3. https://tracker.ceph.com/issues/57303
4. https://tracker.ceph.com/issues/56951
5. https://tracker.ceph.com/issues/57270 -- Pending Quincy backport
6. https://tracker.ceph.com/issues/57165 -- Fix under review; pending Quincy backport
7. https://tracker.ceph.com/issues/49287
Details:
1. rados/cephadm: Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=b34ca7d1c2becd6090874ccda56ef4cd8dc64bf7 - Ceph - Orchestrator
2. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Rook
3. AssertionError: Expected to find element: `cd-modal .badge:not(script,style):cy-contains('/^foo$/'), cd-modal .badge[type='submit'][value~='/^foo$/']`, but never found it. - Ceph - Mgr - Dashboard
4. rook/smoke: Updating cephclusters/rook-ceph is forbidden - Ceph - Orchestrator
5. cephadm: RuntimeError: Failed command: apt-get update: E: The repository 'https://download.ceph.com/debian-octopus jammy Release' does not have a Release file. - Ceph - Orchestrator
6. expected valgrind issues and found none - Ceph - RADOS
7. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
https://trello.com/c/nApVKz07/1598-wip-yuri8-testing-2022-08-03-1028-quincy
https://pulpito.ceph.com/?branch=wip-yuri8-testing-2022-08-03-1028-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/55809
2. https://tracker.ceph.com/issues/52321
3. https://tracker.ceph.com/issues/55854
4. https://tracker.ceph.com/issues/55897
5. https://tracker.ceph.com/issues/56951
Details:
1. "Leak_IndirectlyLost" valgrind report on mon.c - Ceph - RADOS
2. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Rook
3. Datetime AssertionError in test_health_history (tasks.mgr.test_insights.TestInsights) - Ceph - Mgr
4. test_nfs: update of export's access type should not trigger NFS service restart - Ceph - CephFS
5. rook/smoke: Updating cephclusters/rook-ceph is forbidden - Ceph - Orchestrator
https://trello.com/c/vJmeRbjP/1592-wip-yuri7-testing-2022-07-27-0808-quincy
https://pulpito.ceph.com/?branch=wip-yuri7-testing-2022-07-27-0808-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/52321
2. https://tracker.ceph.com/issues/56652
3. https://tracker.ceph.com/issues/52124
4. https://tracker.ceph.com/issues/45721
5. https://tracker.ceph.com/issues/55001
6. https://tracker.ceph.com/issues/56951 -- new Tracker; looks unrelated to the PRs in this run.
Details:
1. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Rook
2. cephadm/test_repos.sh: urllib.error.HTTPError: HTTP Error 504: Gateway Timeout - Ceph - Orchestrator
3. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
4. CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_rados.TestWatchNotify.test - Ceph - RADOS
5. rados/test.sh: Early exit right after LibRados global tests complete - Ceph - RADOS
6. rook/smoke: Updating cephclusters/rook-ceph is forbidden - Ceph - Orchestrator
quincy v17.2.1
https://tracker.ceph.com/issues/55974#note-1
Failures:
1. https://tracker.ceph.com/issues/52321
2. https://tracker.ceph.com/issues/56000
3. https://tracker.ceph.com/issues/53685
4. https://tracker.ceph.com/issues/52124
5. https://tracker.ceph.com/issues/55854
6. https://tracker.ceph.com/issues/53789
Details:
1. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
2. task/test_nfs: ERROR: Daemon not found: mds.a.smithi060.ujwxef. See `cephadm ls` - Ceph - Orchestrator
3. Assertion `HAVE_FEATURE(features, SERVER_OCTOPUS)' failed. - Ceph - RADOS
4. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
5. Datetime AssertionError in test_health_history (tasks.mgr.test_insights.TestInsights) - Ceph - Mgr
6. CommandFailedError (rados/test_python.sh): "RADOS object not found" causes test_rados.TestWatchNotify.test_aio_notify to fail - Ceph - RADOS
https://trello.com/c/pR7udWVA/1559-wip-yuri6-testing-2022-06-16-0651-quincy
https://pulpito.ceph.com/yuriw-2022-06-16_16:41:04-rados-wip-yuri6-testing-2022-06-16-0651-quincy-distro-default-smithi
https://pulpito.ceph.com/yuriw-2022-06-17_13:54:27-rados-wip-yuri6-testing-2022-06-16-0651-quincy-distro-default-smithi/
Failures, unrelated:
1. https://tracker.ceph.com/issues/55808
2. https://tracker.ceph.com/issues/52321
3. https://tracker.ceph.com/issues/53575
4. https://tracker.ceph.com/issues/55741
5. https://tracker.ceph.com/issues/53294
6. https://tracker.ceph.com/issues/55854
7. https://tracker.ceph.com/issues/55986
Details:
1. task/test_nfs: KeyError: 'events' - Ceph - Orchestrator
2. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
3. Valgrind reports memory "Leak_PossiblyLost" errors concerning lib64 - Ceph - RADOS
4. cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal .custom-control-label` when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard
5. rados/test.sh hangs while running LibRadosTwoPoolsPP.TierFlushDuringFlush - Ceph - RADOS
6. Datetime AssertionError in test_health_history (tasks.mgr.test_insights.TestInsights) - Ceph - Mgr
7. cephadm: Test failure: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS) - Ceph - Orchestrator
quincy-release
Failures:
1. https://tracker.ceph.com/issues/52321
2. https://tracker.ceph.com/issues/53685
3. https://tracker.ceph.com/issues/55741
4. https://tracker.ceph.com/issues/56000
Details:
1. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
2. Assertion `HAVE_FEATURE(features, SERVER_OCTOPUS)' failed. - Ceph - RADOS
3. cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal .custom-control-label` when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard
4. task/test_nfs: ERROR: Daemon not found: mds.a.smithi060.ujwxef. See `cephadm ls` - Ceph - Orchestrator
https://trello.com/c/oobgk2KP/1553-wip-yuri4-testing-2022-06-09-1510-quincy
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2022-06-09-1510-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/52321
2. https://tracker.ceph.com/issues/52124
3. https://tracker.ceph.com/issues/45721
4. https://tracker.ceph.com/issues/55741
5. https://tracker.ceph.com/issues/55001
6. https://tracker.ceph.com/issues/55986
Details:
1. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
2. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
3. CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_rados.TestWatchNotify.test - Ceph - RADOS
4. cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal .custom-control-label` when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard
5. rados/test.sh: Early exit right after LibRados global tests complete - Ceph - RADOS
6. cephadm: Test failure: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS) - Ceph - Orchestrator
https://trello.com/c/h3NXlLnF/1543-wip-yuri5-testing-2022-06-02-0825-quincy
http://pulpito.front.sepia.ceph.com/yuriw-2022-06-02_20:24:42-rados-wip-yuri5-testing-2022-06-02-0825-quincy-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-06-03_20:44:47-rados-wip-yuri5-testing-2022-06-02-0825-quincy-distro-default-smithi/
Failures, unrelated:
1. https://tracker.ceph.com/issues/52124
2. https://tracker.ceph.com/issues/52321
3. https://tracker.ceph.com/issues/46877
4. https://tracker.ceph.com/issues/55741
5. https://tracker.ceph.com/issues/55897 -- new tracker
6. https://tracker.ceph.com/issues/54360
Details:
1. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
2. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
3. mon_clock_skew_check: expected MON_CLOCK_SKEW but got none : Seen on octopus - Ceph - RADOS
4. cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal .custom-control-label` when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard
5. test_nfs: update of export's access type should not trigger NFS service restart - Ceph - CephFS
6. Dead job at "Finished running handlers" in rados/cephadm/osds/.../rm-zap-wait - Ceph - Orchestrator
https://trello.com/c/JlmHBRyS/1540-wip-yuri3-testing-2022-06-01-1035-quincy
http://pulpito.front.sepia.ceph.com/yuriw-2022-06-01_23:21:01-rados-wip-yuri3-testing-2022-06-01-1035-quincy-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-06-02_14:59:52-rados-wip-yuri3-testing-2022-06-01-1035-quincy-distro-default-smithi/
Failures, unrelated:
1. https://tracker.ceph.com/issues/52321
2. https://tracker.ceph.com/issues/55741
3. https://tracker.ceph.com/issues/55838 -- new tracker; unrelated to the PR tested in this run
Details:
1. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
2. cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal .custom-control-label` when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard
3. cephadm/osds: Exception with "test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm" - Ceph - Orchestrator
https://trello.com/c/W8AlWvBV/1539-wip-yuri-testing-2022-06-02-0810-quincy-old-wip-yuri-testing-2022-05-31-1642-quincy
http://pulpito.front.sepia.ceph.com/yuriw-2022-06-01_02:28:14-rados-wip-yuri-testing-2022-05-31-1642-quincy-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-06-01_14:00:31-rados-wip-yuri-testing-2022-05-31-1642-quincy-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-06-02_20:23:28-rados-wip-yuri-testing-2022-06-02-0810-quincy-distro-default-smithi/
Failures, unrelated:
1. https://tracker.ceph.com/issues/52321
2. https://tracker.ceph.com/issues/49287
3. https://tracker.ceph.com/issues/52652
4. https://tracker.ceph.com/issues/53575
5. https://tracker.ceph.com/issues/55741
Details:
1. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
2. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
3. ERROR: test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest) - Ceph - Mgr
4. Valgrind reports memory "Leak_PossiblyLost" errors concerning lib64 - Ceph - RADOS
5. cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal .custom-control-label` when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard
https://trello.com/c/MQjZXlbD/1535-wip-yuri2-testing-2022-05-26-1430-quincy-old-wip-yuri2-testing-2022-05-25-1323-quincy
http://pulpito.front.sepia.ceph.com/yuriw-2022-05-26_23:23:48-rados-wip-yuri2-testing-2022-05-26-1430-quincy-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-05-27_13:37:17-rados-wip-yuri2-testing-2022-05-26-1430-quincy-distro-default-smithi/
Failures, unrelated:
1. https://tracker.ceph.com/issues/52321
2. https://tracker.ceph.com/issues/45721
Details:
1. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
2. CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_rados.TestWatchNotify.test - Ceph - RADOS
https://trello.com/c/KaPapKB1/1533-wip-yuri4-testing-2022-05-19-0831-quincy-old-wip-yuri4-testing-2022-05-18-1410-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/52321
2. https://tracker.ceph.com/issues/52657
3. https://tracker.ceph.com/issues/51076
Details:
1. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
2. MOSDPGLog::encode_payload(uint64_t): Assertion `HAVE_FEATURE(features, SERVER_NAUTILUS)' - Ceph - RADOS
3. "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend. - Ceph - RADOS
https://trello.com/c/7XXyAioY/1527-wip-yuri-testing-2022-05-10-1027-quincy
http://pulpito.front.sepia.ceph.com/yuriw-2022-05-12_22:14:26-rados-wip-yuri-testing-2022-05-10-1027-quincy-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-05-14_14:31:51-rados-wip-yuri-testing-2022-05-10-1027-quincy-distro-default-smithi/
Failures, unrelated:
1. https://tracker.ceph.com/issues/52321
2. https://tracker.ceph.com/issues/51076
Details:
1. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
2. "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend. - Ceph - RADOS
https://trello.com/c/iwEbO83e/1518-wip-yuri-testing-2022-04-27-1456-quincy
http://pulpito.front.sepia.ceph.com/yuriw-2022-04-28_14:23:18-rados-wip-yuri-testing-2022-04-27-1456-quincy-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-04-29_20:56:26-rados-wip-yuri-testing-2022-04-27-1456-quincy-distro-default-smithi/
Failures, unrelated:
1. https://tracker.ceph.com/issues/52124
2. https://tracker.ceph.com/issues/55559 -- new tracker; it failed in the first run but passed in the rerun, so it seems unrelated
Details:
1. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
2. osd-backfill-stats.sh fails in TEST_backfill_ec_prim_out - Ceph - RADOS
https://trello.com/c/GSpBtbRm/1512-wip-yuri3-testing-2022-04-22-0534-quincy
http://pulpito.front.sepia.ceph.com/yuriw-2022-04-22_21:06:04-rados-wip-yuri3-testing-2022-04-22-0534-quincy-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-04-25_14:14:44-rados-wip-yuri3-testing-2022-04-22-0534-quincy-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/lflores-2022-04-26_15:57:44-rados-wip-yuri3-testing-2022-04-22-0534-quincy-distro-default-smithi/
Failures, unrelated:
1. https://tracker.ceph.com/issues/54509
2. https://tracker.ceph.com/issues/51076
3. https://tracker.ceph.com/issues/44595
4. https://tracker.ceph.com/issues/54329
5. https://tracker.ceph.com/issues/55443
6. https://tracker.ceph.com/issues/52657
Details:
1. FAILED ceph_assert due to issue manifest API to the original object - Ceph - RADOS
2. "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend. - Ceph - RADOS
3. cache tiering: Error: oid 48 copy_from 493 returned error code -2 - Ceph - RADOS
4. test_nfs.py: NFS Ganesha cluster deployment timeout - Ceph
5. "SELinux denials found.." in rados run - Infrastructure
6. MOSDPGLog::encode_payload(uint64_t): Assertion `HAVE_FEATURE(features, SERVER_NAUTILUS)' - Ceph - RADOS
https://trello.com/c/ZzBTuiz8/1508-wip-yuri-testing-2022-04-13-0703-quincy
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri-testing-2022-04-13-0703-quincy
Failures in the initial run were due to infrastructure issues and were therefore unrelated.
All jobs were green in the final rerun.
https://trello.com/c/kb4IFQLu/1506-wip-yuri11-testing-2022-04-11-1138-quincy
http://pulpito.front.sepia.ceph.com/yuriw-2022-04-11_21:21:54-rados-wip-yuri11-testing-2022-04-11-1138-quincy-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-04-12_15:27:11-rados-wip-yuri11-testing-2022-04-11-1138-quincy-distro-default-smithi/
Failures, unrelated:
1. https://tracker.ceph.com/issues/52124
Details:
1. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
https://trello.com/c/YCs20uZ7/1503-wip-yuri4-testing-2022-04-05-1720-pacific
http://pulpito.front.sepia.ceph.com/yuriw-2022-04-06_14:02:46-rados-wip-yuri4-testing-2022-04-05-1720-pacific-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/lflores-2022-04-07_18:45:23-rados-wip-yuri4-testing-2022-04-05-1720-pacific-distro-default-smithi/
Failures, unrelated:
1. https://tracker.ceph.com/issues/53501
2. https://tracker.ceph.com/issues/49287
3. https://tracker.ceph.com/issues/54071
4. https://tracker.ceph.com/issues/54086
There were also some SELinux denials in several cephadm tests.
Details:
1. Exception when running 'rook' task. - Ceph - Orchestrator
2. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
3. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
4. Permission denied when trying to unlink and open /var/log/ntpstats/... - Tools - Teuthology
https://trello.com/c/wMFylrET/1499-wip-yuri4-testing-2022-03-31-1158-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/52124
2. https://tracker.ceph.com/issues/54029
3. https://tracker.ceph.com/issues/49287
Details:
1. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
2. orch:cephadm/workunits/{agent/on mon_election/connectivity task/test_orch_cli} test failing - Ceph - Orchestrator
3. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
https://trello.com/c/zDTqMLdh/1486-wip-yuri7-testing-2022-03-23-1332-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/52124
2. https://tracker.ceph.com/issues/53855
3. https://tracker.ceph.com/issues/54029
4. https://tracker.ceph.com/issues/50042
https://trello.com/c/E9Caje20/1465-wip-yuri-testing-2022-02-28-0823-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/54029
2. https://tracker.ceph.com/issues/54439
3. https://tracker.ceph.com/issues/50280
Details:
1. orch:cephadm/workunits/{agent/on mon_election/connectivity task/test_orch_cli} test failing - Ceph - Orchestrator
2. LibRadosWatchNotify.WatchNotify2Multi fails - Ceph - RADOS
3. cephadm: RuntimeError: uid/gid not found - Ceph
https://trello.com/c/3G1ufRuW/1458-wip-yuri11-testing-2022-02-21-0831-quincy
Failures, unrelated:
1. https://tracker.ceph.com/issues/52124
2. https://tracker.ceph.com/issues/50280
3. https://tracker.ceph.com/issues/54337
Details:
1. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
2. cephadm: RuntimeError: uid/gid not found - Ceph
3. Selinux denials seen on fs/rados teuthology runs - Infrastructure