Revision 59 (Laura Flores, 03/04/2024 12:20 AM) → Revision 60/62 (Pere Díaz Bou, 03/06/2024 05:04 PM)

h1. QUINCY 

 Summaries are ordered latest to oldest. 

 h3. https://trello.com/c/eHj274bp/1967-wip-yuri3-testing-2024-02-28-0755-quincy 

 https://pulpito.ceph.com/yuriw-2024-03-05_22:36:37-rados-wip-yuri3-testing-2024-02-28-0755-quincy-distro-default-smithi/ 

 Failures, unrelated: 
 1. https://tracker.ceph.com/issues/64314 
 2. https://tracker.ceph.com/issues/58232 
 3. https://tracker.ceph.com/issues/58223 

 Details: 
 1. Failure: "1709683800.0001702 mon.a (mon.0) 517 : cluster [WRN] overall HEALTH_WARN 1 pool(s) do not have an application enabled" in cluster log 
 2. Failure: SSH connection to smithi066 was lost: 'sudo apt-get update' 
 3. Failure: Command failed on smithi190 with status 1: 'sudo fuser -v /var/lib/dpkg/lock-frontend' 

 h3. https://trello.com/c/9YxyBnTn/1966-wip-yuri4-testing-2024-02-27-1111-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/56788 
     2. https://tracker.ceph.com/issues/49287 
     3. https://tracker.ceph.com/issues/63066 

 Details: 
     1. crash: void KernelDevice::_aio_thread(): abort - Ceph - Bluestore 
     2. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator 
     3. rados/objectstore - application not enabled on pool '.mgr' - Ceph - RADOS 

 h3. https://trello.com/c/q0ZDB0OD/1960-wip-yuri5-testing-2024-02-16-1120-quincy 

 https://pulpito.ceph.com/yuriw-2024-02-26_16:54:59-rados-wip-yuri5-testing-2024-02-16-1120-quincy-distro-default-smithi/ 

 Failures, unrelated: 
 1. https://tracker.ceph.com/issues/63066 

 Details: 
 1. rados/objectstore - application not enabled on pool '.mgr' - Ceph - RADOS 
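The "application not enabled on pool '.mgr'" warning (tracker 63066) recurs throughout these runs. For reference, the standard way to clear this class of HEALTH_WARN is to tag the pool with an application via the Ceph CLI; this is a generic sketch of that command, not the fix applied in the suite:

```shell
# Tag the built-in .mgr pool with the "mgr" application so the
# "application not enabled on pool" health warning clears.
ceph osd pool application enable .mgr mgr

# Confirm the application tag is now set on the pool.
ceph osd pool application get .mgr
```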

 h3. https://trello.com/c/Yjrx9ygD/1911-wip-yuri8-testing-2023-12-15-0911 

 https://pulpito.ceph.com/?sha1=6441305d08929b8a72c0322a5c268c64b4a99c65 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/61578 
     2. https://tracker.ceph.com/issues/63748 
     3. https://tracker.ceph.com/issues/64054 -- new tracker 
     4. https://tracker.ceph.com/issues/63066 
     5. https://tracker.ceph.com/issues/62401 
     6. https://tracker.ceph.com/issues/64056 -- new tracker 

 Details: 
     1. test_dashboard_e2e.sh: Can't run because no spec files were found - Ceph - Mgr - Dashboard 
     2. qa/workunits/post-file.sh: Couldn't create directory - Infrastructure 
     3. test failure due to HEALTH_ERR: 2 mgr modules have failed - Ceph - Mgr 
     4. rados/objectstore - application not enabled on pool '.mgr' - Ceph - RADOS 
     5. ObjectStore/StoreTestSpecificAUSize.SpilloverFixedTest/2 fails from l_bluefs_slow_used_bytes not matching the expected value - Ceph - Bluestore 
     6. LibRadosPoolIsInSelfmanagedSnapsMode.FreshInstance failure - Ceph - RADOS 

 h3. https://trello.com/c/RA2In4BS/1921-wip-yuri7-testing-2024-01-05-0730-quincy-old-wip-yuri7-testing-2024-01-03-0857-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri7-testing-2024-01-05-0730-quincy 

 Failures, unrelated: 
 1. https://tracker.ceph.com/issues/61578 
 2. https://tracker.ceph.com/issues/58907 
 3. https://tracker.ceph.com/issues/63748 
 4. https://tracker.ceph.com/issues/55606 
 5. https://tracker.ceph.com/issues/63066 

 Details: 
 1. test_dashboard_e2e.sh: Can't run because no spec files were found; couldn't determine Mocha version 
 2. OCI runtime error: runc: runc create failed: unable to start container process 
 3. qa/workunits/post-file.sh: Couldn't create directory - Infrastructure 
 4. Unhandled exception from module 'devicehealth' while running on mgr.y: unknown - Ceph - Mgr 
 5. rados/objectstore - application not enabled on pool '.mgr' - Ceph - RADOS 

 h3. https://trello.com/c/PbdnG4C3/1918-wip-yuri3-testing-2024-01-02-1236-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri3-testing-2024-01-02-1236-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/61578 
     2. https://tracker.ceph.com/issues/63748 
     3. https://tracker.ceph.com/issues/52657 
     4. https://tracker.ceph.com/issues/58476 
     5. https://tracker.ceph.com/issues/63941 
     6. https://tracker.ceph.com/issues/58739 

 Details: 
     1. test_dashboard_e2e.sh: Can't run because no spec files were found - Ceph - Mgr - Dashboard 
     2. qa/workunits/post-file.sh: Couldn't create directory - Infrastructure 
     3. MOSDPGLog::encode_payload(uint64_t): Assertion `HAVE_FEATURE(features, SERVER_NAUTILUS)' - Ceph - RADOS 
     4. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator 
     5. [quincy]rbd/test_librbd_python.sh test failures - Ceph - RBD 
     6. "Leak_IndirectlyLost" valgrind report on mon.a - Ceph - RADOS 

 h3. https://trello.com/c/RA2In4BS/1921-wip-yuri7-testing-2024-01-03-0857-quincy 

 https://pulpito.ceph.com/yuriw-2024-01-03_21:31:38-rados-wip-yuri7-testing-2024-01-03-0857-quincy-distro-default-smithi/ 

 Failures: 
 1. https://tracker.ceph.com/issues/63938    ---> New tracker (seems to be related to PR#54877) 
 2. https://tracker.ceph.com/issues/61578 
 3. https://tracker.ceph.com/issues/63941    ---> New tracker 
 4. https://tracker.ceph.com/issues/63748 
 5. https://tracker.ceph.com/issues/62975 

 Details: 
 1. rados/objectstore - assert in StoreTestSpecificAUSize.BluestoreRepairSharedBlobTest - Ceph - RADOS 
 2. test_dashboard_e2e.sh: Can't run because no spec files were found - Ceph - Mgr - Dashboard 
 3. rbd/test_librbd_python.sh: test failures - Ceph - RBD 
 4. qa/workunits/post-file.sh: Couldn't create directory - Infrastructure 
 5. site-packages/paramiko/channel.py: OSError: Socket is closed 

 h3. https://trello.com/c/E5nWNGcB/1910-wip-yuri11-testing-2023-12-14-1108-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/63066 
     2. https://tracker.ceph.com/issues/63748 
     3. https://tracker.ceph.com/issues/58476 
     4. https://tracker.ceph.com/issues/61578 
     5. https://tracker.ceph.com/issues/54604 

 Details: 
     1. rados/objectstore - application not enabled on pool '.mgr' - Ceph - RADOS 
     2. qa/workunits/post-file.sh: Couldn't create directory - Infrastructure 
     3. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator 
     4. test_dashboard_e2e.sh: Can't run because no spec files were found - Ceph - Mgr - Dashboard 
     5. objecter_requests workunit during thrashosds test fails with status 8. - Ceph 

 h3. https://trello.com/c/8IqUu9M0/1886-wip-yuri3-testing-2023-11-09-1355-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri3-testing-2023-11-09-1355-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/58476 
     2. https://tracker.ceph.com/issues/61578 
     3. https://tracker.ceph.com/issues/63066 
     4. https://tracker.ceph.com/issues/55443 
     5. https://tracker.ceph.com/issues/62401 

 Details: 
     1. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator 
     2. test_dashboard_e2e.sh: Can't run because no spec files were found - Ceph - Mgr - Dashboard 
     3. rados/objectstore - application not enabled on pool '.mgr' - Ceph - RADOS 
     4. "SELinux denials found.." in rados run - Infrastructure 
     5. ObjectStore/StoreTestSpecificAUSize.SpilloverFixedTest/2 fails from l_bluefs_slow_used_bytes not matching the expected value - Ceph - Bluestore 

 h3. Quincy v17.2.7 validation 

 https://pulpito.ceph.com/yuriw-2023-10-13_20:02:13-rados-quincy-release-distro-default-smithi/ 
 https://pulpito.ceph.com/yuriw-2023-10-15_15:10:04-rados-quincy-release-testing-default-smithi/ 

 Failures: 
     1. https://tracker.ceph.com/issues/61578 
     2. https://tracker.ceph.com/issues/58560 
     3. https://tracker.ceph.com/issues/63066 
     4. https://tracker.ceph.com/issues/56000 
     5. https://tracker.ceph.com/issues/58476 

 Details: 
     1. test_dashboard_e2e.sh: Can't run because no spec files were found - Ceph - Mgr - Dashboard 
     2. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure 
     3. rados/objectstore - application not enabled on pool '.mgr' - Ceph - RADOS 
     4. task/test_nfs: ERROR: Daemon not found: mds.a.smithi060.ujwxef. See `cephadm ls` - Ceph - Orchestrator 
     5. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator 

 h3. https://trello.com/c/LPRzsfat/1864-wip-yuri5-testing-2023-10-11-1125-quincy-old-wip-lflores-testing-2-2023-10-11-1808-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri5-testing-2023-10-11-1125-quincy 

 Failures: 
     1. https://tracker.ceph.com/issues/63066 
     2. https://tracker.ceph.com/issues/56000 
     3. https://tracker.ceph.com/issues/61578 
     4. https://tracker.ceph.com/issues/58560 

 Details: 
     1. rados/objectstore - application not enabled on pool '.mgr' - Ceph - RADOS 
     2. task/test_nfs: ERROR: Daemon not found: mds.a.smithi060.ujwxef. See `cephadm ls` - Ceph - Orchestrator 
     3. test_dashboard_e2e.sh: Can't run because no spec files were found - Ceph - Mgr - Dashboard 
     4. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure 

 h3. https://trello.com/c/OYrxUA2e/1857-wip-lflores-testing-2023-10-09-2254-quincy 

 http://pulpito.front.sepia.ceph.com/?branch=wip-lflores-testing-2023-10-09-2254-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/58476 
     2. https://tracker.ceph.com/issues/61578 
     3. https://tracker.ceph.com/issues/58560 
     4. https://tracker.ceph.com/issues/63066 
     5. https://tracker.ceph.com/issues/49287 

 Details: 
     1. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator 
     2. test_dashboard_e2e.sh: Can't run because no spec files were found - Ceph - Mgr - Dashboard 
     3. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure 
     4. rados/objectstore - application not enabled on pool '.mgr' - Ceph - RADOS 
     5. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator 

 h3. https://trello.com/c/25ShNauq/1855-wip-yuri6-testing-2023-10-06-0904-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri6-testing-2023-10-06-0904-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/61578 
     2. https://tracker.ceph.com/issues/58476 
     3. https://tracker.ceph.com/issues/62401 
     4. https://tracker.ceph.com/issues/63066 
     5. https://tracker.ceph.com/issues/55141 

 Details: 
     1. test_dashboard_e2e.sh: Can't run because no spec files were found - Ceph - Mgr - Dashboard 
     2. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator 
     3. ObjectStore/StoreTestSpecificAUSize.SpilloverFixedTest/2 fails from l_bluefs_slow_used_bytes not matching the expected value - Ceph - Bluestore 
     4. rados/objectstore - application not enabled on pool '.mgr' - Ceph - RADOS 
     5. thrashers/fastread: assertion failure: rollback_info_trimmed_to == head - Ceph - RADOS 

 h3. https://trello.com/c/L3dTGoH1/1854-wip-yuri8-testing-2023-10-05-1127-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri8-testing-2023-10-05-1127-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/61578 
     2. https://tracker.ceph.com/issues/58476 
     3. https://tracker.ceph.com/issues/58560 
     4. https://tracker.ceph.com/issues/63066 

 Details: 
     1. test_dashboard_e2e.sh: Can't run because no spec files were found - Ceph - Mgr - Dashboard 
     2. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator 
     3. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure 
     4. rados/objectstore - application not enabled on pool '.mgr' - Ceph - RADOS 

 h3. https://trello.com/c/1pCzXDL1/1840-wip-yuri8-testing-2023-09-27-0951-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri8-testing-2023-09-27-0951-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/61578 
     2. https://tracker.ceph.com/issues/58560 
     3. https://tracker.ceph.com/issues/43863 
     4. https://tracker.ceph.com/issues/49287 
     5. https://tracker.ceph.com/issues/55809 
     6. https://tracker.ceph.com/issues/56000 
     7. https://tracker.ceph.com/issues/58739 

 Details: 
     1. test_dashboard_e2e.sh: Can't run because no spec files were found - Ceph - Mgr - Dashboard 
     2. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Ceph - RADOS 
     3. mkdir: cannot create directory ‘/home/ubuntu/cephtest/archive/audit’: File exists - Tools - Teuthology 
     4. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator 
     5. "Leak_IndirectlyLost" valgrind report on mon.c - Ceph - RADOS 
     6. task/test_nfs: ERROR: Daemon not found: mds.a.smithi060.ujwxef. See `cephadm ls` - Ceph - Orchestrator 
     7. "Leak_IndirectlyLost" valgrind report on mon.a - Ceph - RADOS 

 h3. https://trello.com/c/EoKQClMJ/1839-wip-yuri6-testing-2023-09-27-0938-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri6-testing-2023-09-27-0938-quincy 

 Failures, unrelated: 
 1. https://tracker.ceph.com/issues/63066 - new tracker 
 2. https://tracker.ceph.com/issues/58560 
 3. https://tracker.ceph.com/issues/61519 
 4. https://tracker.ceph.com/issues/58476 

 Details: 
 1. 7408015\7408027 - application not enabled on pool '.mgr'  
 2. 7408020 - Error: 'codeready-builder-for-rhel-8-x86_64-rpms' does not match a valid repository ID. Use "subscription-manager repos --list" to see valid repositories. 
 3. 7408023\7408026\7408028 - rados/dashboard: fix test_dashboard_e2e.sh failure - Ceph - Mgr - Dashboard 
 4. 7408025 - test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator 


 h3. https://trello.com/c/8NoM07Bg/1845-wip-yuri4-testing-2023-10-02-0826-quincy-old-wip-yuri4-testing-2023-09-29-0928-quincy 

 https://pulpito.ceph.com/yuriw-2023-10-02_19:19:00-rados-wip-yuri4-testing-2023-10-02-0826-quincy-distro-default-smithi/ 

 Failures, related: 
     1. https://pulpito.ceph.com/yuriw-2023-10-02_19:19:00-rados-wip-yuri4-testing-2023-10-02-0826-quincy-distro-default-smithi/7408689 
        test_noautoscale_flag.sh failure - Related to https://github.com/ceph/ceph/pull/53677 

 *Re-Run of autoscale-flag test After including missed commits:* 
 https://pulpito.ceph.com/yuriw-2023-10-04_20:41:04-rados:singleton:all-wip-yuri4-testing-2023-10-02-0826-quincy-distro-default-smithi/ 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/58560 
     2. https://tracker.ceph.com/issues/61786 
     3. https://tracker.ceph.com/issues/58476 

 Details: 
     1. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure 
     2. test_dashboard_e2e.sh: Can't run because no spec files were found; couldn't determine Mocha version - Ceph - Mgr - Dashboard 
     3. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator 

 h3. https://trello.com/c/ZjPC9CcN/1820-wip-yuri5-testing-2023-08-08-0807-quincy 

 https://pulpito-ng.ceph.com/?branch=wip-yuri5-testing-2023-08-08-0807-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/58560 
     2. https://tracker.ceph.com/issues/61786 
     3. https://tracker.ceph.com/issues/48502 

 Details: 
     1. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure 
     2. test_dashboard_e2e.sh: Can't run because no spec files were found; couldn't determine Mocha version - Ceph - Mgr - Dashboard 
     3. ERROR: test_exports_on_mgr_restart (tasks.cephfs.test_nfs.TestNFS) - Ceph - CephFS 

 h3. https://trello.com/c/w1wxAcJO/1814-wip-yuri8-testing-2023-07-24-0819-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri8-testing-2023-07-24-0819-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/58476 
     2. https://tracker.ceph.com/issues/61786 
     3. https://tracker.ceph.com/issues/49287 
     4. https://tracker.ceph.com/issues/62401 -- new tracker 
     5. https://tracker.ceph.com/issues/58560 
     6. https://tracker.ceph.com/issues/61570 

 Details: 
     1. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator 
     2. test_dashboard_e2e.sh: Can't run because no spec files were found; couldn't determine Mocha version - Ceph - Mgr - Dashboard 
     3. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator 
     4. ObjectStore/StoreTestSpecificAUSize.SpilloverFixedTest/2 fails from l_bluefs_slow_used_bytes not matching the expected value - Ceph - Bluestore 
     5. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure 
     6. pg_autoscaler warns that a pool has too many pgs when it has the exact right amount - Ceph - Mgr 

 h3. https://trello.com/c/UFKABPAT/1817-wip-yuri7-testing-2023-07-27-1336-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri7-testing-2023-07-27-1336-quincy 

 Failures, unrelated: 
 1. https://tracker.ceph.com/issues/58560 
 2. https://tracker.ceph.com/issues/61786 
 3. https://tracker.ceph.com/issues/58476 
 4. https://tracker.ceph.com/issues/61897 

 Details: 

 1. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure 
 2. test_dashboard_e2e.sh: Can't run because no spec files were found; couldn't determine Mocha version - Ceph - Mgr - Dashboard 
 3. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator 
 4. qa: rados:mgr fails with MDS_CLIENTS_LAGGY 


 h3. https://trello.com/c/iQLrAz6r/1791-wip-yuri6-testing-2023-06-22-1005-quincy-old-wip-yuri6-testing-2023-06-22-0827-quincy 

 http://pulpito.front.sepia.ceph.com/?branch=wip-yuri6-testing-2023-06-22-1005-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/58560 
     2. https://tracker.ceph.com/issues/61786 -- new tracker 
     3. https://tracker.ceph.com/issues/58476 
     4. https://tracker.ceph.com/issues/54603 
     5. https://tracker.ceph.com/issues/53575 
     6. https://tracker.ceph.com/issues/61787 -- new tracker 

 Details: 
     1. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure 
     2. test_dashboard_e2e.sh: Can't run because no spec files were found; couldn't determine Mocha version - Ceph - Mgr - Dashboard 
     3. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator 
     4. Valgrind reports memory "Leak_IndirectlyLost" errors on ceph-mon in "buffer::ptr_node::create_hypercombined". - Ceph - RADOS 
     5. Valgrind reports memory "Leak_PossiblyLost" errors concerning lib64 - Ceph - RADOS 
     6. Command "ceph --cluster ceph osd dump --format=json" times out when killing OSD - Ceph - RADOS 



 h3. https://trello.com/c/ja6hN7bU/1769-wip-yuri5-testing-2023-05-30-0828-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri5-testing-2023-05-30-0828-quincy 

 Failures, unrelated: 
 1. https://tracker.ceph.com/issues/58476 
 2. https://tracker.ceph.com/issues/59678 
 3. https://tracker.ceph.com/issues/61225 
 4. https://tracker.ceph.com/issues/58475 
 5. https://tracker.ceph.com/issues/61570 -- new tracker 
 6. https://tracker.ceph.com/issues/49287 

 Details: 
 1. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator 
 2. rados/test_envlibrados_for_rocksdb.sh: Error: Unable to find a match: snappy-devel - Ceph - RADOS 
 3. TestClsRbd.mirror_snapshot failure - Ceph - RBD 
 4. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard 
 5. pg_autoscaler warns that a pool has too many pgs when it has the exact right amount - Ceph - Mgr 
 6. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator 

 h3. https://trello.com/c/pkxrazzW/1763-wip-yuri3-testing-2023-05-24-1136-quincy-old-wip-yuri3-testing-2023-05-24-0845-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri3-testing-2023-05-24-1136-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/58476 
     2. https://tracker.ceph.com/issues/58585 
     3. https://tracker.ceph.com/issues/58560 
     4. https://tracker.ceph.com/issues/58587 
     5. https://tracker.ceph.com/issues/59599 
     6. https://tracker.ceph.com/issues/58351 
     7. https://tracker.ceph.com/issues/58475 
     8. https://tracker.ceph.com/issues/61457 -- new tracker 

 Details: 
     1. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator 
     2. rook: failed to pull kubelet image - Ceph - Orchestrator 
     3. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure 
     4. test_dedup_tool.sh: test_dedup_object fails when pool 'dedup_chunk_pool' does not exist - Ceph - RADOS 
     5. osd: cls_refcount unit test failures during upgrade sequence - Ceph - RADOS 
     6. Module 'devicehealth' has failed: unknown operation - Ceph - Sqlite 
     7. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard 
     8. PgScrubber: shard blocked on an object for too long - Ceph - RADOS 

 h3. https://trello.com/c/llcQOKAa/1755-wip-yuri10-testing-2023-05-18-0815-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri10-testing-2023-05-18-0815-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/59599 
     2. https://tracker.ceph.com/issues/58585 
     3. https://tracker.ceph.com/issues/58476 
     4. https://tracker.ceph.com/issues/49287 
     5. https://tracker.ceph.com/issues/58475 
     6. https://tracker.ceph.com/issues/59678 

 Details: 
     1. osd: cls_refcount unit test failures during upgrade sequence - Ceph - RGW 
     2. rook: failed to pull kubelet image - Ceph - Orchestrator 
     3. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator 
     4. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator 
     5. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard 
     6. rados/test_envlibrados_for_rocksdb.sh: Error: Unable to find a match: snappy-devel - Ceph - RADOS 

 h3. https://trello.com/c/P52gGcRz/1723-wip-aclamk-bs-elastic-shared-blob-quincy 

 http://pulpito.front.sepia.ceph.com/?branch=wip-aclamk-bs-elastic-shared-blob-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/58475 -- pending Q backport 
     2. https://tracker.ceph.com/issues/58560 
     3. https://tracker.ceph.com/issues/55142 
     4. https://tracker.ceph.com/issues/58476 
     5. https://tracker.ceph.com/issues/58585 
     6. https://tracker.ceph.com/issues/48502 

 Details: 
     1. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard 
     2. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure 
     3. [ERR] : Unhandled exception from module 'devicehealth' while running on mgr.gibba002.nzpbzu: disk I/O error - Ceph - Cephsqlite 
     4. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator 
     5. rook: failed to pull kubelet image - Ceph - Orchestrator 
     6. ERROR: test_exports_on_mgr_restart (tasks.cephfs.test_nfs.TestNFS) - Ceph - CephFS 

 h3. https://pulpito.ceph.com/yuriw-2023-03-24_13:25:17-rados-quincy-release-distro-default-smithi/ 

 Failures: 
     1. https://tracker.ceph.com/issues/58560 
     2. https://tracker.ceph.com/issues/58476 
     3. https://tracker.ceph.com/issues/58475 -- pending Q backport 
     4. https://tracker.ceph.com/issues/49287 
     5. https://tracker.ceph.com/issues/58585 

 Details: 
     1. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure 
     2. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator 
     3. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard 
     4. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator 
     5. rook: failed to pull kubelet image - Ceph - Orchestrator 

 h3. https://trello.com/c/pWVAglAx/1718-wip-yuri3-testing-2023-03-22-1123-quincy-old-wip-yuri3-testing-2023-03-17-1235-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri3-testing-2023-03-22-1123-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/58585 
     2. https://tracker.ceph.com/issues/56000 
     3. https://tracker.ceph.com/issues/58560 
     4. https://tracker.ceph.com/issues/58475 
     5. https://tracker.ceph.com/issues/59080 

 Details: 
     1. rook: failed to pull kubelet image - Ceph - Orchestrator 
     2. task/test_nfs: ERROR: Daemon not found: mds.a.smithi060.ujwxef. See `cephadm ls` - Ceph - Orchestrator 
     3. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure 
     4. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard 
     5. mclock-config.sh: TEST_profile_disallow_builtin_params_modify fails when $res == $opt_val_new - Ceph - RADOS 

 h3. https://trello.com/c/gZIwMQTv/1714-wip-yuri10-testing-2023-03-13-1318-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri10-testing-2023-03-13-1318-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/58585 
     2. https://tracker.ceph.com/issues/58475 
     3. https://tracker.ceph.com/issues/49287 
     4. https://tracker.ceph.com/issues/56000 
     5. https://tracker.ceph.com/issues/58476 

 Details: 
     1. rook: failed to pull kubelet image - Ceph - Orchestrator 
     2. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard 
     3. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator 
     4. task/test_nfs: ERROR: Daemon not found: mds.a.smithi060.ujwxef. See `cephadm ls` - Ceph - Orchestrator 
     5. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator 

 h3. https://trello.com/c/h2Ci11or/1711-wip-yuri8-testing-2023-03-10-0833-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri8-testing-2023-03-10-0833-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/58585 
     2. https://tracker.ceph.com/issues/58475 
     3. https://tracker.ceph.com/issues/58476 
     4. https://tracker.ceph.com/issues/58560 

 Details: 
     1. rook: failed to pull kubelet image - Ceph - Orchestrator 
     2. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard 
     3. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator 
     4. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure 

 h3. https://trello.com/c/kZNe5IOq/1708-wip-yuri5-testing-2023-03-09-0941-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri5-testing-2023-03-09-0941-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/58476 
     2. https://tracker.ceph.com/issues/58475 
     3. https://tracker.ceph.com/issues/58560 
     4. https://tracker.ceph.com/issues/58744 
     5. https://tracker.ceph.com/issues/54369 

 Details: 
     1. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator 
     2. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr 
     3. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure 
     4. qa: intermittent nfs test failures at nfs cluster creation - Ceph - CephFS 
     5. mon/test_mon_osdmap_prune.sh: jq .osdmap_first_committed [[ 11 -eq 20 ]] - Ceph - RADOS 

 h3. https://trello.com/c/nuDSMSOR/1703-wip-yuri6-testing-2023-03-07-1336-quincy-old-wip-yuri6-testing-2023-03-06-1200-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri6-testing-2023-03-07-1336-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/58744 
     2. https://tracker.ceph.com/issues/49287 
     3. https://tracker.ceph.com/issues/50042 
     4. https://tracker.ceph.com/issues/58585 

 Details: 
     1. qa: intermittent nfs test failures at nfs cluster creation - Ceph - CephFS 
     2. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator 
     3. rados/test.sh: api_watch_notify failures - Ceph - RADOS 
     4. rook: failed to pull kubelet image - Ceph - Orchestrator 

 h3. https://trello.com/c/CnzKAewc/1700-wip-yuri3-testing-2023-03-01-0812-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri3-testing-2023-03-01-0812-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/49287 
     2. https://tracker.ceph.com/issues/58146 
     3. https://tracker.ceph.com/issues/49961 

 Details: 
     1. failed to write <pid> to cgroup.procs - Ceph - Orchestrator 
     2. test_cephadm.sh: Error: Error initializing source docker://quay.ceph.io/ceph-ci/ceph:master - Ceph - Orchestrator 
     3. scrub/osd-recovery-scrub.sh: TEST_recovery_scrub_1 failed - Ceph - RADOS 

 h3. https://trello.com/c/KUfgqSfy/1694-wip-yuri4-testing-2023-02-22-0817-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri4-testing-2023-02-22-0817-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/58585 
     2. https://tracker.ceph.com/issues/58146 
     3. https://tracker.ceph.com/issues/58837 -- new tracker 
     4. https://tracker.ceph.com/issues/58915 -- new tracker 
     5. https://tracker.ceph.com/issues/54750 

 Details: 
     1. rook: failed to pull kubelet image - Ceph - Orchestrator 
     2. test_cephadm.sh: Error: Error initializing source docker://quay.ceph.io/ceph-ci/ceph:master - Ceph - Orchestrator 
     3. mgr/test_progress.py: test_osd_healthy_recovery fails after timeout - Ceph - RADOS 
     4. map eXX had wrong heartbeat front addr - Ceph - RADOS 
     5. crash: PeeringState::Crashed::Crashed(boost::statechart::state<PeeringState::Crashed, PeeringState::PeeringMachine>::my_context): abort - Ceph - RADOS 

 h3. https://trello.com/c/Ou37fSaW/1690-wip-yuri5-testing-2023-02-17-1400-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri5-testing-2023-02-17-1400-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/44587 
     2. https://tracker.ceph.com/issues/58585 
     3. https://tracker.ceph.com/issues/58146 
     4. https://tracker.ceph.com/issues/58744 

 Details: 
     1. failed to write <pid> to cgroup.procs - Ceph - Orchestrator 
     2. rook: failed to pull kubelet image - Ceph - Orchestrator 
     3. test_cephadm.sh: Error: Error initializing source docker://quay.ceph.io/ceph-ci/ceph:master - Ceph - Orchestrator 
     4. qa: intermittent nfs test failures at nfs cluster creation - Ceph - CephFS 

 h3. https://trello.com/c/gP0lPHtn/1687-wip-yuri2-testing-2023-02-09-0842-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri2-testing-2023-02-09-0842-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/58146 
     2. https://tracker.ceph.com/issues/58585 

 Details: 
     1. test_cephadm.sh: Error: Error initializing source docker://quay.ceph.io/ceph-ci/ceph:master - Ceph - Orchestrator 
     2. rook: failed to pull kubelet image - Ceph - Orchestrator 

 h3. https://trello.com/c/HXwVRMzB/1684-wip-yuri3-testing-2023-02-16-0752-quincy-old-wip-yuri3-testing-2023-02-07-0852-quincy-old-wip-yuri3-testing-2023-02-06-1147-quin 

 https://pulpito.ceph.com/?branch=wip-yuri3-testing-2023-02-16-0752-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/58146 
     2. https://tracker.ceph.com/issues/58585 

 Details: 
     1. test_cephadm.sh: Error: Error initializing source docker://quay.ceph.io/ceph-ci/ceph:master - Ceph - Orchestrator 
     2. rook: failed to pull kubelet image - Ceph - Orchestrator 

 h3. https://trello.com/c/mU9vQKer/1681-wip-yuri-testing-2023-02-06-1155-quincy-old-wip-yuri-testing-2023-02-04-1345-quincy-old-wip-yuri-testing-2023-02-02-0918-quincy 

 http://pulpito.front.sepia.ceph.com/?branch=wip-yuri-testing-2023-02-06-1155-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/58146 
     2. https://tracker.ceph.com/issues/58046 -- pending quincy backport 
     3. https://tracker.ceph.com/issues/56788 
     4. https://tracker.ceph.com/issues/58739 
     5. https://tracker.ceph.com/issues/58560 
     6. https://tracker.ceph.com/issues/58585 

 Details: 
     1. test_cephadm.sh: Error: Error initializing source docker://quay.ceph.io/ceph-ci/ceph:master - Ceph - Orchestrator 
     2. qa/workunits/rados/test_librados_build.sh: specify redirect in curl command - Ceph - RADOS 
     3. crash: void KernelDevice::_aio_thread(): abort - Ceph - Bluestore 
     4. "Leak_IndirectlyLost" valgrind report on mon.a - Ceph - RADOS 
     5. test_envlibrados_for_rocksdb.sh failed to subscribe repo - Ceph - RADOS 
     6. rook: failed to pull kubelet image - Ceph - Orchestrator 

 h3. https://trello.com/c/6jh0HcBM/1678-wip-yuri7-testing-2023-01-30-1510-quincy-old-wip-yuri7-testing-2023-01-23-1532-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri7-testing-2023-01-30-1510-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/58585 
     2. https://tracker.ceph.com/issues/58146 
     3. https://tracker.ceph.com/issues/58560 
     4. https://tracker.ceph.com/issues/58265 

 Details: 
     1. rook: failed to pull kubelet image - Ceph - Orchestrator 
     2. test_cephadm.sh: Error: Error initializing source docker://quay.ceph.io/ceph-ci/ceph:master - Ceph - Orchestrator 
     3. test_envlibrados_for_rocksdb.sh failed to subscribe repo - Ceph - RADOS 
     4. TestClsRbd.group_snap_list_max_read failure during upgrade/parallel tests - Ceph - RBD 

 h3. https://trello.com/c/a9Pfks0y/1666-wip-yuri7-testing-2022-12-09-1107-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri7-testing-2022-12-09-1107-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/58258 
     2. https://tracker.ceph.com/issues/58046 
     3. https://tracker.ceph.com/issues/58265 --new tracker; unrelated to PRs in this batch 
     4. https://tracker.ceph.com/issues/58140 
     5. https://tracker.ceph.com/issues/58097 
     6. https://tracker.ceph.com/issues/56000 
     7. https://tracker.ceph.com/issues/56785 

 Details: 
     1. rook: kubelet fails from connection refused - Ceph - Orchestrator 
     2. qa/workunits/rados/test_librados_build.sh: specify redirect in curl command - Ceph - RADOS 
     3. TestClsRbd.group_snap_list_max_read failure - Ceph - RBD 
     4. quay.ceph.io/ceph-ci/ceph: manifest unknown - Ceph - Orchestrator 
     5. qa/workunits/post-file.sh: kex_exchange_identification: read: Connection reset by peer - Ceph - RADOS 
     6. task/test_nfs: ERROR: Daemon not found: mds.a.smithi060.ujwxef. See `cephadm ls` - Ceph - Orchestrator 
     7. crash: void OSDShard::register_and_wake_split_child(PG*): assert(!slot->waiting_for_split.empty()) - Ceph - RADOS 

 h3. https://trello.com/c/envBR2ox/1654-wip-yuri5-testing-2022-11-18-1554-quincy-old-wip-yuri5-testing-2022-10-19-1308-quincy 

 http://pulpito.front.sepia.ceph.com/?branch=wip-yuri5-testing-2022-11-18-1554-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/57311 
     2. https://tracker.ceph.com/issues/58146 
     3. https://tracker.ceph.com/issues/58046 
     4. https://tracker.ceph.com/issues/58097 
     5. https://tracker.ceph.com/issues/56000 
     6. https://tracker.ceph.com/issues/52321 
     7. https://tracker.ceph.com/issues/57754 

 Details: 
     1. rook: ensure CRDs are installed first - Ceph - Orchestrator 
     2. test_cephadm.sh: Error: Error initializing source docker://quay.ceph.io/ceph-ci/ceph:master - Ceph - Orchestrator 
     3. qa/workunits/rados/test_librados_build.sh: specify redirect in curl command 
     4. qa/workunits/post-file.sh: kex_exchange_identification: read: Connection reset by peer - Infrastructure 
     5. task/test_nfs: ERROR: Daemon not found: mds.a.smithi060.ujwxef. See `cephadm ls` - Ceph - Orchestrator 
     6. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator 
     7. test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Ceph - RADOS 

 h3. https://trello.com/c/iEU3xOhe/1638-wip-yuri6-testing-2022-09-23-1008-quincy 

 http://pulpito.front.sepia.ceph.com/?branch=wip-yuri6-testing-2022-09-23-1008-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/57386 
     2. https://tracker.ceph.com/issues/57311 
     3. https://tracker.ceph.com/issues/56951 
     4. https://tracker.ceph.com/issues/57165 -- pending Quincy backport 

 Details: 
     1. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard 
     2. rook: ensure CRDs are installed first - Ceph - Orchestrator 
     3. rook/smoke: Updating cephclusters/rook-ceph is forbidden - Ceph - Orchestrator 
     4. expected valgrind issues and found none - Ceph - RADOS 

 h3. https://trello.com/c/CWbOkqWR/1626-wip-yuri10-testing-2022-09-04-0811-quincy 

 http://pulpito.front.sepia.ceph.com/?branch=wip-yuri10-testing-2022-09-04-0811-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/56951 
     2. https://tracker.ceph.com/issues/57165 
     3. https://tracker.ceph.com/issues/57368 
     4. https://tracker.ceph.com/issues/57290 -- pending Quincy backport 
     5. https://tracker.ceph.com/issues/57386 
     6. https://tracker.ceph.com/issues/52124 -- pending Quincy backport 
     7. https://tracker.ceph.com/issues/49524 

 Details: 
     1. rook/smoke: Updating cephclusters/rook-ceph is forbidden - Ceph - Orchestrator 
     2. expected valgrind issues and found none - Ceph - RADOS 
     3. The CustomResourceDefinition "installations.operator.tigera.io" is invalid: metadata.annotations: Too long: must have at most 262144 bytes - Ceph - RADOS 
     4. orch/cephadm: task/test_cephadm failure due to: ERROR: A cluster with the same fsid '00000000-0000-0000-0000-0000deadbeef' already exists. - Ceph - Orchestrator 
     5. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard 
     6. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS 
     7. ceph_test_rados_delete_pools_parallel didn't start - Ceph - RADOS 

 h3. https://trello.com/c/s9pGC2JL/1630-wip-yuri6-testing-2022-09-08-0859-quincy-old-wip-yuri6-testing-2022-09-06-1353-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri6-testing-2022-09-06-1353-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/57165 -- pending Quincy backport 
     2. https://tracker.ceph.com/issues/57290 -- pending Quincy backport 
     3. https://tracker.ceph.com/issues/57269 
     4. https://tracker.ceph.com/issues/49287 
     5. https://tracker.ceph.com/issues/56951 

 Details: 
     1. expected valgrind issues and found none - Ceph - RADOS 
     2. orch/cephadm: task/test_cephadm failure due to: ERROR: A cluster with the same fsid '00000000-0000-0000-0000-0000deadbeef' already exists. - Ceph - Orchestrator 
     3. rook: unable to read URL "https://docs.projectcalico.org/manifests/tigera-operator.yaml" - Ceph - Orchestrator 
     4. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator 
     5. rook/smoke: Updating cephclusters/rook-ceph is forbidden - Ceph - Orchestrator 

 h3. https://trello.com/c/ZiFOEFfI/1614-wip-yuri-testing-2022-08-23-1120-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri-testing-2022-08-23-1120-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/57303 -- Pending Quincy backport 
     2. https://tracker.ceph.com/issues/57269 
     3. https://tracker.ceph.com/issues/57165 -- Fix under review 
     4. https://tracker.ceph.com/issues/57270 
     5. https://tracker.ceph.com/issues/57311 
     6. https://tracker.ceph.com/issues/49287 

 Details: 
     1. rados/cephadm: Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=b34ca7d1c2becd6090874ccda56ef4cd8dc64bf7    - Ceph - Orchestrator 
     2. rook: unable to read URL "https://docs.projectcalico.org/manifests/tigera-operator.yaml" - Ceph - Orchestrator 
     3. expected valgrind issues and found none - Ceph - RADOS 
     4. cephadm: RuntimeError: Failed command: apt-get update: E: The repository 'https://download.ceph.com/debian-octopus jammy Release' does not have a Release file. - Ceph - Orchestrator 
     5. rook: ensure CRDs are installed first - Ceph - Orchestrator 
     6. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator 

 h3. https://trello.com/c/cU355Dso/1617-wip-yuri3-testing-2022-08-24-0820-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri3-testing-2022-08-24-0820-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/57303 -- Pending Quincy backport 
     2. https://tracker.ceph.com/issues/57269 
     3. https://tracker.ceph.com/issues/57165 -- Fix under review 
     4. https://tracker.ceph.com/issues/57270 -- Pending Quincy backport 
     5. https://tracker.ceph.com/issues/57311 

 Details: 
     1. rados/cephadm: Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=b34ca7d1c2becd6090874ccda56ef4cd8dc64bf7    - Ceph - Orchestrator 
     2. rook: unable to read URL "https://docs.projectcalico.org/manifests/tigera-operator.yaml" - Ceph - Orchestrator 
     3. expected valgrind issues and found none - Ceph - RADOS 
     4. cephadm: RuntimeError: Failed command: apt-get update: E: The repository 'https://download.ceph.com/debian-octopus jammy Release' does not have a Release file. - Ceph - Orchestrator 
     5. rook: ensure CRDs are installed first - Ceph - Orchestrator 

 h3. https://trello.com/c/4ng3KVzi/1609-wip-yuri7-testing-2022-08-17-0943-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri7-testing-2022-08-17-0943-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/57303 -- new Tracker; unrelated to PRs in this run 
     2. https://tracker.ceph.com/issues/52321 
     3. https://tracker.ceph.com/issues/57386 
     4. https://tracker.ceph.com/issues/56951 
     5. https://tracker.ceph.com/issues/57270 -- Pending Quincy backport 
     6. https://tracker.ceph.com/issues/57165 -- Fix under review; pending Quincy backport 
     7. https://tracker.ceph.com/issues/49287 

 Details: 
     1. rados/cephadm: Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=b34ca7d1c2becd6090874ccda56ef4cd8dc64bf7    - Ceph - Orchestrator 
     2. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Rook 
     3. AssertionError: Expected to find element: `cd-modal .badge:not(script,style):cy-contains('/^foo$/'), cd-modal .badge[type='submit'][value~='/^foo$/']`, but never found it. - Ceph - Mgr - Dashboard 
     4. rook/smoke: Updating cephclusters/rook-ceph is forbidden - Ceph - Orchestrator 
     5. cephadm: RuntimeError: Failed command: apt-get update: E: The repository 'https://download.ceph.com/debian-octopus jammy Release' does not have a Release file. - Ceph - Orchestrator 
     6. expected valgrind issues and found none - Ceph - RADOS 
     7. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator 

 h3. https://trello.com/c/nApVKz07/1598-wip-yuri8-testing-2022-08-03-1028-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri8-testing-2022-08-03-1028-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/55809 
     2. https://tracker.ceph.com/issues/52321 
     3. https://tracker.ceph.com/issues/55854 
     4. https://tracker.ceph.com/issues/55897 
     5. https://tracker.ceph.com/issues/56951 

 Details: 
     1. "Leak_IndirectlyLost" valgrind report on mon.c - Ceph - RADOS 
     2. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Rook 
     3. Datetime AssertionError in test_health_history (tasks.mgr.test_insights.TestInsights) - Ceph - Mgr 
     4. test_nfs: update of export's access type should not trigger NFS service restart - Ceph - CephFS 
     5. rook/smoke: Updating cephclusters/rook-ceph is forbidden - Ceph - Orchestrator 

 h3. https://trello.com/c/vJmeRbjP/1592-wip-yuri7-testing-2022-07-27-0808-quincy 

 https://pulpito.ceph.com/?branch=wip-yuri7-testing-2022-07-27-0808-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/52321 
     2. https://tracker.ceph.com/issues/56652 
     3. https://tracker.ceph.com/issues/52124 
     4. https://tracker.ceph.com/issues/45721 
     5. https://tracker.ceph.com/issues/55001 
     6. https://tracker.ceph.com/issues/56951 -- new Tracker; looks unrelated to the PRs in this run. 

 Details: 
     1. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Rook 
     2. cephadm/test_repos.sh: urllib.error.HTTPError: HTTP Error 504: Gateway Timeout - Ceph - Orchestrator 
     3. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS 
     4. CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_rados.TestWatchNotify.test - Ceph - RADOS 
     5. rados/test.sh: Early exit right after LibRados global tests complete - Ceph - RADOS 
     6. rook/smoke: Updating cephclusters/rook-ceph is forbidden - Ceph - Orchestrator 

 h3. quincy v17.2.1 

 https://tracker.ceph.com/issues/55974#note-1 

 Failures: 
     1. https://tracker.ceph.com/issues/52321 
     2. https://tracker.ceph.com/issues/56000 
     3. https://tracker.ceph.com/issues/53685 
     4. https://tracker.ceph.com/issues/52124 
     5. https://tracker.ceph.com/issues/55854 
     6. https://tracker.ceph.com/issues/53789 

 Details: 
     1. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator 
     2. task/test_nfs: ERROR: Daemon not found: mds.a.smithi060.ujwxef. See `cephadm ls` - Ceph - Orchestrator 
     3. Assertion `HAVE_FEATURE(features, SERVER_OCTOPUS)' failed. - Ceph - RADOS 
     4. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS 
     5. Datetime AssertionError in test_health_history (tasks.mgr.test_insights.TestInsights) - Ceph - Mgr 
     6. CommandFailedError (rados/test_python.sh): "RADOS object not found" causes test_rados.TestWatchNotify.test_aio_notify to fail - Ceph - RADOS 

 h3. https://trello.com/c/pR7udWVA/1559-wip-yuri6-testing-2022-06-16-0651-quincy 

 https://pulpito.ceph.com/yuriw-2022-06-16_16:41:04-rados-wip-yuri6-testing-2022-06-16-0651-quincy-distro-default-smithi 
 https://pulpito.ceph.com/yuriw-2022-06-17_13:54:27-rados-wip-yuri6-testing-2022-06-16-0651-quincy-distro-default-smithi/ 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/55808 
     2. https://tracker.ceph.com/issues/52321 
     3. https://tracker.ceph.com/issues/53575 
     4. https://tracker.ceph.com/issues/55741 
     5. https://tracker.ceph.com/issues/53294 
     6. https://tracker.ceph.com/issues/55854 
     7. https://tracker.ceph.com/issues/55986 

 Details: 
     1. task/test_nfs: KeyError: 'events' - Ceph - Orchestrator 
     2. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator 
     3. Valgrind reports memory "Leak_PossiblyLost" errors concerning lib64 - Ceph - RADOS 
     4. cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal .custom-control-label` when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard 
     5. rados/test.sh hangs while running LibRadosTwoPoolsPP.TierFlushDuringFlush - Ceph - RADOS 
     6. Datetime AssertionError in test_health_history (tasks.mgr.test_insights.TestInsights) - Ceph - Mgr 
     7. cephadm: Test failure: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS) - Ceph - Orchestrator 

 h3. quincy-release 

 http://pulpito.front.sepia.ceph.com/yuriw-2022-06-11_02:24:12-rados-quincy-release-distro-default-smithi/ 

 Failures: 
     1. https://tracker.ceph.com/issues/52321 
     2. https://tracker.ceph.com/issues/53685 
     3. https://tracker.ceph.com/issues/55741 
     4. https://tracker.ceph.com/issues/56000 

 Details: 
     1. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator 
     2. Assertion `HAVE_FEATURE(features, SERVER_OCTOPUS)' failed. - Ceph - RADOS 
     3. cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal .custom-control-label` when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard 
     4. task/test_nfs: ERROR: Daemon not found: mds.a.smithi060.ujwxef. See `cephadm ls` - Ceph - Orchestrator 

 h3. https://trello.com/c/oobgk2KP/1553-wip-yuri4-testing-2022-06-09-1510-quincy 

 http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2022-06-09-1510-quincy 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/52321 
     2. https://tracker.ceph.com/issues/52124 
     3. https://tracker.ceph.com/issues/45721 
     4. https://tracker.ceph.com/issues/55741 
     5. https://tracker.ceph.com/issues/55001 
     6. https://tracker.ceph.com/issues/55986 

 Details: 
     1. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator 
     2. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS 
     3. CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_rados.TestWatchNotify.test - Ceph - RADOS 
     4. cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal .custom-control-label` when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard 
     5. rados/test.sh: Early exit right after LibRados global tests complete - Ceph - RADOS 
     6. cephadm: Test failure: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS) - Ceph - Orchestrator 

 h3. https://trello.com/c/h3NXlLnF/1543-wip-yuri5-testing-2022-06-02-0825-quincy 

 http://pulpito.front.sepia.ceph.com/yuriw-2022-06-02_20:24:42-rados-wip-yuri5-testing-2022-06-02-0825-quincy-distro-default-smithi/ 
 http://pulpito.front.sepia.ceph.com/yuriw-2022-06-03_20:44:47-rados-wip-yuri5-testing-2022-06-02-0825-quincy-distro-default-smithi/ 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/52124 
     2. https://tracker.ceph.com/issues/52321 
     3. https://tracker.ceph.com/issues/46877 
     4. https://tracker.ceph.com/issues/55741 
     5. https://tracker.ceph.com/issues/55897 --> opened a new Tracker for this 
     6. https://tracker.ceph.com/issues/54360 

 Details: 
     1. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS 
     2. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator 
     3. mon_clock_skew_check: expected MON_CLOCK_SKEW but got none : Seen on octopus - Ceph - RADOS 
     4. cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal .custom-control-label` when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard 
     5. test_nfs: update of export's access type should not trigger NFS service restart - Ceph - CephFS 
     6. Dead job at "Finished running handlers" in rados/cephadm/osds/.../rm-zap-wait - Ceph - Orchestrator 

 h3. https://trello.com/c/JlmHBRyS/1540-wip-yuri3-testing-2022-06-01-1035-quincy 

 http://pulpito.front.sepia.ceph.com/yuriw-2022-06-01_23:21:01-rados-wip-yuri3-testing-2022-06-01-1035-quincy-distro-default-smithi/ 
 http://pulpito.front.sepia.ceph.com/yuriw-2022-06-02_14:59:52-rados-wip-yuri3-testing-2022-06-01-1035-quincy-distro-default-smithi/ 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/52321 
     2. https://tracker.ceph.com/issues/55741 
     3. https://tracker.ceph.com/issues/55838 --> opened a new Tracker for this; it is unrelated to the PR tested in this run. 

 Details: 
     1. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator 
     2. cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal .custom-control-label` when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard 
     3. cephadm/osds: Exception with "test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm" - Ceph - Orchestrator 

 h3. https://trello.com/c/W8AlWvBV/1539-wip-yuri-testing-2022-06-02-0810-quincy-old-wip-yuri-testing-2022-05-31-1642-quincy 

 http://pulpito.front.sepia.ceph.com/yuriw-2022-06-01_02:28:14-rados-wip-yuri-testing-2022-05-31-1642-quincy-distro-default-smithi/ 
 http://pulpito.front.sepia.ceph.com/yuriw-2022-06-01_14:00:31-rados-wip-yuri-testing-2022-05-31-1642-quincy-distro-default-smithi/ 
 http://pulpito.front.sepia.ceph.com/yuriw-2022-06-02_20:23:28-rados-wip-yuri-testing-2022-06-02-0810-quincy-distro-default-smithi/ 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/52321 
     2. https://tracker.ceph.com/issues/49287 
     3. https://tracker.ceph.com/issues/52652 
     4. https://tracker.ceph.com/issues/53575 
     5. https://tracker.ceph.com/issues/55741 

 Details: 
     1. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator 
     2. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator 
     3. ERROR: test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest) - Ceph - Mgr 
     4. Valgrind reports memory "Leak_PossiblyLost" errors concerning lib64 - Ceph - RADOS 
     5. cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal .custom-control-label` when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard 

 h3. https://trello.com/c/MQjZXlbD/1535-wip-yuri2-testing-2022-05-26-1430-quincy-old-wip-yuri2-testing-2022-05-25-1323-quincy 

 http://pulpito.front.sepia.ceph.com/yuriw-2022-05-26_23:23:48-rados-wip-yuri2-testing-2022-05-26-1430-quincy-distro-default-smithi/ 
 http://pulpito.front.sepia.ceph.com/yuriw-2022-05-27_13:37:17-rados-wip-yuri2-testing-2022-05-26-1430-quincy-distro-default-smithi/ 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/52321 
     2. https://tracker.ceph.com/issues/45721 

 Details: 
     1. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator 
     2. CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_rados.TestWatchNotify.test - Ceph - RADOS 

 h3. https://trello.com/c/KaPapKB1/1533-wip-yuri4-testing-2022-05-19-0831-quincy-old-wip-yuri4-testing-2022-05-18-1410-quincy 

 http://pulpito.front.sepia.ceph.com/yuriw-2022-05-19_18:50:25-rados-wip-yuri4-testing-2022-05-19-0831-quincy-distro-default-smithi/ 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/52321 
     2. https://tracker.ceph.com/issues/52657 
     3. https://tracker.ceph.com/issues/51076 

 Details: 
     1. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator 
     2. MOSDPGLog::encode_payload(uint64_t): Assertion `HAVE_FEATURE(features, SERVER_NAUTILUS)' - Ceph - RADOS 
     3. "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend. - Ceph - RADOS 

 h3. https://trello.com/c/7XXyAioY/1527-wip-yuri-testing-2022-05-10-1027-quincy 

 http://pulpito.front.sepia.ceph.com/yuriw-2022-05-12_22:14:26-rados-wip-yuri-testing-2022-05-10-1027-quincy-distro-default-smithi/ 
 http://pulpito.front.sepia.ceph.com/yuriw-2022-05-14_14:31:51-rados-wip-yuri-testing-2022-05-10-1027-quincy-distro-default-smithi/ 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/52321 
     2. https://tracker.ceph.com/issues/51076 

 Details: 
     1. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator 
     2. "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend. - Ceph - RADOS 

 h3. https://trello.com/c/iwEbO83e/1518-wip-yuri-testing-2022-04-27-1456-quincy 

 http://pulpito.front.sepia.ceph.com/yuriw-2022-04-28_14:23:18-rados-wip-yuri-testing-2022-04-27-1456-quincy-distro-default-smithi/ 
 http://pulpito.front.sepia.ceph.com/yuriw-2022-04-29_20:56:26-rados-wip-yuri-testing-2022-04-27-1456-quincy-distro-default-smithi/ 

 Failures, unrelated: 

     1. https://tracker.ceph.com/issues/52124 
     2. https://tracker.ceph.com/issues/55559 --> had to open a new one for this. It failed in the first run, but passed in the rerun. Seems unrelated. 

 Details: 
     1. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS 
     2. osd-backfill-stats.sh fails in TEST_backfill_ec_prim_out - Ceph - RADOS 

 h3. https://trello.com/c/GSpBtbRm/1512-wip-yuri3-testing-2022-04-22-0534-quincy 

 http://pulpito.front.sepia.ceph.com/yuriw-2022-04-22_21:06:04-rados-wip-yuri3-testing-2022-04-22-0534-quincy-distro-default-smithi/ 
 http://pulpito.front.sepia.ceph.com/yuriw-2022-04-25_14:14:44-rados-wip-yuri3-testing-2022-04-22-0534-quincy-distro-default-smithi/ 
 http://pulpito.front.sepia.ceph.com/lflores-2022-04-26_15:57:44-rados-wip-yuri3-testing-2022-04-22-0534-quincy-distro-default-smithi/ 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/54509 
     2. https://tracker.ceph.com/issues/51076 
     3. https://tracker.ceph.com/issues/44595 
     4. https://tracker.ceph.com/issues/54329 
     5. https://tracker.ceph.com/issues/55443 
     6. https://tracker.ceph.com/issues/52657 

 Details: 
     1. FAILED ceph_assert due to issue manifest API to the original object - Ceph - RADOS 
     2. "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend. - Ceph - RADOS 
     3. cache tiering: Error: oid 48 copy_from 493 returned error code -2 - Ceph - RADOS 
     4. test_nfs.py: NFS Ganesha cluster deployment timeout - Ceph 
     5. "SELinux denials found.." in rados run - Infrastructure 
     6. MOSDPGLog::encode_payload(uint64_t): Assertion `HAVE_FEATURE(features, SERVER_NAUTILUS)' - Ceph - RADOS 

 h3. https://trello.com/c/ZzBTuiz8/1508-wip-yuri-testing-2022-04-13-0703-quincy 

 http://pulpito.front.sepia.ceph.com/?branch=wip-yuri-testing-2022-04-13-0703-quincy 

 Failures in the initial run were due to infrastructure issues, and therefore unrelated. 
 All jobs were green in the final re-run. 

 h3. https://trello.com/c/kb4IFQLu/1506-wip-yuri11-testing-2022-04-11-1138-quincy 

 http://pulpito.front.sepia.ceph.com/yuriw-2022-04-11_21:21:54-rados-wip-yuri11-testing-2022-04-11-1138-quincy-distro-default-smithi/ 
 http://pulpito.front.sepia.ceph.com/yuriw-2022-04-12_15:27:11-rados-wip-yuri11-testing-2022-04-11-1138-quincy-distro-default-smithi/ 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/52124 

 Details: 
     1. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS 

 h3. https://trello.com/c/YCs20uZ7/1503-wip-yuri4-testing-2022-04-05-1720-pacific 

 http://pulpito.front.sepia.ceph.com/yuriw-2022-04-06_14:02:46-rados-wip-yuri4-testing-2022-04-05-1720-pacific-distro-default-smithi/ 
 http://pulpito.front.sepia.ceph.com/lflores-2022-04-07_18:45:23-rados-wip-yuri4-testing-2022-04-05-1720-pacific-distro-default-smithi/ 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/53501 
     2. https://tracker.ceph.com/issues/49287 
     3. https://tracker.ceph.com/issues/54071 
     4. https://tracker.ceph.com/issues/54086 

 There were also some SELinux denials in several cephadm tests. 

 Details: 
     1. Exception when running 'rook' task. - Ceph - Orchestrator 
     2. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator 
     3. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator 
     4. Permission denied when trying to unlink and open /var/log/ntpstats/... - Tools - Teuthology 

 h3. https://trello.com/c/wMFylrET/1499-wip-yuri4-testing-2022-03-31-1158-quincy 

 http://pulpito.front.sepia.ceph.com/yuriw-2022-03-31_21:46:00-rados-wip-yuri4-testing-2022-03-31-1158-quincy-distro-default-smithi/ 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/52124 
     2. https://tracker.ceph.com/issues/54029 
     3. https://tracker.ceph.com/issues/49287 

 Details: 
     1. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS 
     2. orch:cephadm/workunits/{agent/on mon_election/connectivity task/test_orch_cli} test failing - Ceph - Orchestrator 
     3. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator 

 h3. https://trello.com/c/zDTqMLdh/1486-wip-yuri7-testing-2022-03-23-1332-quincy 

 http://pulpito.front.sepia.ceph.com/yuriw-2022-03-24_01:58:56-rados-wip-yuri7-testing-2022-03-23-1332-quincy-distro-default-smithi/ 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/52124 
     2. https://tracker.ceph.com/issues/53855 
     3. https://tracker.ceph.com/issues/54029 
     4. https://tracker.ceph.com/issues/50042 

 h3. https://trello.com/c/E9Caje20/1465-wip-yuri-testing-2022-02-28-0823-quincy 

 http://pulpito.front.sepia.ceph.com/yuriw-2022-02-28_21:23:00-rados-wip-yuri-testing-2022-02-28-0823-quincy-distro-default-smithi/ 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/54029 
     2. https://tracker.ceph.com/issues/54439 
     3. https://tracker.ceph.com/issues/50280 

 Details: 
     1. orch:cephadm/workunits/{agent/on mon_election/connectivity task/test_orch_cli} test failing - Ceph - Orchestrator 
     2. LibRadosWatchNotify.WatchNotify2Multi fails - Ceph - RADOS 
     3. cephadm: RuntimeError: uid/gid not found - Ceph 

 h3. https://trello.com/c/3G1ufRuW/1458-wip-yuri11-testing-2022-02-21-0831-quincy 

 http://pulpito.front.sepia.ceph.com/yuriw-2022-02-21_18:20:15-rados-wip-yuri11-testing-2022-02-21-0831-quincy-distro-default-smithi/ 

 Failures, unrelated: 
     1. https://tracker.ceph.com/issues/52124 
     2. https://tracker.ceph.com/issues/50280 
     3. https://tracker.ceph.com/issues/54337 

 Details: 
     1. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS 
     2. cephadm: RuntimeError: uid/gid not found - Ceph 
     3. Selinux denials seen on fs/rados teuthology runs - Infrastructure