PACIFIC

Summaries are ordered latest --> oldest.

https://trello.com/c/3cEnuGqr/1952-wip-yuri10-testing-2024-02-08-0854-pacific

https://pulpito.ceph.com/?branch=wip-yuri10-testing-2024-02-08-0854-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/58658
2. https://tracker.ceph.com/issues/63577
3. https://tracker.ceph.com/issues/63887
4. https://tracker.ceph.com/issues/54071
5. https://tracker.ceph.com/issues/64126
6. https://tracker.ceph.com/issues/64451 -- new tracker
7. https://tracker.ceph.com/issues/61193
8. https://tracker.ceph.com/issues/64452
9. https://tracker.ceph.com/issues/64454 -- new tracker
10. https://tracker.ceph.com/issues/64455 -- new tracker
11. https://tracker.ceph.com/issues/57303

Details:
1. Error: initializing source docker://prom/alertmanager:v0.20.0 - Ceph - Orchestrator
2. cephadm: docker.io/library/haproxy: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit - Ceph - Orchestrator
3. Starting alertmanager fails from missing container - Ceph - Orchestrator
4. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
5. ceph-iscsi build was retriggered and now missing package_manager_version attribute - Ceph
6. Prometheus: unable to retrieve auth token: invalid username/password - Ceph - Orchestrator
7. ObjectStore/StoreTest.SimpleCloneTest/2 times out from an abort in the objectstore log - Ceph - Bluestore
8. Teuthology runs into "TypeError: expected string or bytes-like object" during log scraping - Tools - Teuthology
9. pacific: rados/cephadm/mgr-nfs-upgrade: "Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log - Ceph - Orchestrator
10. pacific: task/test_orch_cli: "Health check failed: cephadm background work is paused (CEPHADM_PAUSED)" in cluster log - Ceph - Orchestrator
11. rados/cephadm: Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=b34ca7d1c2becd6090874ccda56ef4cd8dc64bf7 - Ceph - Orchestrator (see the sketch below)
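
For item 11 above, teuthology fails when the shaman API returns no ready build for the requested sha1/distro. A minimal shell sketch of the same lookup, reusing the query string from the tracker title (the jq field names are assumptions about the response shape, not verified against the API schema):

    # Ask shaman for ready builds; an empty list is what surfaces as
    # "Failed to fetch package version" in the teuthology log.
    curl -fsSL 'https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=b34ca7d1c2becd6090874ccda56ef4cd8dc64bf7' \
      | jq '.[] | {ref, sha1, status}'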

https://trello.com/c/ERqmvaZu/1947-wip-yuri10-testing-2024-02-02-1149-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/43887
2. https://tracker.ceph.com/issues/62225
3. https://tracker.ceph.com/issues/49287
4. https://tracker.ceph.com/issues/57386
5. https://tracker.ceph.com/issues/64126
6. https://tracker.ceph.com/issues/54071
7. https://tracker.ceph.com/issues/61602
8. https://tracker.ceph.com/issues/58915
9. https://tracker.ceph.com/issues/52470
10. https://tracker.ceph.com/issues/53154
11. https://tracker.ceph.com/issues/64343 -- new tracker
12. https://tracker.ceph.com/issues/64344 -- new tracker

Details:
1. ceph_test_rados_delete_pools_parallel failure - Ceph - RADOS
2. pacific upgrade test fails on 'ceph versions | jq -e' command - Ceph - Orchestrator (see the sketch after this list)
3. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
4. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
5. ceph-iscsi build was retriggered and now missing package_manager_version attribute - Infrastructure
6. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
7. pacific: cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
8. map eXX had wrong heartbeat addr - Ceph - RADOS
9. [ FAILED ] LibRadosAio.PoolQuotaPP - rados/test.sh timeout - Ceph - RADOS
10. teuthology: cephadm: error: unrecognized arguments: --keep-logs - Ceph - Orchestrator
11. Expected warnings that need to be whitelisted cause rados/cephadm tests to fail - Ceph - RADOS
12. rados/cephadm/dashboard: test that is expects a HOST_MAINTENANCE_MODE scenario fails due to warning in cluster log - Ceph - Mgr - Dashboard
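
On item 2 above: the upgrade suite asserts that daemon versions converge after the upgrade with a jq query. A minimal sketch of that style of check, assuming the stock `ceph versions` JSON layout (the exact filter in the suite may differ):

    # Every daemon should report the same version once the upgrade finishes;
    # jq -e exits nonzero when the expression is false, failing the job.
    ceph versions | jq -e '.overall | length == 1'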

https://trello.com/c/N8lquGmt/1946-wip-yuri2-testing-2024-02-01-0939-pacific

https://pulpito.ceph.com/?branch=wip-yuri2-testing-2024-02-01-0939-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/58658
2. https://tracker.ceph.com/issues/64280
3. https://tracker.ceph.com/issues/63577
4. https://tracker.ceph.com/issues/62225
5. https://tracker.ceph.com/issues/64126
6. https://tracker.ceph.com/issues/61602
7. https://tracker.ceph.com/issues/63887
8. https://tracker.ceph.com/issues/44884
9. https://tracker.ceph.com/issues/54071
10. https://tracker.ceph.com/issues/55787
11. https://tracker.ceph.com/issues/46318

Details:
1. mds_upgrade_sequence: Error: initializing source docker://prom/alertmanager:v0.20.0 - Ceph - Orchestrator
2. mgr-nfs-upgrade test times out from failed cephadm daemons - Ceph - Orchestrator
3. cephadm: docker.io/library/haproxy: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit - Ceph - Orchestrator
4. pacific upgrade test fails on 'ceph versions | jq -e' command - Ceph - Orchestrator
5. ceph-iscsi build was retriggered and now missing package_manager_version attribute - Infrastructure
6. pacific: cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS (see the sketch after this list)
7. Starting alertmanager fails from missing container - Ceph - Orchestrator
8. mon: weight-set create may return on uncommitted state - Ceph - RADOS
9. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
10. mon/crush_ops.sh: Error ENOENT: item osd.7 does not exist - Ceph - RADOS
11. mon_recovery: quorum_status times out - Ceph - RADOS
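
On item 6 above: POOL_APP_NOT_ENABLED is the health check tripping on a pool created without an application tag. Where the warning is not simply whitelisted, the usual fix in a test is to tag the pool right after creating it; a minimal sketch (pool name and application are illustrative):

    # Create a pool, then tag it so the health check stays quiet:
    ceph osd pool create foo 32
    ceph osd pool application enable foo rados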

pacific-release, 16.2.15

https://tracker.ceph.com/issues/64151#note-1

Failures:
1. https://tracker.ceph.com/issues/62225
2. https://tracker.ceph.com/issues/64278
3. https://tracker.ceph.com/issues/58659
4. https://tracker.ceph.com/issues/58658
5. https://tracker.ceph.com/issues/64280
6. https://tracker.ceph.com/issues/63577
7. https://tracker.ceph.com/issues/63894
8. https://tracker.ceph.com/issues/64126
9. https://tracker.ceph.com/issues/63887
10. https://tracker.ceph.com/issues/61602
11. https://tracker.ceph.com/issues/54071
12. https://tracker.ceph.com/issues/57386
13. https://tracker.ceph.com/issues/64281
14. https://tracker.ceph.com/issues/49287

Details:
1. pacific upgrade test fails on 'ceph versions | jq -e' command - Ceph - RADOS
2. Unable to update caps for client.iscsi.iscsi.a - Ceph - Orchestrator
3. mds_upgrade_sequence: failure when deploying node-exporter - Ceph - Orchestrator
4. mds_upgrade_sequence: Error: initializing source docker://prom/alertmanager:v0.20.0 - Ceph - Orchestrator
5. mgr-nfs-upgrade test times out from failed cephadm daemons - Ceph - Orchestrator
6. cephadm: docker.io/library/haproxy: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit - Ceph - Orchestrator
7. qa: cephadm failed with an error code 1, alertmanager container not found. - Ceph - Orchestrator
8. ceph-iscsi build was retriggered and now missing package_manager_version attribute - Ceph
9. Starting alertmanager fails from missing container - Ceph - Orchestrator
10. pacific: cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
11. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
12. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
13. Failed to download key at http://download.ceph.com/keys/autobuild.asc: Request failed: <urlopen error [Errno 101] Network is unreachable> - Infrastructure
14. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator

https://trello.com/c/aaaE5rGb/1938-wip-lflores-testing-2

Failures, unrelated:
1. https://tracker.ceph.com/issues/49287
2. https://tracker.ceph.com/issues/62225
3. https://tracker.ceph.com/issues/54071
4. https://tracker.ceph.com/issues/63577
5. https://tracker.ceph.com/issues/55141
6. https://tracker.ceph.com/issues/63894
7. https://tracker.ceph.com/issues/64126
8. https://tracker.ceph.com/issues/59192
9. https://tracker.ceph.com/issues/63887
10. https://tracker.ceph.com/issues/58659
11. https://tracker.ceph.com/issues/57829
12. https://tracker.ceph.com/issues/53789

Details:
1. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
2. pacific upgrade test fails on 'ceph versions | jq -e' command - Ceph - RADOS
3. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
4. cephadm: docker.io/library/haproxy: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit - Ceph - Orchestrator
5. thrashers/fastread: assertion failure: rollback_info_trimmed_to == head - Ceph - RADOS
6. qa: cephadm failed with an error code 1, alertmanager container not found. - Ceph - Orchestrator
7. ceph-iscsi build was retriggered and now missing package_manager_version attribute - Ceph
8. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
9. Starting alertmanager fails from missing container - Ceph - Orchestrator
10. mds_upgrade_sequence: failure when deploying node-exporter - Ceph - Orchestrator
11. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
12. CommandFailedError (rados/test_python.sh): "RADOS object not found" causes test_rados.TestWatchNotify.test_aio_notify to fail - Ceph - RADOS

https://trello.com/c/5smPyFbb/1933-wip-yuri4-testing-2024-01-18-1257-pacific-no-needs-qa

Failures, unrelated:
1. https://tracker.ceph.com/issues/54071
2. https://tracker.ceph.com/issues/63577
3. https://tracker.ceph.com/issues/58658
4. https://tracker.ceph.com/issues/58659
5. https://tracker.ceph.com/issues/62225
6. https://tracker.ceph.com/issues/58099
7. https://tracker.ceph.com/issues/63887
8. https://tracker.ceph.com/issues/59335
9. https://tracker.ceph.com/issues/62508
10. https://tracker.ceph.com/issues/57829
11. https://tracker.ceph.com/issues/61921
12. https://tracker.ceph.com/issues/62401
13. https://tracker.ceph.com/issues/53827

Details:
1. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
2. cephadm: docker.io/library/haproxy: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit - Ceph - Orchestrator
3. mds_upgrade_sequence: Error: initializing source docker://prom/alertmanager:v0.20.0 - Ceph - Orchestrator
4. mds_upgrade_sequence: failure when deploying node-exporter - Ceph - Orchestrator
5. pacific upgrade test fails on 'ceph versions | jq -e' command - Ceph - RADOS
6. ObjectStore/StoreTestSpecificAUSize.SyntheticMatrixPreferDeferred/2 fails - Ceph - Bluestore
7. Starting alertmanager fails from missing container - Ceph - Orchestrator
8. Found coredumps on smithi related to sqlite3 - Ceph - Cephsqlite
9. qa: "Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log - Ceph - RADOS
10. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
11. centos 8 builds fail because package ceph-iscsi-3.6-1.el8.noarch.rpm is not signed - Infrastructure
12. ObjectStore/StoreTestSpecificAUSize.SpilloverFixedTest/2 fails from l_bluefs_slow_used_bytes not matching the expected value - Ceph - Bluestore
13. cephadm exited with error code when creating osd: Input/Output error. Faulty NVME? - Ceph - Orchestrator

https://trello.com/c/Puff5ZQG/1931-wip-yuri10-testing-2024-01-17-0759-pacific

https://pulpito.ceph.com/yuriw-2024-01-19_16:17:11-rados-wip-yuri10-testing-2024-01-17-0759-pacific-distro-default-smithi/

Failures, unrelated:
1. https://tracker.ceph.com/issues/62225
2. https://tracker.ceph.com/issues/63066
3. https://tracker.ceph.com/issues/57247

Details:
1. ['7523683', '7523669'] - pacific upgrade failed
2. ['7523678', '7523658'] - rados/objectstore - application not enabled on pool '.mgr'
3. ['7523665'] - [cephadm] Error response from daemon: No such container

https://trello.com/c/E4m6ienf/1925-wip-yuri5-testing-2024-01-11-1300-pacific-old-wip-yuri5-testing-2024-01-10-1125-pacific-old-wip-yuri5-testing-2024-01-09-1616-pa

https://pulpito.ceph.com/?branch=wip-yuri5-testing-2024-01-11-1300-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/64105 -- new tracker
2. https://tracker.ceph.com/issues/63894
3. https://tracker.ceph.com/issues/53723
4. https://tracker.ceph.com/issues/57386
5. https://tracker.ceph.com/issues/54071
6. https://tracker.ceph.com/issues/58674

Details:
1. upgrade test fails on 'ceph versions | jq -e' command - Ceph
2. qa: cephadm failed with an error code 1, alertmanager container not found.
3. Cephadm agent fails to report and causes a health timeout
4. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
5. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>)
6. mds_upgrade_sequence: teuthology.exceptions.MaxWhileTries: reached maximum tries (180) after waiting for 180 seconds - Ceph - CephFS

https://trello.com/c/t8QtpF0R/1927-wip-yuri11-testing-2024-01-10-1124-pacific

https://pulpito.ceph.com/?branch=wip-yuri11-testing-2024-01-10-1124-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/58658
2. https://tracker.ceph.com/issues/62714
3. https://tracker.ceph.com/issues/53693
4. https://tracker.ceph.com/issues/62508
5. https://tracker.ceph.com/issues/63887
6. https://tracker.ceph.com/issues/63748
7. https://tracker.ceph.com/issues/45702
8. https://tracker.ceph.com/issues/63577
9. https://tracker.ceph.com/issues/54071
10. https://tracker.ceph.com/issues/56788
11. https://tracker.ceph.com/issues/46877

Details:
1. mds_upgrade_sequence: Error: initializing source docker://prom/alertmanager:v0.20.0 - Ceph - Orchestrator
2. Bad file descriptor when stopping Ceph iscsi - Ceph - Orchestrator
3. ceph orch upgrade start is getting stuck in gibba cluster - Ceph - Orchestrator
4. qa: "Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log - Ceph
5. Starting alertmanager fails from missing container - Ceph - Orchestrator
6. qa/workunits/post-file.sh: Couldn't create directory - Infrastructure
7. PGLog::read_log_and_missing: ceph_assert(miter == missing.get_items().end() || (miter->second.need == i->version && miter->second.have == eversion_t())) - Ceph - RADOS
8. cephadm: docker.io/library/haproxy: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit - Ceph - Orchestrator
9. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
10. crash: void KernelDevice::_aio_thread(): abort - Ceph - Bluestore
11. mon_clock_skew_check: expected MON_CLOCK_SKEW but got none - Ceph - RADOS

https://trello.com/c/lvJY7B0T/1920-wip-yuri-testing-2024-01-03-0851-pacific

https://pulpito.ceph.com/?branch=wip-yuri-testing-2024-01-03-0851-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/63887
2. https://tracker.ceph.com/issues/63748
3. https://tracker.ceph.com/issues/62225
4. https://tracker.ceph.com/issues/63577
5. https://tracker.ceph.com/issues/58659
6. https://tracker.ceph.com/issues/57386
7. https://tracker.ceph.com/issues/63066
8. https://tracker.ceph.com/issues/61193

Details:
1. Starting alertmanager fails from missing container - Ceph - Orchestrator
2. qa/workunits/post-file.sh: Couldn't create directory - Infrastructure
3. pacific upgrade test fails when upgrading OSDs due to degraded pgs - Ceph - RADOS
4. cephadm: docker.io/library/haproxy: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit - Ceph - Orchestrator
5. mds_upgrade_sequence: failure when deploying node-exporter - Ceph - Orchestrator
6. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
7. rados/objectstore - application not enabled on pool '.mgr' - Ceph - RADOS
8. ObjectStore/StoreTest.SimpleCloneTest/2 times out from an abort in the objectstore log - Ceph - Bluestore

https://trello.com/c/VTd0dfag/1914-wip-yuri7-testing-2023-12-27-1008-pacific-old-wip-yuri7-testing-2023-12-20-0808-pacific

https://pulpito.ceph.com/yuriw-2023-12-27_21:01:10-rados-wip-yuri7-testing-2023-12-27-1008-pacific-distro-default-smithi/

Failures, unrelated:
1. https://tracker.ceph.com/issues/62225
2. https://tracker.ceph.com/issues/63748
3. https://tracker.ceph.com/issues/54071
4. https://tracker.ceph.com/issues/57386
5. https://tracker.ceph.com/issues/63786
6. https://tracker.ceph.com/issues/62482

Details:
1. ['7502422', '7502371', '7502338', '7502351'] - pacific upgrade test fails when upgrading OSDs due to degraded pgs
2. ['7502346', '7502403'] - qa/workunits/post-file.sh: Couldn't create directory
3. ['7502367'] - rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>)
4. ['7502386', '7502334'] - cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
5. ['7502356'] - rados_cls_all: TestCls2PCQueue.MultiProducer hangs
6. ['7502419'] - "cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

https://trello.com/c/tMPfCeZm/1909-wip-yuri5-testing-2023-12-15-0747-pacific-old-wip-yuri5-testing-2023-12-14-1107-pacific

https://pulpito.ceph.com/yuriw-2023-12-26_16:04:49-rados-wip-yuri5-testing-2023-12-15-0747-pacific-distro-default-smithi/

Failures, unrelated:
1. https://tracker.ceph.com/issues/57386
2. https://tracker.ceph.com/issues/63750
3. https://tracker.ceph.com/issues/62225
4. https://tracker.ceph.com/issues/62482
5. https://tracker.ceph.com/issues/63894
6. https://tracker.ceph.com/issues/57247

Details:
1. ['7501355', '7501364', '7501380'] - cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
2. ['7501348', '7501373'] - qa/workunits/post-file.sh: Couldn't create directory: No such file or directory - Infrastructure
3. ['7501361', '7501390'] - pacific upgrade test fails when upgrading OSDs due to degraded pgs
4. ['7501384', '7501353', '7501338'] - "cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
5. ['7501359'] - qa: cephadm failed with an error code 1, alertmanager container not found.
6. ['7501358', '7501388'] - [cephadm] Error response from daemon: No such container

https://trello.com/c/05fJNcz0/1900-wip-yuri6-testing-2023-12-05-0753-pacific

https://pulpito.ceph.com/?branch=wip-yuri6-testing-2023-12-05-0753-pacific

Failures, unrelated:
  1. https://tracker.ceph.com/issues/61732
  2. https://tracker.ceph.com/issues/62482
  3. https://tracker.ceph.com/issues/63750 -- new tracker
  4. https://tracker.ceph.com/issues/59193
  5. https://tracker.ceph.com/issues/63531
  6. https://tracker.ceph.com/issues/54071
  7. https://tracker.ceph.com/issues/63720

Details:
  1. pacific: test_cluster_info fails from "No daemons reported" - Ceph - CephFS
  2. qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" - Ceph - RADOS
  3. qa/workunits/post-file.sh: Couldn't create directory: No such file or directory - Infrastructure
  4. "Failed to fetch package version from https://shaman.ceph.com/api/search ..." - Infrastructure
  5. Error authenticating with smithiXXX.front.sepia.ceph.com: SSHException('No existing session') (No SSH private key found!) - Infrastructure
  6. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
  7. cephadm: Cannot set values for --daemon-types, --services or --hosts when upgrade already in progress. - Ceph - Orchestrator (see the sketch after this list)
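
On item 7 above: `ceph orch upgrade start` refuses the staggered-upgrade filters while another upgrade is in flight. Roughly, under an illustrative image tag:

    ceph orch upgrade start --image quay.io/ceph/ceph:v16.2.15
    # A second start with filters is rejected until the first finishes:
    ceph orch upgrade start --daemon-types mgr
    # -> Cannot set values for --daemon-types, --services or --hosts when
    #    upgrade already in progress.
    ceph orch upgrade status   # inspect; or cancel with: ceph orch upgrade stop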

https://trello.com/c/uQvHx3ln/1895-wip-yuri-testing-2023-11-27-1028-pacific

https://pulpito.ceph.com/?branch=wip-yuri-testing-2023-11-27-1028-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/61732
2. https://tracker.ceph.com/issues/54071
3. https://tracker.ceph.com/issues/62482
4. https://tracker.ceph.com/issues/63720 -- new tracker
5. https://tracker.ceph.com/issues/50222
6. https://tracker.ceph.com/issues/59335
7. https://tracker.ceph.com/issues/46318

Details:
1. pacific: test_cluster_info fails from "No daemons reported" - Ceph - CephFS
2. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
3. qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" - Ceph - RADOS
4. cephadm: Cannot set values for --daemon-types, --services or --hosts when upgrade already in progress. - Ceph - Orchestrator
5. osd: 5.2s0 deep-scrub : stat mismatch - Ceph - RADOS
6. Found coredumps on smithi related to sqlite3 - Ceph - Cephsqlite
7. mon_recovery: quorum_status times out - Ceph - RADOS

https://trello.com/c/H3G55fyc/1888-wip-yuri2-testing-2023-11-13-0820-pacific

https://pulpito.ceph.com/?branch=wip-yuri2-testing-2023-11-13-0820-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/56028
2. https://tracker.ceph.com/issues/49287
3. https://tracker.ceph.com/issues/57386
4. https://tracker.ceph.com/issues/63605 -- new tracker
5. https://tracker.ceph.com/issues/62714
6. https://tracker.ceph.com/issues/61732
7. https://tracker.ceph.com/issues/62482
8. https://tracker.ceph.com/issues/62225
9. https://tracker.ceph.com/issues/54071

Details:
1. thrash_cache_writeback_proxy_none: FAILED ceph_assert(version == old_value.version) in src/test/osd/RadosModel.h - Ceph - RADOS
2. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
3. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
4. Failed to extract uid/gid for path /etc/prometheus - Ceph - Orchestrator
5. Bad file descriptor when stopping Ceph iscsi - Infrastructure
6. pacific: test_cluster_info fails from "No daemons reported" - Ceph - CephFS
7. qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" - Ceph - RADOS
8. pacific upgrade test fails when upgrading OSDs due to degraded pgs - Ceph - RADOS
9. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator

https://trello.com/c/IRMr0OnH/1884-wip-yuri3-testing-2023-11-07-0801-pacific

https://pulpito.ceph.com/yuriw-2023-11-08_16:14:24-rados-wip-yuri3-testing-2023-11-07-0801-pacific-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2023-11-07_19:34:21-rados-wip-yuri3-testing-2023-11-07-0801-pacific-distro-default-smithi/

Failures, unrelated:
  1. https://tracker.ceph.com/issues/62870
  2. https://tracker.ceph.com/issues/63192
  3. https://tracker.ceph.com/issues/62557
  4. https://tracker.ceph.com/issues/57829
  5. https://tracker.ceph.com/issues/58560
  6. https://tracker.ceph.com/issues/59193
  7. https://tracker.ceph.com/issues/54071
  8. https://tracker.ceph.com/issues/61586
  9. https://tracker.ceph.com/issues/49287
  10. https://tracker.ceph.com/issues/62535
  11. https://tracker.ceph.com/issues/62225

Details:
  1. test_nfs task fails due to no orch backend set
  2. Fix the POOL_APP_NOT_ENABLED warning to only be generated after the new pool has no application for some amount of time
  3. rados: Teuthology test failure due to "MDS_CLIENTS_LAGGY" warning
  4. pacific: cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did
  5. test_envlibrados_for_rocksdb.sh failed to subscribe to repo
  6. "Failed to fetch package version from https://shaman.ceph.com/api/search ..."
  7. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>)
  8. fetch_binaries_for_coredumps() attempts to run "which" in the entire command with arguments
  9. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found
  10. cephadm: wait for healthy state times out because cephadm agent is down
  11. pacific upgrade test fails when upgrading OSDs due to degraded pgs

https://trello.com/c/XxBiaEyT/1874-wip-yuri5-testing-2023-10-24-0737-pacific-old-wip-yuri5-testing-2023-10-23-1158-pacific

https://pulpito.ceph.com/?branch=wip-yuri5-testing-2023-10-24-0737-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/61732
2. https://tracker.ceph.com/issues/61193
3. https://tracker.ceph.com/issues/62482
4. https://tracker.ceph.com/issues/58560
5. https://tracker.ceph.com/issues/62225
6. https://tracker.ceph.com/issues/54071
7. https://tracker.ceph.com/issues/63105
8. https://tracker.ceph.com/issues/57386
9. https://tracker.ceph.com/issues/63408

Details:
1. pacific: test_cluster_info fails from "No daemons reported" - Ceph - CephFS
2. ObjectStore/StoreTest.SimpleCloneTest/2 times out from an abort in the objectstore log - Ceph - Bluestore
3. qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)"
4. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
5. pacific upgrade test fails when upgrading OSDs due to degraded pgs - Ceph - RADOS
6. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
7. mds: report clients laggy due to laggy OSDs only after checking any OSD is laggy - Ceph - CephFS
8. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
9. libcephsqlite fails with coredump - Ceph - Cephsqlite

https://trello.com/c/eCCS9GmO/1877-wip-yuri3-testing-2023-10-25-0858-pacific

https://pulpito.ceph.com/yuriw-2023-10-27_15:32:55-rados-wip-yuri3-testing-2023-10-25-0858-pacific-distro-default-smithi/

Failures, unrelated:
1. https://tracker.ceph.com/issues/61732
2. https://tracker.ceph.com/issues/62482
3. https://tracker.ceph.com/issues/57386
4. https://tracker.ceph.com/issues/59193 (Infrastructure Failure)
5. https://tracker.ceph.com/issues/54071
6. https://tracker.ceph.com/issues/63269

Details:
1. test_cluster_info fails from "No daemons reported"
2. "cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
3. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did
4. Failed to fetch package version from ...
5. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>)
6. mds: report clients laggy due to laggy OSDs only after checking any OSD is laggy

Individual testing for https://github.com/ceph/ceph/pull/51045

https://pulpito.ceph.com/yuriw-2023-09-01_19:14:47-rados-wip-batrick-testing-20230831.124848-pacific-distro-default-smithi/

Failures, unrelated:
1. https://tracker.ceph.com/issues/44587
2. https://tracker.ceph.com/issues/62714
3. https://tracker.ceph.com/issues/61732
4. https://tracker.ceph.com/issues/57386
5. https://tracker.ceph.com/issues/62482
6. https://tracker.ceph.com/issues/54071
7. https://tracker.ceph.com/issues/58560
8. https://tracker.ceph.com/issues/53789

Details:
1. failed to write <pid> to cgroup.procs: - Ceph - Orchestrator
2. Bad file descriptor when stopping Ceph iscsi - Infrastructure
3. pacific: test_cluster_info fails from "No daemons reported" - Ceph - CephFS
4. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
5. qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" - Ceph - RADOS
6. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
7. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
8. CommandFailedError (rados/test_python.sh): "RADOS object not found" causes test_rados.TestWatchNotify.test_aio_notify to fail - Ceph - RADOS

https://trello.com/c/vqKtv866/1831-wip-yuri5-testing-2023-08-25-1127-pacific

https://pulpito.ceph.com/?branch=wip-yuri5-testing-2023-08-25-1127-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/61732
2. https://tracker.ceph.com/issues/59192
3. https://tracker.ceph.com/issues/49138
4. https://tracker.ceph.com/issues/59193
5. https://tracker.ceph.com/issues/54071

Details:
1. pacific: test_cluster_info fails from "No daemons reported" - Ceph - CephFS
2. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
3. rados/dashboard: Teuthology test failure due to "MDS_CLIENTS_LAGGY" warning - Ceph - RADOS
4. "Failed to fetch package version from https://shaman.ceph.com/api/search ..." - Infrastructure
5. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>)

Pacific v16.2.14 https://tracker.ceph.com/issues/62527#note-1

https://pulpito.ceph.com/?sha1=21b2d401852937440c3da8ce2b19224181760caa

Failures, unrelated:
1. https://tracker.ceph.com/issues/61732
2. https://tracker.ceph.com/issues/54071
3. https://tracker.ceph.com/issues/49138
4. https://tracker.ceph.com/issues/53827
5. https://tracker.ceph.com/issues/49287
6. https://tracker.ceph.com/issues/62557 -- new tracker
7. https://tracker.ceph.com/issues/59192
8. https://tracker.ceph.com/issues/62559 -- new tracker
9. https://tracker.ceph.com/issues/59193
10. https://tracker.ceph.com/issues/49727
11. https://tracker.ceph.com/issues/61193
12. https://tracker.ceph.com/issues/57386
13. https://tracker.ceph.com/issues/58946

Details:
1. pacific: test_cluster_info fails from "No daemons reported" - Ceph - CephFS
2. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>)
3. blk/kernel/KernelDevice.cc: void KernelDevice::_aio_thread() Unexpected IO error - Ceph - Bluestore
4. cephadm exited with error code when creating osd: Input/Output error. Faulty NVME? - Ceph - Orchestrator
5. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
6. rados/dashboard: Teuthology test failure due to "MDS_CLIENTS_LAGGY" warning - Ceph - RADOS
7. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
8. rados/cephadm/dashboard: test times out due to host stuck in maintenance mode - Ceph - Orchestrator
9. "Failed to fetch package version from https://shaman.ceph.com/api/search ..." - Infrastructure
10. lazy_omap_stats_test: "ceph osd deep-scrub all" hangs - Ceph - RADOS
11. ObjectStore/StoreTest.SimpleCloneTest/2 times out from an abort in the objectstore log - Ceph - Bluestore
12. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
13. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator

https://trello.com/c/4TiiTR0k/1821-wip-yuri7-testing-2023-08-16-1309-pacific-old-wip-yuri7-testing-2023-08-16-0933-pacific-old-wip-yuri7-testing-2023-08-15-0741-pa

https://pulpito.ceph.com/?branch=wip-yuri7-testing-2023-08-16-1309-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/61732
2. https://tracker.ceph.com/issues/59192
3. https://tracker.ceph.com/issues/59193
4. https://tracker.ceph.com/issues/49287
5. https://tracker.ceph.com/issues/57386
6. https://tracker.ceph.com/issues/54071
7. https://tracker.ceph.com/issues/55809
8. https://tracker.ceph.com/issues/53827

Details:
1. pacific: test_cluster_info fails from "No daemons reported" - Ceph - CephFS
2. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
3. "Failed to fetch package version from https://shaman.ceph.com/api/search ..." - Infrastructure
4. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
5. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did
6. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>)
7. "Leak_IndirectlyLost" valgrind report on mon.c
8. cephadm exited with error code when creating osd: Input/Output error. Faulty NVME?

https://trello.com/c/qRJogcXu/1827-wip-yuri2-testing-2023-08-16-1142-pacific

https://pulpito.ceph.com/?branch=wip-yuri2-testing-2023-08-16-1142-pacific

Failures, unrelated:

1. https://tracker.ceph.com/issues/61732
2. https://tracker.ceph.com/issues/61602
3. https://tracker.ceph.com/issues/58946
4. https://tracker.ceph.com/issues/53789
5. https://tracker.ceph.com/issues/49287
6. https://tracker.ceph.com/issues/38577
7. https://tracker.ceph.com/issues/54071
8. https://tracker.ceph.com/issues/61907
9. https://tracker.ceph.com/issues/55443
10. https://tracker.ceph.com/issues/54372
11. https://tracker.ceph.com/issues/53246
12. https://tracker.ceph.com/issues/51282
13. https://tracker.ceph.com/issues/59124

Details:
1. pacific: test_cluster_info fails from "No daemons reported"
2. tasks/rados_cls_all: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
3. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
4. CommandFailedError (rados/test_python.sh): "RADOS object not found" causes test_rados.TestWatchNotify.test_aio_notify to fail
5. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found
6. FAILED ceph_assert(!did_bind)
7. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>)
8. api tests fail from "MDS_CLIENTS_LAGGY" warning
9. "SELinux denials found.." in rados run
10. No module named 'tasks'
11. rhel 8.4 and centos stream unable to install cephfs-java
12. pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
13. "Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" during quincy p2p upgrade test

https://trello.com/c/ONHeA3yz/1823-wip-yuri8-testing-2023-08-11-0834-pacific

https://pulpito-ng.ceph.com/runs/?branch=wip-yuri8-testing-2023-08-11-0834-pacific

Failures, related:
1. https://tracker.ceph.com/issues/62450 -- new tracker
Failures, unrelated:
2. https://tracker.ceph.com/issues/61602
3. https://tracker.ceph.com/issues/57386
4. https://tracker.ceph.com/issues/58560
5. https://tracker.ceph.com/issues/59192
6. https://tracker.ceph.com/issues/59193
7. https://tracker.ceph.com/issues/54071
8. https://tracker.ceph.com/issues/50371
9. https://tracker.ceph.com/issues/62456 -- new tracker
10. https://tracker.ceph.com/issues/55444

Details:
1. pacific: task/test_nfs: TypeError: 'type' object is not subscriptable - Ceph - CephFS
2. tasks/rados_cls_all: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
3. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
4. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
5. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
6. "Failed to fetch package version from https://shaman.ceph.com/api/search ..." - Infrastructure
7. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
8. Segmentation fault (core dumped) ceph_test_rados_api_watch_notify_pp - Ceph - RADOS
9. Could not get lock /var/cache/apt/archives/lock - open (11: Resource temporarily unavailable) - Infrastructure
10. test_cls_rbd.sh: multiple TestClsRbd failures during upgrade test - Ceph - RBD

https://trello.com/c/Tw3vOIOj/1819-wip-yuri2-testing-2023-08-08-0755-pacific

https://pulpito-ng.ceph.com/runs/?branch=wip-yuri2-testing-2023-08-08-0755-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/61732
2. https://tracker.ceph.com/issues/59192
3. https://tracker.ceph.com/issues/58946
4. https://tracker.ceph.com/issues/53768
5. https://tracker.ceph.com/issues/58560
6. https://tracker.ceph.com/issues/59193
7. https://tracker.ceph.com/issues/49287

Details:
1. pacific: test_cluster_info fails from "No daemons reported" - Ceph - CephFS
2. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
3. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
4. timed out waiting for admin_socket to appear after osd.2 restart in thrasher/defaults workload/small-objects - Ceph - RADOS
5. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
6. "Failed to fetch package version from https://shaman.ceph.com/api/search ..." - Infrastructure
7. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator

https://trello.com/c/8HALLv9T/1813-wip-yuri6-testing-2023-08-03-0807-pacific-old-wip-yuri6-testing-2023-07-24-0819-pacific2-old-wip-yuri6-testing-2023-07-24-0819-p

https://pulpito-ng.ceph.com/runs/?branch=wip-yuri6-testing-2023-08-03-0807-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/57386
2. https://tracker.ceph.com/issues/59192
3. https://tracker.ceph.com/issues/49287
4. https://tracker.ceph.com/issues/61732
5. https://tracker.ceph.com/issues/61193
6. https://tracker.ceph.com/issues/62444
7. https://tracker.ceph.com/issues/59193
8. https://tracker.ceph.com/issues/58560
9. https://tracker.ceph.com/issues/54071

Details:
1. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
2. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
3. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
4. pacific: test_cluster_info fails from "No daemons reported" - Ceph - CephFS
5. ObjectStore/StoreTest.SimpleCloneTest/2 times out from an abort in the objectstore log - Ceph - Bluestore
6. RuntimeError: Failed command: /bin/podman version --format {{.Client.Version}}: 4.5.1 - Ceph - Orchestrator
7. "Failed to fetch package version from https://shaman.ceph.com/api/search ..." - Infrastructure
8. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
9. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator

https://trello.com/c/QIYioTlt/1818-wip-yuri3-testing-2023-08-01-0825-pacific

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2023-08-01-0825-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/61732
2. https://tracker.ceph.com/issues/59192
3. https://tracker.ceph.com/issues/57386
4. https://tracker.ceph.com/issues/58560
5. https://tracker.ceph.com/issues/53575
6. https://tracker.ceph.com/issues/54071
7. https://tracker.ceph.com/issues/62403
8. https://tracker.ceph.com/issues/50371

Details:
1. test_cluster_info fails from "No daemons reported" - Ceph - CephFS
2. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
3. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
4. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
5. Valgrind reports memory "Leak_PossiblyLost" errors concerning lib64 - Ceph - RADOS
6. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
7. cephadm/osds: test tries and fails to pull luminous packages from chacra - Ceph - Orchestrator
8. Segmentation fault (core dumped) ceph_test_rados_api_watch_notify_pp - Ceph - RADOS

https://trello.com/c/8HALLv9T/1813-wip-yuri6-testing-2023-07-24-0819-pacific2-old-wip-yuri6-testing-2023-07-24-0819-pacific

https://pulpito.ceph.com/yuriw-2023-07-26_15:54:22-rados-wip-yuri6-testing-2023-07-24-0819-pacific-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2023-07-27_22:37:12-rados-wip-yuri6-testing-2023-07-24-0819-pacific-distro-default-smithi/

Failures, unrelated:
1. https://tracker.ceph.com/issues/61732
2. https://tracker.ceph.com/issues/59192
3. https://tracker.ceph.com/issues/58946
4. https://tracker.ceph.com/issues/62235 -- new tracker
5. https://tracker.ceph.com/issues/53827
6. https://tracker.ceph.com/issues/59193
7. https://tracker.ceph.com/issues/54071
8. https://tracker.ceph.com/issues/49287
9. https://tracker.ceph.com/issues/49961

Details:
1. test_cluster_info fails from "No daemons reported" - Ceph - CephFS
2. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
3. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
4. Pacific: Assert failure: test_ceph_osd_pool_create_utf8
5. cephadm exited with error code when creating osd: Input/Output error. Faulty NVME?
6. "Failed to fetch package version from https://shaman.ceph.com/api/search ..."
7. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>)
8. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found
9. scrub/osd-recovery-scrub.sh: TEST_recovery_scrub_1 failed

Failures, Possibly Related:
  1. https://pulpito.ceph.com/yuriw-2023-07-27_22:37:12-rados-wip-yuri6-testing-2023-07-24-0819-pacific-distro-default-smithi/7354866
    The test failed due to the MDS_CLIENTS_LAGGY warning. Should this test add the warning to the ignore list (a sketch of such
    an entry follows this list)? If so, the related PR: https://github.com/ceph/ceph/pull/52270 may need additional changes.
    This needs to be checked by the PR's author.
  2. https://pulpito.ceph.com/yuriw-2023-07-27_22:37:12-rados-wip-yuri6-testing-2023-07-24-0819-pacific-distro-default-smithi/7354890
    The following ObjectStore tests failed: ObjectStore/StoreTestSpecificAUSize.SpilloverFixedTest and
    ObjectStore/StoreTestSpecificAUSize.SpilloverFixed2Test. The failures may be related to either of the included PRs:
    https://github.com/ceph/ceph/pull/51418 and/or https://github.com/ceph/ceph/pull/51773 and must be looked into before
    merging either of them.
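
If the MDS_CLIENTS_LAGGY warning in item 1 does get whitelisted, the change would be an ignore entry in the suite yaml along these lines (placement and exact pattern are assumptions; compare against PR 52270 above):

    overrides:
      ceph:
        log-ignorelist:
          - MDS_CLIENTS_LAGGY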

https://trello.com/c/VwH0NMiq/1807-wip-yuri11-testing-2023-07-18-0927-pacific

https://pulpito.ceph.com/?branch=wip-yuri11-testing-2023-07-18-0927-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/61921
2. https://tracker.ceph.com/issues/49287
3. https://tracker.ceph.com/issues/50222
4. https://tracker.ceph.com/issues/58946
5. https://tracker.ceph.com/issues/61732
6. https://tracker.ceph.com/issues/59192
7. https://tracker.ceph.com/issues/62225
8. https://tracker.ceph.com/issues/54071
9. https://tracker.ceph.com/issues/59193

Details:
1. centos 8 builds fail because package ceph-iscsi-3.6-1.el8.noarch.rpm is not signed - Ceph - Infrastructure
2. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
3. osd: 5.2s0 deep-scrub : stat mismatch - Ceph - RADOS
4. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
5. test_cluster_info fails from "No daemons reported" - Ceph - CephFS
6. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
7. pacific upgrade test fails when upgrading OSDs due to degraded pgs - Ceph - RADOS
8. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
9. "Failed to fetch package version from https://shaman.ceph.com/api/search ..."

https://trello.com/c/qQnRTrLO/1792-wip-yuri8-testing-2023-06-22-1309-pacific-old-wip-yuri8-testing-2023-06-22-1004-pacific-old-wip-yuri8-testing-2023-06-22-0834-pa

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri8-testing-2023-06-22-1309-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/61732
2. https://tracker.ceph.com/issues/58946
3. https://tracker.ceph.com/issues/50042
4. https://tracker.ceph.com/issues/57386
5. https://tracker.ceph.com/issues/58560
6. https://tracker.ceph.com/issues/59192
7. https://tracker.ceph.com/issues/54071
8. https://tracker.ceph.com/issues/59193

Details:
1. test_cluster_info fails from "No daemons reported" - Ceph - CephFS
2. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
3. rados/test.sh: api_watch_notify failures - Ceph - RADOS
4. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
5. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
6. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
7. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
8. "Failed to fetch package version from https://shaman.ceph.com/api/search ..." - Infrastructure

https://trello.com/c/MTsDz3Xr/1785-wip-yuri6-testing-2023-06-19-0853-pacific-old-wip-yuri6-testing-2023-06-14-0754-pacific

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri6-testing-2023-06-14-0754-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/61732
2. https://tracker.ceph.com/issues/49287
3. https://tracker.ceph.com/issues/59192
4. https://tracker.ceph.com/issues/54071
5. https://tracker.ceph.com/issues/59193
6. https://tracker.ceph.com/issues/57386

Details:
1. test_cluster_info fails from "No daemons reported" - Ceph - CephFS
2. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
3. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
4. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
5. "Failed to fetch package version from https://shaman.ceph.com/api/search ..." - Infrastructure
6. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard

https://trello.com/c/Zzr2fzOP/1742-wip-yuri5-testing-2023-05-09-1324-pacific

https://pulpito.ceph.com/?branch=wip-yuri5-testing-2023-05-09-1324-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/58585
2. https://tracker.ceph.com/issues/59192
3. https://tracker.ceph.com/issues/59530
4. https://tracker.ceph.com/issues/58560
5. https://tracker.ceph.com/issues/59193
6. https://tracker.ceph.com/issues/54071
7. https://tracker.ceph.com/issues/59529
8. https://tracker.ceph.com/issues/57386
9. https://tracker.ceph.com/issues/49287
10. https://tracker.ceph.com/issues/55809
11. https://tracker.ceph.com/issues/50222

Details:
1. rook: failed to pull kubelet image - Ceph - Orchestrator
2. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
3. mgr-nfs-upgrade: mds.foofs has 0/2 - Ceph - CephFS
4. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
5. "Failed to fetch package version from https://shaman.ceph.com/api/search ..." - Infrastructure
6. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
7. mds_upgrade_sequence: overall HEALTH_ERR 1 filesystem with deprecated feature inline_data; 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds - Ceph - CephFS
8. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
9. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
10. "Leak_IndirectlyLost" valgrind report on mon.c - Ceph - RADOS
11. osd: 5.2s0 deep-scrub : stat mismatch - Ceph - RADOS

https://trello.com/c/RFvB8Ugn/1741-wip-yuri11-testing-2023-04-25-1605-pacific

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri11-testing-2023-04-25-1605-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/59529
2. https://tracker.ceph.com/issues/49888
3. https://tracker.ceph.com/issues/59604
4. https://tracker.ceph.com/issues/57303
5. https://tracker.ceph.com/issues/55347
6. https://tracker.ceph.com/issues/59192
7. https://tracker.ceph.com/issues/49287
8. https://tracker.ceph.com/issues/54071
9. https://tracker.ceph.com/issues/57386
10. https://tracker.ceph.com/issues/50222
11. https://tracker.ceph.com/issues/58585
12. https://tracker.ceph.com/issues/61193

Details:
1. mds_upgrade_sequence: overall HEALTH_ERR 1 filesystem with deprecated feature inline_data; 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds - Ceph - CephFS
2. rados/singleton: radosbench.py: teuthology.exceptions.MaxWhileTries: reached maximum tries (3650) after waiting for 21900 seconds - Ceph - RADOS
3. upgrade: unknown ceph version causes upgrade to get stuck - Ceph - Orchestrator
4. rados/cephadm: Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=b34ca7d1c2becd6090874ccda56ef4cd8dc64bf7 - Ceph - Orchestrator
5. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
6. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
7. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
8. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
9. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
10. osd: 5.2s0 deep-scrub : stat mismatch - Ceph - RADOS
11. rook: failed to pull kubelet image - Ceph - Orchestrator
12. ObjectStore/StoreTest.SimpleCloneTest/2 times out from an abort in the objectstore log - Ceph - Bluestore

Pacific RC v16.2.13

https://pulpito.ceph.com/yuriw-2023-05-06_14:41:44-rados-pacific-release-distro-default-smithi
https://pulpito.ceph.com/yuriw-2023-05-07_00:42:29-rados-pacific-release-distro-default-smithi
https://pulpito.ceph.com/yuriw-2023-05-07_14:36:37-rados-pacific-release-distro-default-smithi

Failures:
1. https://tracker.ceph.com/issues/48965
2. https://tracker.ceph.com/issues/57386
3. https://tracker.ceph.com/issues/51282
4. https://tracker.ceph.com/issues/59192
5. https://tracker.ceph.com/issues/59193
6. https://tracker.ceph.com/issues/59678
7. https://tracker.ceph.com/issues/58893
8. https://tracker.ceph.com/issues/58585
9. https://tracker.ceph.com/issues/59530
10. https://tracker.ceph.com/issues/59529
11. https://tracker.ceph.com/issues/50371
12. https://tracker.ceph.com/issues/54071
13. https://tracker.ceph.com/issues/49287

Details:
1. qa/standalone/osd/osd-force-create-pg.sh: TEST_reuse_id: return 1 - Ceph - RADOS
2. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
3. pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings - Ceph - Mgr
4. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
5. "Failed to fetch package version from https://shaman.ceph.com/api/search ..." - Infrastructure
6. rados/test_envlibrados_for_rocksdb.sh: Error: Unable to find a match: snappy-devel - Ceph - RADOS
7. test_map_discontinuity: AssertionError: wait_for_clean: failed before timeout expired - Ceph - RADOS
8. rook: failed to pull kubelet image - Ceph - Orchestrator
9. mgr-nfs-upgrade: mds.foofs has 0/2 - Ceph - CephFS
10. mds_upgrade_sequence: overall HEALTH_ERR 1 filesystem with deprecated feature inline_data; 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds - Ceph - CephFS
11. Segmentation fault (core dumped) ceph_test_rados_api_watch_notify_pp - Ceph - RADOS
12. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
13. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator

https://trello.com/c/EHxaPZSG/1737-wip-yuri5-testing-2023-04-25-0837-pacific-old-wip-yuri5-testing-2023-04-20-0810-pacific-old-wip-yuri5-testing-2023-04-19-0725-pa

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri5-testing-2023-04-25-0837-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/59192
2. https://tracker.ceph.com/issues/55347
3. https://tracker.ceph.com/issues/54071
4. https://tracker.ceph.com/issues/57386
5. https://tracker.ceph.com/issues/59529
6. https://tracker.ceph.com/issues/59530
7. https://tracker.ceph.com/issues/57255

Details:
1. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
2. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
3. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
4. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
5. mds_upgrade_sequence: overall HEALTH_ERR 1 filesystem with deprecated feature inline_data; 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds - Ceph - CephFS
6. mgr-nfs-upgrade: mds.foofs has 0/2 - Ceph - CephFS
7. rados/cephadm/mds_upgrade_sequence, pacific: cephadm [ERR] Upgrade: Paused due to UPGRADE_NO_STANDBY_MGR: Upgrade: Need standby mgr daemon - Ceph - Orchestrator

https://trello.com/c/xbDVq4DL/1730-wip-yuri3-testing-2023-04-04-0833-pacific

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2023-04-04-0833-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/49287
2. https://tracker.ceph.com/issues/53827
3. https://tracker.ceph.com/issues/55347
4. https://tracker.ceph.com/issues/57386
5. https://tracker.ceph.com/issues/54071
6. https://tracker.ceph.com/issues/58585
7. https://tracker.ceph.com/issues/59529
8. https://tracker.ceph.com/issues/59530
9. https://tracker.ceph.com/issues/61193 -- new tracker

Details:
1. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
2. cephadm exited with error code when creating osd: Input/Output error. Faulty NVME? - Ceph - Infrastructure - Sepia
3. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
4. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
5. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
6. rook: failed to pull kubelet image - Ceph - Orchestrator
7. mds_upgrade_sequence: overall HEALTH_ERR 1 filesystem with deprecated feature inline_data; 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds - Ceph - CephFS
8. mgr-nfs-upgrade: mds.foofs has 0/2 - Ceph - CephFS
9. ObjectStore/StoreTest.SimpleCloneTest/2 times out from an abort in the objectstore log - Ceph - Bluestore

https://trello.com/c/QGQpEHQL/1710-wip-yuri6-testing-2023-03-12-0918-pacific-old-wip-yuri6-testing-2023-03-10-0853-pacific-old-wip-yuri6-testing-2023-03-09-1544-pa

https://pulpito.ceph.com/?branch=wip-yuri6-testing-2023-03-12-0918-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/59192
2. https://tracker.ceph.com/issues/58585
3. https://tracker.ceph.com/issues/59127
4. https://tracker.ceph.com/issues/58560
5. https://tracker.ceph.com/issues/54071
6. https://tracker.ceph.com/issues/57386
7. https://tracker.ceph.com/issues/49525
8. https://tracker.ceph.com/issues/57255

Details:
1. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
2. rook: failed to pull kubelet image - Ceph - Orchestrator
3. Jobs that normally complete much sooner last almost 12 hours
4. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
5. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
6. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
7. found snap mapper error on pg 3.2s1 oid 3:4abe9991:::smithi10121515-14:e4 snaps missing in mapper, should be: dc was r -2...repaired - Ceph - RADOS
8. rados/cephadm/mds_upgrade_sequence, pacific : cephadm [ERR] Upgrade: Paused due to UPGRADE_NO_STANDBY_MGR: Upgrade: Need standby mgr daemon - Ceph - Orchestrator

https://trello.com/c/Ra98Hszm/1682-wip-yuri4-testing-2023-02-03-1341-pacific

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2023-02-03-1341-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/58585
2. https://tracker.ceph.com/issues/54071
3. https://tracker.ceph.com/issues/49287
4. https://tracker.ceph.com/issues/58146
5. https://tracker.ceph.com/issues/57386
6. https://tracker.ceph.com/issues/58222
7. https://tracker.ceph.com/issues/58658 -- new tracker; unrelated to PRs in this run
8. https://tracker.ceph.com/issues/58659 -- new tracker; unrelated to PRs in this run

Details:
1. rook: failed to pull kubelet image - Ceph - Orchestrator
2. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
3. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
4. test_cephadm.sh: Error: Error initializing source docker://quay.ceph.io/ceph-ci/ceph:master - Ceph - Orchestrator
5. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
6. `git --archive remote` Operation not supported by protocol - Ceph - Orchestrator
7. mds_upgrade_sequence: Error: initializing source docker://prom/alertmanager:v0.20.0 - Ceph - Orchestrator
8. mds_upgrade_sequence: failure when deploying node-exporter - Ceph - Orchestrator

Pacific v16.2.11

http://pulpito.front.sepia.ceph.com/yuriw-2023-01-13_20:42:41-rados-pacific_16.2.11_RC6.6-distro-default-smithi/

Failures:
1. https://tracker.ceph.com/issues/58258
2. https://tracker.ceph.com/issues/58146
3. https://tracker.ceph.com/issues/58458
4. https://tracker.ceph.com/issues/57303
5. https://tracker.ceph.com/issues/54071

Details:
1. rook: kubelet fails from connection refused - Ceph - Orchestrator
2. test_cephadm.sh: Error: Error initializing source docker://quay.ceph.io/ceph-ci/ceph:master - Ceph - Orchestrator
3. qa/workunits/post-file.sh: : Permission denied - Ceph
4. rados/cephadm: Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=b34ca7d1c2becd6090874ccda56ef4cd8dc64bf7 - Ceph - Orchestrator (see the query sketch after this list)
5. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
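
As an aside on item 4: below is a minimal sketch of how a harness can query the shaman API for a ready build. It assumes the endpoint returns a JSON list of build records with fields such as "sha1" and "status"; the SHAMAN_URL constant and the error message are illustrative, not teuthology source. An empty result is what surfaces as the "Failed to fetch package version" failure.

    # Hypothetical sketch: ask shaman whether a ready build exists for a
    # given sha1/distro combination. Not teuthology code; the field names
    # are assumptions based on the query URL quoted above.
    import json
    import urllib.request

    SHAMAN_URL = (
        "https://shaman.ceph.com/api/search/?status=ready&project=ceph"
        "&flavor=default&distros=ubuntu%2F22.04%2Fx86_64"
        "&sha1=b34ca7d1c2becd6090874ccda56ef4cd8dc64bf7"
    )

    with urllib.request.urlopen(SHAMAN_URL) as resp:
        builds = json.load(resp)

    if not builds:
        # The empty-result case is what the failing jobs report.
        raise RuntimeError("no ready build found for this sha1/distro combination")
    print(builds[0].get("sha1"), builds[0].get("status"))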

https://trello.com/c/AuBoWoSc/1664-wip-yuri2-testing-2022-12-07-0821-pacific

https://pulpito.ceph.com/?branch=wip-yuri2-testing-2022-12-07-0821-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/57311
2. https://tracker.ceph.com/issues/58140
3. https://tracker.ceph.com/issues/58046
4. https://tracker.ceph.com/issues/58097
5. https://tracker.ceph.com/issues/54071
6. https://tracker.ceph.com/issues/53501
7. https://tracker.ceph.com/issues/56770
8. https://tracker.ceph.com/issues/54992
9. https://tracker.ceph.com/issues/58232 - new tracker created; unrelated to PRs in this batch
10. https://tracker.ceph.com/issues/56028

Details:
1. rook: ensure CRDs are installed first - Ceph - Orchestrator
2. quay.ceph.io/ceph-ci/ceph: manifest unknown - Ceph - Orchestrator
3. qa/workunits/rados/test_librados_build.sh: specify redirect in curl command - Ceph - RADOS
4. qa/workunits/post-file.sh: kex_exchange_identification: read: Connection reset by peer - Infrastructure
5. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
6. Exception when running 'rook' task. - Ceph - Orchestrator
7. crash: void OSDShard::register_and_wake_split_child(PG*): assert(p != pg_slots.end()) - Ceph - RADOS
8. pacific: rados/dashboard: tasks/dashboard: cannot stat '/etc/containers/registries.conf': No such file or directory - Ceph - Mgr - Dashboard
9. Could not get lock /var/lib/apt/lists/lock - open (11: Resource temporarily unavailable) - Infrastructure
10. thrash_cache_writeback_proxy_none: FAILED ceph_assert(version == old_value.version) in src/test/osd/RadosModel.h - Ceph - RADOS

https://trello.com/c/EXEzImAi/1660-wip-yuri8-testing-2022-12-05-1031-pacific-old-wip-yuri8-testing-2022-12-01-0905-pacific

https://pulpito.ceph.com/?branch=wip-yuri8-testing-2022-12-01-0905-pacific
https://pulpito.ceph.com/?branch=wip-yuri8-testing-2022-12-05-1031-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/53789
2. https://tracker.ceph.com/issues/53501
3. https://tracker.ceph.com/issues/58140
4. https://tracker.ceph.com/issues/58222 -- New tracker; not related to PRs in this batch
5. https://tracker.ceph.com/issues/57311
6. https://tracker.ceph.com/issues/58097
7. https://tracker.ceph.com/issues/58223 -- New tracker; not related to PRs in this batch
8. https://tracker.ceph.com/issues/48896
9. https://tracker.ceph.com/issues/58224 -- New tracker; not related to PRs in this batch
10. https://tracker.ceph.com/issues/54992
11. https://tracker.ceph.com/issues/57386
12. https://tracker.ceph.com/issues/58225

Details:
1. CommandFailedError (rados/test_python.sh): "RADOS object not found" causes test_rados.TestWatchNotify.test_aio_notify to fail - Ceph - RADOS
2. Exception when running 'rook' task. - Ceph - Orchestrator
3. quay.ceph.io/ceph-ci/ceph: manifest unknown - Ceph - Orchestrator
4. `git --archive remote` Operation not supported by protocol - Ceph - Orchestrator
5. rook: ensure CRDs are installed first - Ceph - Orchestrator
6. qa/workunits/post-file.sh: kex_exchange_identification: read: Connection reset by peer - Infrastructure
7. failure on `sudo fuser -v /var/lib/dpkg/lock-frontend` - Infrastructure
8. osd/OSDMap.cc: FAILED ceph_assert(osd_weight.count(i.first)) - Ceph - RADOS
9. cephadm/test_repos.sh: urllib.error.HTTPError: HTTP Error 504: Gateway Timeout - Ceph - Orchestrator
10. pacific: rados/dashboard: tasks/dashboard: cannot stat '/etc/containers/registries.conf': No such file or directory - Ceph - Mgr - Dashboard
11. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
12. ObjectStore/StoreTestSpecificAUSize.SyntheticMatrixCsumVsCompression/2 is killed before completing - Ceph - Bluestore

https://trello.com/c/zahAzjLl/1652-wip-yuri10-testing-2022-10-19-0810-old-wip-yuri10-testing-2022-10-18-1159

http://pulpito.front.sepia.ceph.com/yuriw-2022-10-21_15:23:05-rados-wip-yuri10-testing-2022-10-19-0810-distro-default-smithi/

Failures, unrelated:
1. https://tracker.ceph.com/issues/52321
2. https://tracker.ceph.com/issues/57546
3. https://tracker.ceph.com/issues/52129
4. https://tracker.ceph.com/issues/57754
5. https://tracker.ceph.com/issues/57311
6. https://tracker.ceph.com/issues/57755

Details:
1. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
2. rados/thrash-erasure-code: wait_for_recovery timeout due to "active+clean+remapped+laggy" pgs - Ceph - RADOS
3. LibRadosWatchNotify.AioWatchDelete failed - Ceph - RADOS
4. test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Infrastructure
5. rook: ensure CRDs are installed first - Ceph - Orchestrator
6. task/test_orch_cli: test_cephfs_mirror times out - Ceph - Orchestrator

https://trello.com/c/3N3eZexm/1646-wip-yuri5-testing-2022-10-10-0837-pacific

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri5-testing-2022-10-10-0837-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/57386
2. https://tracker.ceph.com/issues/57311
3. https://tracker.ceph.com/issues/54372
4. https://tracker.ceph.com/issues/57865 -- new Tracker; infrastructure-related
5. https://tracker.ceph.com/issues/54071

Details:
1. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
2. rook: ensure CRDs are installed first - Ceph - Orchestrator
3. No module named 'tasks' - Infrastructure
4. cephadm/smoke-roleless: socket connection refused - Infrastructure
5. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator

https://trello.com/c/p6XkxixF/1645-wip-yuri4-testing-2022-10-05-0917-pacific

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2022-10-05-0917-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/57386
2. https://tracker.ceph.com/issues/57311
3. https://tracker.ceph.com/issues/53501
4. https://tracker.ceph.com/issues/57900 -- new Tracker; unrelated to PRs in this run
5. https://tracker.ceph.com/issues/43584 -- pending Pacific backport

Details:
1. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
2. rook: ensure CRDs are installed first - Ceph - Orchestrator
3. Exception when running 'rook' task. - Ceph - Orchestrator
4. mon/crush_ops.sh: mons out of quorum - Ceph - RADOS
5. MON_DOWN during mon_join process - Ceph - RADOS

https://trello.com/c/e5ZdRi5Y/1635-wip-yuri5-testing-2022-09-20-1347-pacific-old-wip-yuri5-testing-2022-09-19-1007-pacific

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri5-testing-2022-09-19-1007-pacific
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri5-testing-2022-09-20-1347-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/57268
2. https://tracker.ceph.com/issues/57255
3. https://tracker.ceph.com/issues/57689 -- new Tracker; unrelated to core PRs in this run
4. https://tracker.ceph.com/issues/49287
5. https://tracker.ceph.com/issues/57303
6. https://tracker.ceph.com/issues/54992

Details:
1. rook: The CustomResourceDefinition "installations.operator.tigera.io" is invalid - Ceph - Orchestrator
2. rados/cephadm/mds_upgrade_sequence, pacific : cephadm [ERR] Upgrade: Paused due to UPGRADE_NO_STANDBY_MGR: Upgrade: Need standby mgr daemon - Ceph - Orchestrator
3. cephadm/smoke-roleless: RuntimeError: dictionary changed size during iteration - Ceph - Orchestrator
4. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
5. rados/cephadm: Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=b34ca7d1c2becd6090874ccda56ef4cd8dc64bf7 - Ceph - Orchestrator
6. pacific: rados/dashboard: tasks/dashboard: cannot stat '/etc/containers/registries.conf': No such file or directory - Ceph - Mgr - Dashboard

https://trello.com/c/HEDIYzsj/1627-wip-yuri2-testing-2022-09-06-1007-pacific

https://pulpito.ceph.com/?branch=wip-yuri2-testing-2022-09-06-1007-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/54071
2. https://tracker.ceph.com/issues/57386
3. https://tracker.ceph.com/issues/53827
4. https://tracker.ceph.com/issues/49287
5. https://tracker.ceph.com/issues/57269
6. https://tracker.ceph.com/issues/56573
7. https://tracker.ceph.com/issues/57628 -- new Tracker; seems unrelated to the PRs tested in this run

Details:
1. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
2. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
3. cephadm exited with error code when creating osd: Input/Output error. Faulty NVME? - Infrastructure - Sepia
4. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
5. rook: unable to read URL "https://docs.projectcalico.org/manifests/tigera-operator.yaml" - Ceph - Orchestrator
6. test_cephadm.sh: KeyError: 'TYPE' - Ceph - Orchestrator
7. osd:PeeringState.cc: FAILED ceph_assert(info.history.same_interval_since != 0) - Ceph - RADOS

https://trello.com/c/LdsHbpT6/1628-wip-yuri5-testing-2022-09-09-1109-pacific-old-wip-yuri5-testing-2022-09-06-1334-pacific

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri5-testing-2022-09-06-1334-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/57269
2. https://tracker.ceph.com/issues/54992
3. https://tracker.ceph.com/issues/54071
4. https://tracker.ceph.com/issues/57255
5. https://tracker.ceph.com/issues/57482 -- new Tracker; unrelated to the PR in this run

Details:
1. rook: unable to read URL "https://docs.projectcalico.org/manifests/tigera-operator.yaml" - Ceph - Orchestrator
2. cannot stat '/etc/containers/registries.conf': No such file or directory - Ceph - Orchestrator
3. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
4. rados/cephadm/mds_upgrade_sequence, pacific : cephadm [ERR] Upgrade: Paused due to UPGRADE_NO_STANDBY_MGR: Upgrade: Need standby mgr daemon - Ceph - Orchestrator
5. cephadm/smoke-roleless: nfs-ingress test times out - Ceph - Orchestrator

https://trello.com/c/ii0rganP/1616-wip-yuri4-testing-2022-08-24-0707-pacific

https://pulpito.ceph.com/?branch=wip-yuri4-testing-2022-08-24-0707-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/57269
2. https://tracker.ceph.com/issues/56149
3. https://tracker.ceph.com/issues/57207
4. https://tracker.ceph.com/issues/53939
5. https://tracker.ceph.com/issues/54071

Details:
1. rook: unable to read URL "https://docs.projectcalico.org/manifests/tigera-operator.yaml" - Ceph - Orchestrator
2. thrash-erasure-code: AssertionError: wait_for_recovery: failed before timeout expired - Ceph - RADOS
3. AssertionError: Expected to find element: `cd-modal .badge:not(script,style):cy-contains('/^foo$/'), cd-modal .badge[type='submit'][value~='/^foo$/']`, but never found it. - Ceph - Mgr - Dashboard
4. ceph-nfs-upgrade, pacific: Upgrade Paused due to UPGRADE_REDEPLOY_DAEMON: Upgrading daemon osd.0 on host smithi103 failed - Ceph - Orchestrator
5. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator

https://trello.com/c/spzqsh4u/1619-tchaikov-wip-pacific-update-fio-5

https://pulpito.ceph.com/?branch=tchaikov-wip-pacific-update-fio-5

Failures, unrelated:
1. https://tracker.ceph.com/issues/57269
2. https://tracker.ceph.com/issues/57207
3. https://tracker.ceph.com/issues/53939
4. https://tracker.ceph.com/issues/45721
5. https://tracker.ceph.com/issues/52124 -- pending Pacific backport
6. https://tracker.ceph.com/issues/54071
7. https://tracker.ceph.com/issues/55443
8. https://tracker.ceph.com/issues/53827

Details:
1. rook: unable to read URL "https://docs.projectcalico.org/manifests/tigera-operator.yaml" - Ceph - Orchestrator
2. AssertionError: Expected to find element: `cd-modal .badge:not(script,style):cy-contains('/^foo$/'), cd-modal .badge[type='submit'][value~='/^foo$/']`, but never found it. - Ceph - Mgr - Dashboard
3. ceph-nfs-upgrade, pacific: Upgrade Paused due to UPGRADE_REDEPLOY_DAEMON: Upgrading daemon osd.0 on host smithi103 failed - Ceph - Orchestrator
4. CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_rados.TestWatchNotify.test - Ceph - RADOS
5. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
6. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
7. "SELinux denials found.." in rados run - Infrastructure
8. cephadm exited with error code when creating osd: Input/Output error. Faulty NVME?

https://trello.com/c/j3fwLnqf/1612-wip-yuri6-testing-2022-08-19-0940-pacific

https://pulpito.ceph.com/?branch=wip-yuri6-testing-2022-08-19-0940-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/57207
2. https://tracker.ceph.com/issues/54071
3. https://tracker.ceph.com/issues/51904 -- was just merged to Pacific a few hours ago (not included in this run)
4. https://tracker.ceph.com/issues/52124 -- pending Pacific backport
5. https://tracker.ceph.com/issues/57268 -- created a new Tracker; unrelated to the PRs in this run
6. https://tracker.ceph.com/issues/53939
7. https://tracker.ceph.com/issues/55443
8. https://tracker.ceph.com/issues/54992
9. https://tracker.ceph.com/issues/56573
10. https://tracker.ceph.com/issues/53827
11. https://tracker.ceph.com/issues/57269 -- new Tracker opened; unrelated to the PRs in this run
12. https://tracker.ceph.com/issues/57267 -- new Tracker opened; another instance of this found on a different Pacific run. Unrelated.
13. https://tracker.ceph.com/issues/57255 -- new Tracker opened; unrelated to PRs in this run

Details:
1. AssertionError: Expected to find element: `cd-modal .badge:not(script,style):cy-contains('/^foo$/'), cd-modal .badge[type='submit'][value~='/^foo$/']`, but never found it. - Ceph - Mgr - Dashboard
2. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
3. test_pool_min_size:AssertionError:wait_for_clean:failed before timeout expired due to down PGs - Ceph - RADOS
4. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
5. rook: The CustomResourceDefinition "installations.operator.tigera.io" is invalid - Ceph - Orchestrator
6. ceph-nfs-upgrade, pacific: Upgrade Paused due to UPGRADE_REDEPLOY_DAEMON: Upgrading daemon osd.0 on host smithi103 failed - Ceph - Orchestrator
7. "SELinux denials found.." in rados run - Infrastructure
8. cannot stat '/etc/containers/registries.conf': No such file or directory - Ceph - Orchestrator
9. test_cephadm.sh: KeyError: 'TYPE' - Ceph - Orchestrator
10. cephadm exited with error code when creating osd: Input/Output error. Faulty NVME? - Ceph - Infrastructure - Sepia
11. rook: unable to read URL "https://docs.projectcalico.org/manifests/tigera-operator.yaml" - Ceph - Orchestrator
12. Valgrind reports memory "Leak_IndirectlyLost" errors on ceph-mon in "KeyServerData::get_caps" - Ceph - RADOS
13. rados/cephadm/mds_upgrade_sequence, pacific : cephadm [ERR] Upgrade: Paused due to UPGRADE_NO_STANDBY_MGR: Upgrade: Need standby mgr daemon - Ceph - Orchestrator

https://trello.com/c/FfqukWFg/1604-wip-yuri3-testing-2022-08-11-0809-pacific

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2022-08-11-0809-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/55443
2. https://tracker.ceph.com/issues/53939
3. https://tracker.ceph.com/issues/53827
4. https://tracker.ceph.com/issues/52321
5. https://tracker.ceph.com/issues/57207 -- created a new Tracker; unrelated to PRs in this run
6. https://tracker.ceph.com/issues/56573
7. https://tracker.ceph.com/issues/53501
8. https://tracker.ceph.com/issues/54071
9. https://tracker.ceph.com/issues/54603
10. https://tracker.ceph.com/issues/49727 -- pending Pacific backport

Details:
1. "SELinux denials found.." in rados run - Ceph - Infrastructure
2. ceph-nfs-upgrade, pacific: Upgrade Paused due to UPGRADE_REDEPLOY_DAEMON: Upgrading daemon osd.0 on host smithi103 failed - Ceph - Orchestrator
3. cephadm exited with error code when creating osd: Input/Output error. Faulty NVME? - Ceph - Orchestrator
4. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Rook
5. AssertionError: Expected to find element: `cd-modal .badge:not(script,style):cy-contains('/^foo$/'), cd-modal .badge[type='submit'][value~='/^foo$/']`, but never found it. - Ceph - Mgr - Dashboard
6. test_cephadm.sh: KeyError: 'TYPE' - Ceph - Orchestrator
7. Exception when running 'rook' task. - Ceph - Orchestrator
8. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
9. Valgrind reports memory "Leak_IndirectlyLost" errors on ceph-mon. - Ceph - RADOS
10. lazy_omap_stats_test: "ceph osd deep-scrub all" hangs - Ceph - RADOS

https://trello.com/c/MvDrHsu8/1599-wip-yuri6-testing-2022-08-04-0617-pacific

https://pulpito.ceph.com/?branch=wip-yuri6-testing-2022-08-04-0617-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/45721
2. https://tracker.ceph.com/issues/56573
3. https://tracker.ceph.com/issues/53501
4. https://tracker.ceph.com/issues/53939
5. https://tracker.ceph.com/issues/52321
6. https://tracker.ceph.com/issues/54071
7. https://tracker.ceph.com/issues/56097

Details:
1. CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_rados.TestWatchNotify.test - Ceph - RADOS
2. test_cephadm.sh: KeyError: 'TYPE' - Ceph - Orchestrator
3. Exception when running 'rook' task. - Ceph - Orchestrator
4. ceph-nfs-upgrade, pacific: Upgrade Paused due to UPGRADE_REDEPLOY_DAEMON: Upgrading daemon osd.0 on host smithi103 failed - Ceph - Orchestrator
5. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Rook
6. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
7. Timeout on `sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.1 flush_pg_stats` - Ceph - RADOS (see the sketch below)
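
For context on item 7: a minimal sketch of the step that timed out, assuming only a reachable cluster and the stock ceph CLI. `ceph tell osd.N flush_pg_stats` is the real command from the quoted failure; the wrapper function is illustrative, not the teuthology task code.

    # Sketch: flush an OSD's PG stats with a hard timeout, mirroring the
    # `timeout 120 ceph --cluster ceph tell osd.1 flush_pg_stats` step.
    import subprocess

    def flush_pg_stats(osd_id: int, timeout_sec: int = 120) -> None:
        # Asks the OSD to push its PG stats to the mgr; raises
        # subprocess.TimeoutExpired if it hangs, like `timeout 120` would fail.
        subprocess.run(
            ["ceph", "--cluster", "ceph", "tell", f"osd.{osd_id}", "flush_pg_stats"],
            check=True,
            timeout=timeout_sec,
        )

    flush_pg_stats(1)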

https://trello.com/c/62n7lFew/1583-wip-yuri2-testing-2022-07-15-0755-pacific

https://pulpito.ceph.com/?branch=wip-yuri2-testing-2022-07-15-0755-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/52124
2. https://tracker.ceph.com/issues/53294
3. https://tracker.ceph.com/issues/53501
4. https://tracker.ceph.com/issues/55347
5. https://tracker.ceph.com/issues/53827
6. https://tracker.ceph.com/issues/49287
7. https://tracker.ceph.com/issues/53939
8. https://tracker.ceph.com/issues/43584
9. https://tracker.ceph.com/issues/52321
10. https://tracker.ceph.com/issues/56652 -- new Tracker; unrelated to PRs in this run
11. https://tracker.ceph.com/issues/49754
12. https://tracker.ceph.com/issues/55809
13. https://tracker.ceph.com/issues/54992
14. https://tracker.ceph.com/issues/54071
15. https://tracker.ceph.com/issues/56573

Details:
1. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
2. rados/test.sh hangs while running LibRadosTwoPoolsPP.TierFlushDuringFlush - Ceph - RADOS
3. Exception when running 'rook' task. - Ceph - Orchestrator
4. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
5. cephadm exited with error code when creating osd: Input/Output error. Faulty NVME? - Ceph - Orchestrator
6. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
7. ceph-nfs-upgrade, pacific: Upgrade Paused due to UPGRADE_REDEPLOY_DAEMON: Upgrading daemon osd.0 on host smithi103 failed - Ceph - Orchestrator
8. MON_DOWN during mon_join process - Ceph - RADOS -- pending pacific backport
9. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Rook
10. pacific: cephadm/test_repos.sh: urllib.error.HTTPError: HTTP Error 504: Gateway Timeout - Ceph - Orchestrator
11. osd/OSD.cc: ceph_abort_msg("abort() called") during OSD::shutdown() - Ceph - RADOS
12. "Leak_IndirectlyLost" valgrind report on mon.c - Ceph - RADOS
13. cannot stat '/etc/containers/registries.conf': No such file or directory - Ceph - Orchestrator
14. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
15. test_cephadm.sh: KeyError: 'TYPE' - Ceph - Orchestrator

https://trello.com/c/POOM8It4/1574-wip-yuri4-testing-2022-07-05-0719-pacific

https://pulpito.ceph.com/?branch=wip-yuri4-testing-2022-07-05-0719-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/53939
2. https://tracker.ceph.com/issues/54071
3. https://tracker.ceph.com/issues/53501
4. https://tracker.ceph.com/issues/53294
5. https://tracker.ceph.com/issues/54992
6. https://tracker.ceph.com/issues/52321

Details:
1. ceph-nfs-upgrade, pacific: Upgrade Paused due to UPGRADE_REDEPLOY_DAEMON: Upgrading daemon osd.0 on host smithi103 failed - Ceph - Orchestrator
2. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
3. Exception when running 'rook' task. - Ceph - Orchestrator
4. rados/test.sh hangs while running LibRadosTwoPoolsPP.TierFlushDuringFlush - Ceph - RADOS
5. cannot stat '/etc/containers/registries.conf': No such file or directory - Ceph - Orchestrator
6. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Rook

https://trello.com/c/KcSaINZr/1564-wip-yuri3-testing-2022-06-22-1121-pacific

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2022-06-22-1121-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/48029
2. https://tracker.ceph.com/issues/53855
3. https://tracker.ceph.com/issues/54992
4. https://tracker.ceph.com/issues/53939
5. https://tracker.ceph.com/issues/52124
6. https://tracker.ceph.com/issues/54071
7. https://tracker.ceph.com/issues/51835 -- pending Pacific backport
8. https://tracker.ceph.com/issues/53501
9. https://tracker.ceph.com/issues/55741 -- pending Pacific backport
10. https://tracker.ceph.com/issues/51904

Details:
1. Exiting scrub checking -- not all pgs scrubbed. - Ceph - RADOS
2. rados/test.sh hangs while running LibRadosTwoPoolsPP.ManifestFlushDupCount - Ceph - RADOS
3. cannot stat '/etc/containers/registries.conf': No such file or directory - Ceph - RADOS
4. ceph-nfs-upgrade, pacific: Upgrade Paused due to UPGRADE_REDEPLOY_DAEMON: Upgrading daemon osd.0 on host smithi103 failed - Ceph - Orchestrator
5. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
6. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
7. mgr/DaemonServer.cc: FAILED ceph_assert(pending_service_map.epoch > service_map.epoch) - Ceph - Mgr
8. Exception when running 'rook' task. - Ceph - Orchestrator
9. cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal .custom-control-label` when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard
10. AssertionError: wait_for_clean: failed before timeout expired due to down PGs - Ceph - RADOS

https://trello.com/c/JYh6U9zQ/1562-wip-yuri4-testing-2022-06-22-1415-pacific-old-wip-yuri4-testing-2022-06-21-0704-pacific

https://pulpito.ceph.com/yuriw-2022-06-21_16:28:27-rados-wip-yuri4-testing-2022-06-21-0704-pacific-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2022-06-22_01:17:46-rados-wip-yuri4-testing-2022-06-21-0704-pacific-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2022-06-23_21:29:45-rados-wip-yuri4-testing-2022-06-22-1415-pacific-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2022-06-24_15:13:42-rados-wip-yuri4-testing-2022-06-22-1415-pacific-distro-default-smithi/

Failures, unrelated:
1. https://tracker.ceph.com/issues/53789
2. https://tracker.ceph.com/issues/55741
3. https://tracker.ceph.com/issues/53939
4. https://tracker.ceph.com/issues/53501
5. https://tracker.ceph.com/issues/54071
6. https://tracker.ceph.com/issues/55322
7. https://tracker.ceph.com/issues/52321
8. https://tracker.ceph.com/issues/45721
9. https://tracker.ceph.com/issues/54992
10. https://tracker.ceph.com/issues/45702
11. https://tracker.ceph.com/issues/56389 --> opened a new tracker for this; looks like an unrelated infrastructure bug
12. https://tracker.ceph.com/issues/49287
13. https://tracker.ceph.com/issues/56149
14. https://tracker.ceph.com/issues/56391
15. https://tracker.ceph.com/issues/56393 --> opened a new tracker for this; seems unrelated to PRs in this run

Details:
1. CommandFailedError (rados/test_python.sh): "RADOS object not found" causes test_rados.TestWatchNotify.test_aio_notify to fail - Ceph - RADOS
2. cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal .custom-control-label` when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard
3. ceph-nfs-upgrade, pacific: Upgrade Paused due to UPGRADE_REDEPLOY_DAEMON: Upgrading daemon osd.0 on host smithi103 failed - Ceph - Orchestrator
4. Exception when running 'rook' task. - Ceph - Orchestrator
5. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
6. test-restful.sh: mon metadata unable to be retrieved - Ceph - Mgr
7. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
8. CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_rados.TestWatchNotify.test - Ceph - RADOS
9. cannot stat '/etc/containers/registries.conf': No such file or directory - Ceph - RADOS
10. PGLog::read_log_and_missing: ceph_assert(miter == missing.get_items().end() || (miter->second.need == i->version && miter->second.have == eversion_t())) - Ceph - RADOS
11. Job for rsyslog.service failed because the service did not take the steps required by its unit configuration - Infrastructure
12. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
13. thrash-erasure-code: AssertionError: wait_for_recovery: failed before timeout expired - Ceph - RADOS
14. Teuthology jobs scheduled with rhel result in "the output has been hidden due to the fact that 'no_log: true' was specified for this result" - Infrastructure
15. thrash-erasure-code-big: failed to complete snap trimming before timeout - Ceph - RADOS

https://trello.com/c/DHiutx2W/1545-wip-yuri-testing-2022-06-10-0812-pacific-old-wip-yuri-testing-2022-06-08-1015-pacific-old-wip-yuri-testing-2022-06-06-1014-pacif

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri-testing-2022-06-06-1014-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/54992
2. https://tracker.ceph.com/issues/53939
3. https://tracker.ceph.com/issues/54071
4. https://tracker.ceph.com/issues/53501
5. https://tracker.ceph.com/issues/52321
6. https://tracker.ceph.com/issues/53855
7. https://tracker.ceph.com/issues/55741
8. https://tracker.ceph.com/issues/51234 -- pending Pacific backport

Details:
1. cannot stat '/etc/containers/registries.conf': No such file or directory - Ceph - Orchestrator
2. ceph-nfs-upgrade, pacific: Upgrade Paused due to UPGRADE_REDEPLOY_DAEMON: Upgrading daemon osd.0 on host smithi103 failed - Ceph - Orchestrator
3. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
4. Exception when running 'rook' task. - Ceph - Orchestrator
5. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
6. rados/test.sh hangs while running LibRadosTwoPoolsPP.ManifestFlushDupCount - Ceph - RADOS
7. cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal .custom-control-label` when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard
8. LibRadosService.StatusFormat failed, Expected: (0) != (retry), actual: 0 vs 0 - Ceph - RADOS

https://trello.com/c/HLlt38JJ/1548-wip-yuri6-testing-2022-06-07-0955-pacific

http://pulpito.front.sepia.ceph.com/yuriw-2022-06-07_19:48:58-rados-wip-yuri6-testing-2022-06-07-0955-pacific-distro-default-smithi/

Failures, unrelated:
1. https://tracker.ceph.com/issues/53939
2. https://tracker.ceph.com/issues/53827
3. https://tracker.ceph.com/issues/54071
4. https://tracker.ceph.com/issues/54992
5. https://tracker.ceph.com/issues/49525
6. https://tracker.ceph.com/issues/55741
7. https://tracker.ceph.com/issues/52321
8. https://tracker.ceph.com/issues/50222
9. https://tracker.ceph.com/issues/53501
10. https://tracker.ceph.com/issues/56028 --> opened a new Tracker for this; it is unrelated to the PRs that were tested in this run.

Details:
1. ceph-nfs-upgrade, pacific: Upgrade Paused due to UPGRADE_REDEPLOY_DAEMON: Upgrading daemon osd.0 on host smithi103 failed - Ceph - Orchestrator
2. cephadm exited with error code when creating osd: Input/Output error. Faulty NVME? - Ceph - Orchestrator
3. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
4. cannot stat '/etc/containers/registries.conf': No such file or directory - Ceph - Orchestrator
5. found snap mapper error on pg 3.2s1 oid 3:4abe9991:::smithi10121515-14:e4 snaps missing in mapper, should be: dc was r -2...repaired - Ceph - RADOS (pending Pacific backport)
6. cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal .custom-control-label` when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard
7. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
8. osd: 5.2s0 deep-scrub : stat mismatch - Ceph - RADOS
9. Exception when running 'rook' task. - Ceph - Orchestrator
10. thrash_cache_writeback_proxy_none: FAILED ceph_assert(version == old_value.version) in src/test/osd/RadosModel.h - Ceph - RADOS

https://trello.com/c/lCocnUTc/1538-wip-yuri2-testing-2022-06-03-1350-pacific-old-wip-yuri2-testing-2022-05-31-1300-pacific

http://pulpito.front.sepia.ceph.com/yuriw-2022-05-31_21:35:41-rados-wip-yuri2-testing-2022-05-31-1300-pacific-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-06-07_14:00:55-rados-wip-yuri2-testing-2022-06-03-1350-pacific-distro-default-smithi/

Failures, unrelated:
1. https://tracker.ceph.com/issues/53501
2. https://tracker.ceph.com/issues/55322
3. https://tracker.ceph.com/issues/55741
4. https://tracker.ceph.com/issues/53939
5. https://tracker.ceph.com/issues/52321
6. https://tracker.ceph.com/issues/54071
7. https://tracker.ceph.com/issues/51835
8. https://tracker.ceph.com/issues/49777
9. https://tracker.ceph.com/issues/54411
10. https://tracker.ceph.com/issues/54992

Details:
1. Exception when running 'rook' task. - Ceph - Orchestrator
2. test-restful.sh: mon metadata unable to be retrieved - Ceph - Mgr
3. cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal .custom-control-label` when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard
4. ceph-nfs-upgrade, pacific: Upgrade Paused due to UPGRADE_REDEPLOY_DAEMON: Upgrading daemon osd.0 on host smithi103 failed - Ceph - Orchestrator
5. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
6. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
7. mgr/DaemonServer.cc: FAILED ceph_assert(pending_service_map.epoch > service_map.epoch) - Ceph - Mgr
8. test_pool_min_size: 'check for active or peered' reached maximum tries (5) after waiting for 25 seconds - Ceph - RADOS
9. mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh - Ceph - CephFS
10. cannot stat '/etc/containers/registries.conf': No such file or directory - Ceph - Orchestrator

https://trello.com/c/3OJLTTKF/1542-wip-yuri4-testing-2022-06-01-1350-pacific

http://pulpito.front.sepia.ceph.com/yuriw-2022-06-02_00:50:42-rados-wip-yuri4-testing-2022-06-01-1350-pacific-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-06-02_14:44:32-rados-wip-yuri4-testing-2022-06-01-1350-pacific-distro-default-smithi/

Failures, unrelated:
1. https://tracker.ceph.com/issues/52321
2. https://tracker.ceph.com/issues/54992
3. https://tracker.ceph.com/issues/53939
4. https://tracker.ceph.com/issues/54071
5. https://tracker.ceph.com/issues/53501
6. https://tracker.ceph.com/issues/45318
7. https://tracker.ceph.com/issues/49888
8. https://tracker.ceph.com/issues/48965

Details:
1. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
2. cannot stat '/etc/containers/registries.conf': No such file or directory - Ceph - Orchestrator
3. ceph-nfs-upgrade, pacific: Upgrade Paused due to UPGRADE_REDEPLOY_DAEMON: Upgrading daemon osd.0 on host smithi103 failed - Ceph - Orchestrator
4. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
5. Exception when running 'rook' task. - Ceph - Orchestrator
6. octopus: Health check failed: 2/6 mons down, quorum b,a,c,e (MON_DOWN)" in cluster log running tasks/mon_clock_no_skews.yaml - Ceph - RADOS
7. rados/singleton: radosbench.py: teuthology.exceptions.MaxWhileTries: reached maximum tries (3650) after waiting for 21900 seconds - Ceph - RADOS (see the retry-loop sketch after this list)
8. qa/standalone/osd/osd-force-create-pg.sh: TEST_reuse_id: return 1 - Ceph - RADOS
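
On item 7 above: the "reached maximum tries (3650) after waiting for 21900 seconds" wording comes from a bounded retry loop (teuthology raises MaxWhileTries from its own helper). Below is a plain re-creation of that shape, assuming a 6-second poll interval (3650 tries x 6 s = 21900 s, matching the log); it is not the library code.

    # Re-creation of the bounded-wait pattern behind MaxWhileTries;
    # illustrative only, not teuthology source.
    import time

    def wait_until(check, tries=3650, sleep_sec=6.0):
        for attempt in range(1, tries + 1):
            if check():
                return attempt  # number of attempts it took to succeed
            time.sleep(sleep_sec)
        raise RuntimeError(
            f"reached maximum tries ({tries}) "
            f"after waiting for {int(tries * sleep_sec)} seconds"
        )

    # Usage (with a hypothetical check function):
    # wait_until(lambda: radosbench_finished(), tries=3650)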

Tip of Pacific: http://pulpito.front.sepia.ceph.com/?sha1=4fa079ba14503defa8dc257d7c2d506ebefcfe6d&suite=rados

Failures:
1. https://tracker.ceph.com/issues/53501
2. https://tracker.ceph.com/issues/55444
3. https://tracker.ceph.com/issues/54071
4. https://tracker.ceph.com/issues/55446
5. https://tracker.ceph.com/issues/54360
6. https://tracker.ceph.com/issues/55443
7. https://tracker.ceph.com/issues/51076
8. https://tracker.ceph.com/issues/45721
9. https://tracker.ceph.com/issues/53827
10. https://tracker.ceph.com/issues/54411
11. https://tracker.ceph.com/issues/53939

Details:
1. Exception when running 'rook' task. - Ceph - Orchestrator
2. test_cls_rbd.sh: multiple TestClsRbd failures during upgrade test - Ceph - RBD
3. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
4. mgr-nfs-upgrade and mds_upgrade_sequence tests fail on 'ceph versions | jq -e' command - Ceph (see the sketch after this list)
5. Dead job at "Finished running handlers" in rados/cephadm/osds/.../rm-zap-wait - Ceph - Orchestrator
6. "SELinux denials found.." in rados run - Infrastructure
7. "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend. - Ceph - RADOS
8. CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_rados.TestWatchNotify.test - Ceph - RADOS
9. cephadm exited with error code when creating osd: Input/Output error. Faulty NVME? - Infrastructure - Sepia
10. mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh - Ceph - CephFS
11. ceph-nfs-upgrade, pacific: Upgrade Paused due to UPGRADE_REDEPLOY_DAEMON: Upgrading daemon osd.0 on host smithi103 failed - Ceph - Orchestrator
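
On item 4: a rough sketch of what a `ceph versions | jq -e` style assertion verifies, assuming the upgrade suites check that a single version remains cluster-wide after the upgrade. The exact jq filter is not shown in these logs; this Python equivalent is illustrative only.

    # Sketch: assert, the way `ceph versions | jq -e <filter>` would, that
    # the cluster reports exactly one ceph version after an upgrade.
    import json
    import subprocess

    out = subprocess.check_output(["ceph", "versions"])
    overall = json.loads(out)["overall"]  # e.g. {"ceph version 16.2.x (...) pacific (stable)": N}

    # jq -e exits nonzero when its filter yields false/null; mimic that here.
    if len(overall) != 1:
        raise SystemExit(f"mixed versions after upgrade: {overall}")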

https://trello.com/c/xo1acXTi/1511-wip-yuri-testing-2022-04-20-0729-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/52124
2. https://tracker.ceph.com/issues/54086
3. https://tracker.ceph.com/issues/53155

Details:
1. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
2. Permission denied when trying to unlink and open /var/log/ntpstats/... - Tools - Teuthology
3. MDSMonitor: assertion during upgrade to v16.2.5+ - Ceph - CephFS

New failures:
1. https://tracker.ceph.com/issues/53501
2. https://tracker.ceph.com/issues/55444
3. https://tracker.ceph.com/issues/54071
4. https://tracker.ceph.com/issues/53939
5. https://tracker.ceph.com/issues/54360
6. https://tracker.ceph.com/issues/55446
7. https://tracker.ceph.com/issues/54992

Details:
1. Exception when running 'rook' task. - Ceph - Orchestrator
2. test_cls_rbd.sh: multiple TestClsRbd failures during upgrade test - Ceph - RBD
3. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
4. ceph-nfs-upgrade, pacific: Upgrade Paused due to UPGRADE_REDEPLOY_DAEMON: Upgrading daemon osd.0 on host smithi103 failed - Ceph - Orchestrator
5. Dead job at "Finished running handlers" in rados/cephadm/osds/.../rm-zap-wait - Ceph - Orchestrator
6. mgr-nfs-upgrade and mds_upgrade_sequence tests fail on 'ceph versions | jq -e' command - Ceph
7. cannot stat '/etc/containers/registries.conf': No such file or directory - Ceph - Orchestrator

https://trello.com/c/pr7HxZwn/1492-wip-yuri2-testing-2022-03-31-1523-pacific-old-wip-yuri2-testing-2022-03-30-1604-pacific-old-wip-yuri2-testing-2022-03-30-1108-pa

http://pulpito.front.sepia.ceph.com/yuriw-2022-04-01_01:23:52-rados-wip-yuri2-testing-2022-03-31-1523-pacific-distro-default-smithi/

Failures, unrelated:
1. https://tracker.ceph.com/issues/53827
2. https://tracker.ceph.com/issues/53501
3. https://tracker.ceph.com/issues/55162
4. https://tracker.ceph.com/issues/54086
5. https://tracker.ceph.com/issues/52124
6. https://tracker.ceph.com/issues/53939
7. https://tracker.ceph.com/issues/55163
8. https://tracker.ceph.com/issues/51076
9. https://tracker.ceph.com/issues/54360

Details:
1. cephadm exited with error code when creating osd: Input/Output error. Faulty NVME? - Infrastructure - Sepia
2. Exception when running 'rook' task. - Ceph - Orchestrator
3. cephadm/test_cephadm.sh: ERROR: A cluster with the same fsid already exists - Ceph - Orchestrator
4. Permission denied when trying to unlink and open /var/log/ntpstats/... - Tools - Teuthology
5. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
6. ceph-nfs-upgrade, pacific: Upgrade Paused due to UPGRADE_REDEPLOY_DAEMON: Upgrading daemon osd.0 on host smithi103 failed - Ceph - Orchestrator
7. cephadm/thrash: "rotating keys expired way too early" leads to dead job - Ceph
8. "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend. - Ceph - RADOS
9. Dead job at "Finished running handlers" in rados/cephadm/osds/.../rm-zap-wait - Ceph - Orchestrator

https://trello.com/c/89uZM5Ih/1489-wip-yuri7-testing-2022-03-24-1341-pacific

http://pulpito.front.sepia.ceph.com/yuriw-2022-03-25_18:42:52-rados-wip-yuri7-testing-2022-03-24-1341-pacific-distro-default-smithi/

Failures, unrelated:
1. https://tracker.ceph.com/issues/49287
2. https://tracker.ceph.com/issues/49777
3. https://tracker.ceph.com/issues/53939
4. https://tracker.ceph.com/issues/52124
5. https://tracker.ceph.com/issues/51904
6. https://tracker.ceph.com/issues/52136

https://trello.com/c/3wh1pS3n/1463-wip-yuri3-testing-2022-02-28-0757-pacific

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri3-testing-2022-02-28-0757-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/47838
2. https://tracker.ceph.com/issues/50042
3. https://tracker.ceph.com/issues/53501
4. https://tracker.ceph.com/issues/45721
5. https://tracker.ceph.com/issues/54071
6. https://tracker.ceph.com/issues/54469
7. https://tracker.ceph.com/issues/53939
8. https://tracker.ceph.com/issues/54360

Details:
1. mon/test_mon_osdmap_prune.sh: first_pinned != trim_to - Ceph - RADOS
2. sporadic rados/test.sh failures - Ceph - RADOS
3. Exception when running 'rook' task. - Ceph - Orchestrator
4. CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_rados.TestWatchNotify.test - Ceph - RADOS
5. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
6. cephadm/smoke: "Post https://172.21.15.73:8443//api/prometheus_receiver: context deadline exceeded" leads to unresponsive manager - Ceph - Orchestrator
7. ceph-nfs-upgrade, pacific: Upgrade Paused due to UPGRADE_REDEPLOY_DAEMON: Upgrading daemon osd.0 on host smithi103 failed - Ceph - Orchestrator
8. Dead job at "Finished running handlers" in rados/cephadm/osds/.../rm-zap-wait - Ceph - Orchestrator

https://trello.com/c/p0mDgnjJ/1454-wip-yuri7-testing-2022-02-17-0852-pacific

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri7-testing-2022-02-17-0852-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/53501
2. https://tracker.ceph.com/issues/54337
3. https://tracker.ceph.com/issues/52124
4. https://tracker.ceph.com/issues/54360
5. https://tracker.ceph.com/issues/54406
6. https://tracker.ceph.com/issues/53827
7. https://tracker.ceph.com/issues/53939
8. https://tracker.ceph.com/issues/54411
9. https://tracker.ceph.com/issues/54071
10. https://tracker.ceph.com/issues/50042

Details:
1. Exception when running 'rook' task. - Ceph - Orchestrator
2. SELinux denials seen on fs/rados teuthology runs
3. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
4. Dead job at "Finished running handlers" in rados/cephadm/osds/.../rm-zap-wait - Ceph
5. cephadm/mgr-nfs-upgrade: cluster [WRN] overall HEALTH_WARN no active mgr - Ceph - CephFS
6. cephadm exited with error code when creating osd: Input/Output error. Faulty NVME? - Infrastructure - Sepia
7. ceph-nfs-upgrade, pacific: Upgrade Paused due to UPGRADE_REDEPLOY_DAEMON: Upgrading daemon osd.0 on host smithi103 failed - Ceph - Orchestrator
8. mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem is degraded; insufficient standby MDS daemons available; 33 daemons have recently crashed" during suites/fsstress.sh - Ceph - CephFS
9. rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
10. sporadic rados/test.sh failures - Ceph - RADOS

yuriw-2021-11-12_01:49:32-rados-wip-yuri8-testing-2021-11-02-1009-pacific-distro-basic-smithi

http://pulpito.front.sepia.ceph.com/yuriw-2021-11-12_01:49:32-rados-wip-yuri8-testing-2021-11-02-1009-pacific-distro-basic-smithi/

[6498467, 6498469, 6498472] -- Command failed on smithi151 with status 1: 'sudo kubeadm init --node-name smithi151 --token abcdef.o537jowm0cpg231u --pod-network-cidr 10.252.176.0/21' -- tracked by https://tracker.ceph.com/issues/52116
[6498468] -- Command failed on smithi094 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s' - could be related to https://tracker.ceph.com/issues/49465; seen in a recent pacific run: http://pulpito.front.sepia.ceph.com/teuthology-2021-10-31_03:31:02-rados-pacific-distro-default-smithi/6469519/

Details:

Bug_#52116: kubeadm task fails with error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster - Ceph - Orchestrator
Bug_#49465: qa: Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_trim_caps' - Ceph - CephFS

wip-yuri8-testing-2021-11-02-1009-pacific

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri8-testing-2021-11-02-1009-pacific

1. yuriw-2021-11-08_15:10:38-rados-wip-yuri8-testing-2021-11-02-1009-pacific-distro-basic-smithi

[6491056, 6491072, 6479074, 6479198, 6479318] -- Command failed on smithi123 with status 1: 'sudo kubeadm init --node-name smithi123 --token abcdef.ka3hot1x8ao2qams --pod-network-cidr 10.251.208.0/21' -- tracked by https://tracker.ceph.com/issues/52116; pending backport
[6491057, 6479097] -- Command failed on smithi180 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s' - seen in a recent pacific run: http://pulpito.front.sepia.ceph.com/teuthology-2021-10-31_03:31:02-rados-pacific-distro-default-smithi/6469519/
[6491063, 6491065, 6479228, 6479234, 6479237, 6479241] Could not reconnect to  -- potentially connected to https://tracker.ceph.com/issues/21317; also seen in a previous pacific run: http://pulpito.front.sepia.ceph.com/teuthology-2021-10-24_03:31:02-rados-pacific-distro-default-smithi/6458348/
[6491064, 6479208, 6479305] -- Command failed on smithi198 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:99171c30d5ab04365da028b81526082bf02ab21b shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 41769c44-40de-11ec-8c2c-001a4aab830c -- ceph mon dump -f json' - tracked in https://tracker.ceph.com/issues/50280; seen in a recent pacific run: http://pulpito.front.sepia.ceph.com/teuthology-2021-10-31_03:31:02-rados-pacific-distro-default-smithi/6469346/

2. yuriw-2021-11-02_19:49:55-rados-wip-yuri8-testing-2021-11-02-1009-pacific-distro-basic-smithi

[6479025] -- Command failed (workunit test rados/test.sh) on smithi158 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=99171c30d5ab04365da028b81526082bf02ab21b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' -- seen in a previous pacific run: http://pulpito.front.sepia.ceph.com/teuthology-2021-09-05_03:31:01-rados-pacific-distro-basic-smithi/6375203/
[6479111] -- Found coredumps on  -- tracked by https://tracker.ceph.com/issues/53206
[6479124] -- "2021-11-04T15:42:16.771883+0000 mon.a (mon.0) 170 : cluster [WRN] Health check failed: Degraded data redundancy: 2/52 objects degraded (3.846%), 1 pg degraded (PG_DEGRADED)" in cluster log -- passed in a previous pacific run: http://pulpito.front.sepia.ceph.com/teuthology-2021-07-25_03:31:03-rados-pacific-distro-basic-smithi/6291213/; failed as well: http://pulpito.front.sepia.ceph.com/teuthology-2021-09-26_03:31:01-rados-pacific-distro-basic-smithi/6407869/
[6479127] -- Command failed (workunit test rados/test_python.sh) on smithi103 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=99171c30d5ab04365da028b81526082bf02ab21b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh' -- seen in a previous pacific run: http://pulpito.front.sepia.ceph.com/yuriw-2021-09-10_14:43:59-rados-pacific-distro-basic-smithi/6383444/
[6479135] -- Command failed on smithi068 with status 125: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:99171c30d5ab04365da028b81526082bf02ab21b shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c48fddfe-3d85-11ec-8c28-001a4aab830c -- ceph osd crush tunables default' -- RELATED to these issues; potentially tracked by https://tracker.ceph.com/issues/49962; similar failure seen in a previous pacific run: https://pulpito.ceph.com/yuriw-2021-04-07_17:37:43-fs-wip-yuri-testing-2021-04-07-0905-pacific-distro-basic-smithi/6027393/
[6479298] -- "2021-11-04T18:47:50.609640+0000 mon.b (mon.1) 255 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

3. yuriw-2021-11-02_19:46:34-fs-wip-yuri8-testing-2021-11-02-1009-pacific-distro-basic-smithi

[6478836, 6478863] -- "2021-11-03T02:57:36.398519+0000 mon.a (mon.0) 847 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
[6478980] -- "2021-11-03T05:41:04.442639+0000 mon.a (mon.0) 991 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
[6478908] -- "2021-11-03T04:37:58.992832+0000 mon.a (mon.0) 3562 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY)" in cluster log
[6478884] -- Test failure: test_dirfrag_limit (tasks.cephfs.test_strays.TestStrays) -- tracked in https://tracker.ceph.com/issues/52606; pending backport
[6478887] -- "2021-11-03T04:01:23.909671+0000 mds.i (mds.0) 24 : cluster [WRN] Scrub error on inode 0x10000001259 (/client.0/tmp/blogbench-1.0/src/blogtest_in/blog-24) see mds.i log and `damage ls` output for details" in cluster log -- tracked by https://tracker.ceph.com/issues/48805
[6478920] -- Command failed on smithi133 with status 1: "sudo nsenter --net=/var/run/netns/ceph-ns-mnt.0 sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster$name.$pid.asok' --id vol_data_isolated --client_mountpoint=/volumes/_nogroup/vol_isolated mnt.0" -- tracked by https://tracker.ceph.com/issues/51705
[6478962] -- Test failure: test_reset (tasks.cephfs.test_journal_repair.TestJournalRepair) -- tracked by https://tracker.ceph.com/issues/9466

4. yuriw-2021-11-02_19:45:24-rgw-wip-yuri8-testing-2021-11-02-1009-pacific-distro-basic-smithi

[6478780] -- Command failed on smithi037 with status 4: 'cd /home/ubuntu/cephtest && wget http://www-us.apache.org/dist/maven/maven-3/3.6.3/binaries/apache-maven-3.6.3-bin.tar.gz && tar xvf apache-maven-3.6.3-bin.tar.gz && git clone https://github.com/apache/hadoop && cd hadoop && git checkout -b hadoop-2.9.2 rel/release-2.9.2' - seen in a recent pacific run: http://pulpito.front.sepia.ceph.com/teuthology-2021-10-28_05:07:02-rgw-pacific-distro-default-smithi/6464576/
[6478782, 6478795, 6478806, 6478818] -- rgw multisite, rgw multisite pubsub, test failures -- tracked by https://tracker.ceph.com/issues/49955
[6478785, 6478809] -- Command failed on smithi129 with status 1: 'cd /home/ubuntu/cephtest/tempest && /home/ubuntu/cephtest/tox-venv/bin/tox -e venv --notest' - tracked by https://tracker.ceph.com/issues/53095
[6478792] -- Command failed on smithi097 with status 2: 'cd /home/ubuntu/cephtest/ragweed && ./bootstrap' tracked by: https://tracker.ceph.com/issues/48735
[6478804] -- Command failed on smithi059 with status 4: 'cd /home/ubuntu/cephtest && wget http://www-us.apache.org/dist/maven/maven-3/3.6.3/binaries/apache-maven-3.6.3-bin.tar.gz && tar -xvf apache-maven-3.6.3-bin.tar.gz && git clone https://github.com/apache/hadoop && cd hadoop && git checkout -b hadoop-3.2.0 rel/release-3.2.0' - seen in a recent pacific run: http://pulpito.front.sepia.ceph.com/teuthology-2021-10-28_05:07:02-rgw-pacific-distro-default-smithi/6464634/

Details:

Bug_#49962: 'sudo ceph --cluster ceph osd crush tunables default' fails due to valgrind: Unknown option: --exit-on-first-error=yes - Ceph - RADOS
Bug_#52116: kubeadm task fails with error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster - Ceph - Orchestrator
Bug_#21317: Update VPS with latest distro: RuntimeError: Could not reconnect to - Infrastructure - Sepia
Bug_#50280: cephadm: RuntimeError: uid/gid not found - Ceph
Bug_#53206: Found coredumps on | IndexError: list index out of range - Tools - Teuthology
Bug_#52606: qa: test_dirfrag_limit - Ceph - CephFS
Bug_#48805: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details" - Ceph - CephFS
Bug_#51705: pacific: qa: tasks.cephfs.fuse_mount:mount command failed - Ceph - CephFS
Bug_#9466: kclient: Extend CephFSTestCase tests to cover kclient - Ceph - CephFS
