Pere Díaz Bou, 01/24/2024 01:42 PM
MAIN
Summaries are ordered latest --> oldest.
https://trello.com/c/YriUU51p/1929-wip-yuri2-testing-2024-01-18-1314-pacific-old-wip-yuri2-testing-2024-01-12-1128-pacific
Failures, unrelated:
1. https://tracker.ceph.com/issues/61159
2. https://tracker.ceph.com/issues/64126
Details:
1. ConnectionError: Failed to reconnect to smithi102
2. assert not need_to_install(ctx, client, need_install[client])
3. KeyError: 'package_manager_version' -> teuthology
4. log.info('Package version is %s', builder.version)
5. RuntimeError: Failed command: systemctl start ceph-056f088e-b9b4-11ee-95b1-87774f69a715@node-exporter.smithi152
6. teuthology.exceptions.MaxWhileTries: reached maximum tries (181) after waiting for 180 seconds -> cephadm bootstrap
7. stderr: Trying to pull docker.io/prom/alertmanager:v0.20.0...
8. Error initializing source docker://prom/alertmanager:v0.20.0: (Mirrors also failed: [docker-mirror.front.sepia.ceph.com:5000/prom/alertmanager:v0.20.0: pinging container registry docker-mirror.front.sepia.ceph.com:5000: Get "https://docker-mirror.front.sepia.ceph.com:5000/v2/": dial tcp 172.21.0.79:5000: connect: no route to host]): docker.io/prom/alertmanager:v0.20.0: reading manifest v0.20.0 in docker.io/prom/alertmanager: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
9. Failed while placing alertmanager.smithi105 on smithi105: cephadm exited with an error code 1
h3. https://trello.com/c/79noWWpu/1934-wip-yuri7-testing-2024-01-18-1327
Failures, unrelated:
1. https://tracker.ceph.com/issues/56788
2. https://tracker.ceph.com/issues/59380
3. https://tracker.ceph.com/issues/61774
4. https://tracker.ceph.com/issues/59196
5. https://tracker.ceph.com/issues/17945
6. https://tracker.ceph.com/issues/64057
Details:
1. crash: void KernelDevice::_aio_thread(): abort - Ceph - Bluestore
2. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
3. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
4. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
5. ceph_test_rados_api_tier: failed to decode hitset in HitSetWrite test - Ceph - RADOS
6. task/test_cephadm_timeout - failed with timeout - Ceph - Orchestrator
https://trello.com/c/Yjrx9ygD/1911-wip-yuri8-testing-2024-01-18-0823-old-wip-yuri8-testing-2023-12-15-0911
Failures, unrelated:
1. https://tracker.ceph.com/issues/61774
2. https://tracker.ceph.com/issues/59196
3. https://tracker.ceph.com/issues/64118
4. https://tracker.ceph.com/issues/59380
5. https://tracker.ceph.com/issues/64057
Details:
1. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - RADOS
2. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
3. cephadm: RuntimeError: Failed command: apt-get update: E: The repository 'https://download.ceph.com/debian-quincy jammy Release' does not have a Release file. - Ceph - Orchestrator
4. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RADOS
5. task/test_cephadm_timeout - failed with timeout - Ceph - Orchestrator
https://trello.com/c/6oYH6qSe/1924-wip-yuri3-testing-2024-01-10-0735-old-wip-yuri3-testing-2024-01-09-1342
Failures, unrelated:
1. https://tracker.ceph.com/issues/59196
2. https://tracker.ceph.com/issues/61774
3. https://tracker.ceph.com/issues/53420
4. Not related; possibly connected to the discussion in https://tracker.ceph.com/projects/ceph/wiki/CDM_01-FEB-2023
Details:
1. ['7520620', '7520463'] - ceph_test_lazy_omap_stats segfault
2. ['7520619', '7520474', '7520407', '7520546', '7520333', '7520475'] - centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons
3. ['7520530', '7520510'] - ansible: Unable to acquire the dpkg frontend
4. 7520472 - 'No health warning caused by timeout was raised'
https://trello.com/c/D0n65np1/1923-wip-yuri-testing-2024-01-09-1331
Failures and dead jobs, unrelated:
1. https://tracker.ceph.com/issues/59196 - 7511654, 7511810
2. https://tracker.ceph.com/issues/63748 - 7511595, 7511751
3. https://tracker.ceph.com/issues/61774 - 7511598, 7511665, 7511737, 7511772, 7511808
4. https://tracker.ceph.com/issues/55838 - 7511564, 7511722
5. https://tracker.ceph.com/issues/59142 - 7511640, 7511800
6. https://tracker.ceph.com/issues/49961 - 7511845
7. https://tracker.ceph.com/issues/49287 - 7511814
8. https://tracker.ceph.com/issues/59380 - 7511562, 7511720
Details:
1. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
2. qa/workunits/post-file.sh: Couldn't create directory - Infrastructure
3. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
4. "download_cephadm" step fails - Ceph - Orchestrator
5. mgr/dashboard: fix e2e for dashboard v3 - Ceph - Mgr - Dashboard
6. scrub/osd-recovery-scrub.sh: TEST_recovery_scrub_1 failed
7. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found
8. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
https://trello.com/c/D0n65np1/1923-wip-yuri-testing-2024-01-09-1331
14 (12+2) Failures, unrelated:
1. ['7518374','7518448','7518515','7518516','7518587','7518660'] - valgrind issue that exists in pre-PR 'main'
2. ['7518504','7518661'] - https://tracker.ceph.com/issues/59196
3. ['7518565'] - https://tracker.ceph.com/issues/63967
4. ['7518427'] - not related; seems to be an mgr issue.
5. ['7518513'] - not related; possibly connected to the discussion in https://tracker.ceph.com/projects/ceph/wiki/CDM_01-FEB-2023
6. ['7518637'] - not related; might be a test issue.
7. ['7518412','7518571'] - unrelated dead jobs.
Details:
1. 'vg_replace_malloc.c in the monitor'. Exists in yuriw-2024-01-10_15:06:10-rados-wip-yuri-testing-2024-01-09-1331-distro-default-smithi.
2. ceph_test_lazy_omap_stats segfault.
3. deep_scrub. Solved by PR#55115.
4. 'TestCephadmCLI'
5. 'No health warning caused by timeout was raised'
6. 'expected MON_CLOCK_SKEW but got none'
7. Dead jobs; nothing related seen in the logs.
https://trello.com/c/haVLXC6K/1922-wip-yuri10-testing-2024-01-04-1245-old-wip-yuri10-testing-2024-01-03-1546
https://pulpito.ceph.com/?branch=wip-yuri10-testing-2024-01-04-1245
Failures, unrelated:
1. https://tracker.ceph.com/issues/59196
2. https://tracker.ceph.com/issues/61774
3. https://tracker.ceph.com/issues/59142
4. https://tracker.ceph.com/issues/63748
5. https://tracker.ceph.com/issues/55838
6. https://tracker.ceph.com/issues/59380
7. https://tracker.ceph.com/issues/63967 -- new tracker
8. https://tracker.ceph.com/issues/63784
9. https://tracker.ceph.com/issues/54369
Details:
1. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
2. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
3. mgr/dashboard: fix e2e for dashboard v3 - Ceph - Mgr - Dashboard
4. qa/workunits/post-file.sh: Couldn't create directory - Infrastructure
5. "download_cephadm" step fails - Ceph - Orchestrator
6. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RADOS
7. qa/tasks/ceph.py: "ceph tell <pgid> deep_scrub" fails - Ceph - RADOS
8. qa/standalone/mon/mkfs.sh:'mkfs/a' already exists and is not empty: monitor may already exist - Ceph - Orchestrator
9. mon/test_mon_osdmap_prune.sh: jq .osdmap_first_committed 11 -eq 20 - Ceph - RADOS
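The failing check in (9) boils down to extracting osdmap_first_committed from the monitor's JSON report and asserting it reached the expected epoch. A minimal sketch, assuming the check's shape from the quoted jq fragment (the inline sample JSON is ours, standing in for real `ceph report` output, where the run got 11 instead of 20):

```shell
# In the real standalone test the JSON comes from 'ceph report'; here a
# hypothetical sample fragment reproduces the failing comparison:
# first_committed was still 11 when the test expected pruning up to 20.
report='{"osdmap_first_committed": 11}'
first=$(echo "$report" | jq '.osdmap_first_committed')
if [ "$first" -ne 20 ]; then
    echo "osdmap not pruned: first_committed=$first, expected 20"
fi
```

When the monitor has not pruned old osdmap epochs, the extracted value stays below the target and the `-eq`/`-ne` comparison fails the test.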
https://trello.com/c/fbyEb3kx/1917-wip-yuri6-testing-2024-01-02-0832
Failures, unrelated:
1. https://tracker.ceph.com/issues/55838
2. https://tracker.ceph.com/issues/63748
3. https://tracker.ceph.com/issues/61786
4. https://tracker.ceph.com/issues/59196
5. https://tracker.ceph.com/issues/56028
6. https://tracker.ceph.com/issues/63501
7. https://tracker.ceph.com/issues/61774
8. https://tracker.ceph.com/issues/37660
9. https://tracker.ceph.com/issues/59380
10. https://tracker.ceph.com/issues/62975
Details:
1. cephadm/osds: Exception with "test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm"
2. qa/workunits/post-file.sh: Couldn't create directory
3. test_dashboard_e2e.sh: Can't run because no spec files were found; couldn't determine Mocha version
4. ceph_test_lazy_omap_stats segfault while waiting for active+clean
5. thrash_cache_writeback_proxy_none: FAILED ceph_assert(version == old_value.version) in src/test/osd/RadosModel.h
6. ceph::common::leak_some_memory() got interpreted as an actual leak
7. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons
8. smithi195:'Failing rest of playbook due to missing NVMe card'
9. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)"
10. site-packages/paramiko/channel.py: OSError: Socket is closed
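The cephadm sanity check quoted in (1) above can be sketched as a small shell helper (the helper name is ours; the test chain itself is taken from the failing command):

```shell
# check_cephadm FILE: mirror of the QA sanity check -- succeed only if the
# downloaded cephadm file exists, is larger than 1000 bytes, and can be
# marked executable. (Function name is illustrative; the chained tests are
# from the quoted command.)
check_cephadm() {
    f="$1"
    test -s "$f" \
        && test "$(stat -c%s "$f")" -gt 1000 \
        && chmod +x "$f"
}
```

A missing or truncated download (for example an HTML error page under 1 kB) makes the chain exit nonzero before chmod runs, which teuthology then reports as the quoted Exception.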
https://trello.com/c/6sy6WQmn/1915-wip-yuri10-testing-2023-12-23-0824
Failures, unrelated:
1. ['7501399', '7501410'] - https://tracker.ceph.com/issues/59196
2. ['7501404', '7501394'] - https://tracker.ceph.com/issues/59380
3. ['7501409', '7501397', '7501406', '7501401', '7501400'] - https://tracker.ceph.com/issues/61774
4. ['7501395', '7501405'] - No open issue identified yet.
5. ['7501408', '7501398'] - https://tracker.ceph.com/issues/59142
6. ['7501396', '7501407'] - https://tracker.ceph.com/issues/63748
Details:
1. ceph_test_lazy_omap_stats segfault
2. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
3. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
4. No issue yet; cephadm failed ('test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000...)
5. mgr/dashboard: fix e2e for dashboard v3 - Ceph - Mgr - Dashboard
6. qa/workunits/post-file.sh: Couldn't create directory - Infrastructure
https://trello.com/c/lTLi41v6/1905-wip-yuri-testing-2023-12-19-1112-old-wip-yuri-testing-2023-12-13-1239-old-wip-yuri-testing-2023-12-11-1524
Failures, unrelated:
1. https://tracker.ceph.com/issues/59142
2. https://tracker.ceph.com/issues/59196
3. https://tracker.ceph.com/issues/61774
4. https://tracker.ceph.com/issues/63748
5. https://tracker.ceph.com/issues/59380
6. https://tracker.ceph.com/issues/63778
Details:
1. mgr/dashboard: fix e2e for dashboard v3 - Ceph - Mgr - Dashboard
2. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
3. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
4. qa/workunits/post-file.sh: Couldn't create directory - Infrastructure
5. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
6. Upgrade: failed due to an unexpected exception - Ceph - Orchestrator
https://trello.com/c/NCZCeuVu/1894-wip-yuri10-testing-2023-12-12-1229-old-wip-yuri10-testing-2023-12-07-1728-old-wip-yuri10-testing-2023-12-05-1105-old-wip-yuri10
Failures, unrelated:
1. https://tracker.ceph.com/issues/59142
2. https://tracker.ceph.com/issues/63748
3. https://tracker.ceph.com/issues/61774
4. https://tracker.ceph.com/issues/59380
5. https://tracker.ceph.com/issues/63887 -- new tracker
6. https://tracker.ceph.com/issues/59196
7. https://tracker.ceph.com/issues/63783
Details:
1. mgr/dashboard: fix e2e for dashboard v3 - Ceph - Mgr - Dashboard
2. qa/workunits/post-file.sh: Couldn't create directory - Infrastructure
3. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
4. ados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
5. Starting alertmanager fails from missing container - Ceph - Orchestrator
6. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
7. mgr: 'ceph rgw realm bootstrap' fails with KeyError: 'realm_name' - Ceph - RGW
https://trello.com/c/k9RNURve/1897-wip-yuri3-testing-2023-12-07-0727-old-wip-yuri3-testing-wip-neorados-learning-from-experience
7493218 - https://tracker.ceph.com/issues/63783 (known issue)
7493242
7493228
——————
7493219 - https://tracker.ceph.com/issues/61774 (known issue)
7493224
7493230
7493236
7493237
7493243
———————
7493221 - https://tracker.ceph.com/issues/59142 (known issue)
———————
7493223 - https://tracker.ceph.com/issues/59196 (known issue)
———————
7493229 - https://tracker.ceph.com/issues/63748 (known issue)
7493244
———————
7493232 - https://tracker.ceph.com/issues/63785 (known issue)
https://trello.com/c/ZgZGBobg/1902-wip-yuri8-testing-2023-12-11-1101-old-wip-yuri8-testing-2023-12-06-1425
Unrelated failures:
1. https://tracker.ceph.com/issues/63748
2. https://tracker.ceph.com/issues/61774
3. https://tracker.ceph.com/issues/63783
4. https://tracker.ceph.com/issues/59380
5. https://tracker.ceph.com/issues/59142
6. https://tracker.ceph.com/issues/59196
Details:
1. ['7487680', '7487836'] - qa/workunits/post-file.sh: Couldn't create directory
2. ['7487541', '7487610', '7487750', '7487751', '7487683', '7487822'] - centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons
3. ['7487677', '7487816', '7487532'] - mgr: 'ceph rgw realm bootstrap' fails with KeyError: 'realm_name'
4. ['7487804', '7487806', '7487647'] - rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)"
5. ['7487724', '7487568'] - mgr/dashboard: fix e2e for dashboard v3
6. ['7487579', '7487739'] - cephtest bash -c ceph_test_lazy_omap_stats
https://trello.com/c/OeXUIG19/1898-wip-yuri2-testing-2023-12-06-1239-old-wip-yuri2-testing-2023-12-04-0902
https://pulpito.ceph.com/?branch=wip-yuri2-testing-2023-12-06-1239
Failures, unrelated:
1. https://tracker.ceph.com/issues/63748
2. https://tracker.ceph.com/issues/56788
3. https://tracker.ceph.com/issues/61774
4. https://tracker.ceph.com/issues/63783
5. https://tracker.ceph.com/issues/63784 -- new tracker
6. https://tracker.ceph.com/issues/59196
7. https://tracker.ceph.com/issues/63785 -- new tracker
8. https://tracker.ceph.com/issues/59380
9. https://tracker.ceph.com/issues/63788
10. https://tracker.ceph.com/issues/63778
11. https://tracker.ceph.com/issues/63789
Details:
1. qa/workunits/post-file.sh: Couldn't create directory - Infrastructure
2. crash: void KernelDevice::_aio_thread(): abort - Ceph - Bluestore
3. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
4. mgr: 'ceph rgw realm bootstrap' fails with KeyError: 'realm_name' - Ceph - RGW
5. qa/standalone/mon/mkfs.sh:'mkfs/a' already exists and is not empty: monitor may already exist - Ceph - RADOS
6. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
7. cephadm/test_adoption.sh: service not found - Ceph - Orchestrator
8. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RADOS
9. Cephadm tests fail from "nothing provides lua-devel needed by ceph-2:19.0.0-44.g2d90d175.el8.x86_64" - Ceph - RGW
10. Upgrade: failed due to an unexpected exception - Ceph - Orchestrator
11. LibRadosIoEC test failure - Ceph - RADOS
https://trello.com/c/wK3QrkV2/1901-wip-yuri-testing-2023-12-06-1240
https://pulpito.ceph.com/?branch=wip-yuri-testing-2023-12-06-1240
Failures, unrelated:
1. https://tracker.ceph.com/issues/63783
2. https://tracker.ceph.com/issues/61774
3. https://tracker.ceph.com/issues/59196
4. https://tracker.ceph.com/issues/59380
5. https://tracker.ceph.com/issues/59142
6. https://tracker.ceph.com/issues/63748
7. https://tracker.ceph.com/issues/63785 -- new tracker
8. https://tracker.ceph.com/issues/63786 -- new tracker
Details:
1. mgr: 'ceph rgw realm bootstrap' fails with KeyError: 'realm_name' - Ceph - RGW
2. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
3. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
4. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RADOS
5. mgr/dashboard: fix e2e for dashboard v3 - Ceph - Mgr - Dashboard
6. qa/workunits/post-file.sh: Couldn't create directory - Infrastructure
7. cephadm/test_adoption.sh: service not found - Ceph - Orchestrator
8. rados_cls_all: TestCls2PCQueue.MultiProducer hangs - Ceph - RGW
https://trello.com/c/tUEWtLfq/1892-wip-yuri7-testing-2023-11-17-0819
Failures being analyzed:
1. '7467376' - ?
Failures, unrelated:
1. ['7467380','7467367','7467378'] - timeout on test_cls_2pc_queue ->> https://tracker.ceph.com/issues/62449
2. ['7467370','7467374','7467387','7467375'] - Valgrind: mon (Leak_StillReachable) ->> https://tracker.ceph.com/issues/61774
3. ['7467388','7467373'] - ceph_test_lazy_omap_stats ->> https://tracker.ceph.com/issues/59196
4. ['7467385','7467372'] - failure in e2e-spec ->> https://tracker.ceph.com/issues/48406
5. ['7467371'] - unrelated; test infra issues.
6. ['7467379','7467369'] - RGW 'realm_name' ->> https://tracker.ceph.com/issues/63499
7. ['7467377','7467366'] - seems to be a disk space issue.
8. ['7467381'] - test environment issue.
https://pulpito.ceph.com/yuriw-2023-10-24_00:11:03-rados-wip-yuri2-testing-2023-10-23-0917-distro-default-smithi/
Failures, unrelated:
1. https://tracker.ceph.com/issues/59196
2. https://tracker.ceph.com/issues/61774
3. https://tracker.ceph.com/issues/49961
4. https://tracker.ceph.com/issues/62449
5. https://tracker.ceph.com/issues/48406
6. https://tracker.ceph.com/issues/63121
7. https://tracker.ceph.com/issues/47838
8. https://tracker.ceph.com/issues/62777
9. https://tracker.ceph.com/issues/54372
10. https://tracker.ceph.com/issues/63500 <--------- New tracker
Details:
1. ['7435483','7435733'] - ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
2. ['7435516','7435570','7435765','7435905'] - centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
3. ['7435520'] - scrub/osd-recovery-scrub.sh: TEST_recovery_scrub_1 failed - Ceph - RADOS
4. ['7435568','7435875'] - test/cls_2pc_queue: TestCls2PCQueue.MultiProducer and TestCls2PCQueue.AsyncConsumer failure
5. ['7435713','7436026'] - cephadm/test_dashboard_e2e.sh: error when testing orchestrator/04-osds.e2e-spec.ts - Ceph - Mgr - Dashboard
6. ['7435741'] - objectstore/KeyValueDB/KVTest.RocksDB_estimate_size tests failing - Ceph - RADOS
7. ['7435855'] - mon/test_mon_osdmap_prune.sh: first_pinned != trim_to - Ceph - RADOS
8. ['7435767', '7435636','7435971'] - rados/valgrind-leaks: expected valgrind issues and found none - Ceph - RADOS
9. ['7435999'] - No module named 'tasks' - Infrastructure
10. ['7435995'] - No module named 'tasks.nvme_loop' - Infrastructure
https://pulpito.ceph.com/yuriw-2023-10-30_15:34:36-rados-wip-yuri10-testing-2023-10-27-0804-distro-default-smithi/
Failures, unrelated:
1. https://tracker.ceph.com/issues/59380
2. https://tracker.ceph.com/issues/59142
3. https://tracker.ceph.com/issues/61774
4. https://tracker.ceph.com/issues/53767
5. https://tracker.ceph.com/issues/59196
6. https://tracker.ceph.com/issues/62535
Details:
1. ['7441165', '7441319'] - rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
2. ['7441240', '7441396'] - mgr/dashboard: fix e2e for dashboard v3
3. ['7441266', '7441336', '7441129', '7441267', '7441201'] - centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
4. ['7441167', '7441321'] - qa/workunits/cls/test_cls_2pc_queue.sh: killing an osd during thrashing causes timeout - Ceph - RADOS
5. ['7441250', '7441096'] - ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
6. ['7441374'] - cephadm: wait for healthy state times out because cephadm agent is down - Ceph - Orchestrator
https://pulpito.ceph.com/yuriw-2023-10-27_19:03:28-rados-wip-yuri8-testing-2023-10-27-0825-distro-default-smithi/
Failures, unrelated:
1. https://tracker.ceph.com/issues/59196
2. https://tracker.ceph.com/issues/61774
3. https://tracker.ceph.com/issues/62449
4. https://tracker.ceph.com/issues/59192
5. https://tracker.ceph.com/issues/48406
6. https://tracker.ceph.com/issues/62776
Details:
1. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
2. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
3. test/cls_2pc_queue: TestCls2PCQueue.MultiProducer and TestCls2PCQueue.AsyncConsumer failure
4. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
5. cephadm/test_dashboard_e2e.sh: error when testing orchestrator/04-osds.e2e-spec.ts - Ceph - Mgr - Dashboard
6. rados/basic: 2 pools do not have an application enabled - Ceph - RADOS
Not in Trello, but still a rados suite:
https://pulpito.ceph.com/ksirivad-2023-10-13_01:58:36-rados-wip-ksirivad-fix-63183-distro-default-smithi/
Failures, unrelated:
7423809 - https://tracker.ceph.com/issues/63198 <<-- New Tracker
7423821, 7423972 - https://tracker.ceph.com/issues/59142 - mgr/dashboard: fix e2e for dashboard v3 - Ceph - Mgr - Dashboard
7423826, 7423979 - https://tracker.ceph.com/issues/59196 - ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
7423849 - https://tracker.ceph.com/issues/61774 - centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
7423875 - Failed to get package from Shaman (infra failure)
7423896, 7424047 - https://tracker.ceph.com/issues/62449 - test/cls_2pc_queue: TestCls2PCQueue.MultiProducer and TestCls2PCQueue.AsyncConsumer failure
7423913 - https://tracker.ceph.com/issues/62119 - timeout on reserving replica
7423918 - https://tracker.ceph.com/issues/61787 - Command "ceph --cluster ceph osd dump --format=json" times out when killing OSD
7423980 - https://tracker.ceph.com/issues/62557 - Teuthology test failure due to "MDS_CLIENTS_LAGGY" warning
7423982 - https://tracker.ceph.com/issues/63121 - KeyValueDB/KVTest.RocksDB_estimate_size tests failing
7423984 - https://tracker.ceph.com/issues/61774 - centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons
7423995 - https://tracker.ceph.com/issues/62777 - rados/valgrind-leaks: expected valgrind issues and found none
7424052 - https://tracker.ceph.com/issues/55809 - "Leak_IndirectlyLost" valgrind report on mon.c
https://trello.com/c/PuCOnhYL/1841-wip-yuri5-testing-2023-10-02-1105-old-wip-yuri5-testing-2023-09-27-0959
https://pulpito.ceph.com/?branch=wip-yuri5-testing-2023-10-02-1105
Failures, unrelated:
1. https://tracker.ceph.com/issues/52624
2. https://tracker.ceph.com/issues/59380
3. https://tracker.ceph.com/issues/61774
4. https://tracker.ceph.com/issues/59196
5. https://tracker.ceph.com/issues/62449
6. https://tracker.ceph.com/issues/59142
Details:
1. qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" - Ceph - RADOS
2. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
3. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
4. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
5. test/cls_2pc_queue: TestCls2PCQueue.MultiProducer and TestCls2PCQueue.AsyncConsumer failure - Ceph - RGW
6. mgr/dashboard: fix e2e for dashboard v3 - Ceph - Mgr - Dashboard
https://trello.com/c/tGZO7hNK/1832-wip-yuri6-testing-2023-08-28-1308
Failures, unrelated:
- https://tracker.ceph.com/issues/59142
- https://tracker.ceph.com/issues/59196
- https://tracker.ceph.com/issues/55347
- https://tracker.ceph.com/issues/62084
- https://tracker.ceph.com/issues/62975 <<<---- New Tracker
- https://tracker.ceph.com/issues/62449
- https://tracker.ceph.com/issues/61774
- https://tracker.ceph.com/issues/53345
Details:
- mgr/dashboard: fix e2e for dashboard v3 - Ceph - Mgr - Dashboard
- ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
- SELinux Denials during cephadm/workunits/test_cephadm
- task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' - Ceph - CephFS
- site-packages/paramiko/channel.py: OSError: Socket is closed - Infrastructure
- test/cls_2pc_queue: TestCls2PCQueue.MultiProducer and TestCls2PCQueue.AsyncConsumer failure - Ceph - RGW
- centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
- Test failure: test_daemon_restart (tasks.cephadm_cases.test_cli.TestCephadmCLI) - Ceph - Orchestrator
Individual testing for https://github.com/ceph/ceph/pull/53344
http://pulpito.front.sepia.ceph.com/?branch=wip-lflores-testing-2-2023-09-08-1755
Failures, unrelated:
1. https://tracker.ceph.com/issues/57628
2. https://tracker.ceph.com/issues/62449
3. https://tracker.ceph.com/issues/61774
4. https://tracker.ceph.com/issues/53345
5. https://tracker.ceph.com/issues/59142
6. https://tracker.ceph.com/issues/59380
Details:
1. osd:PeeringState.cc: FAILED ceph_assert(info.history.same_interval_since != 0) - Ceph - RADOS
2. test/cls_2pc_queue: TestCls2PCQueue.MultiProducer and TestCls2PCQueue.AsyncConsumer failure - Ceph - RGW
3. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
4. Test failure: test_daemon_restart (tasks.cephadm_cases.test_cli.TestCephadmCLI) - Ceph - Orchestrator
5. mgr/dashboard: fix e2e for dashboard v3 - Ceph - Mgr - Dashboard
6. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
https://trello.com/c/JxeRJYse/1822-wip-yuri4-testing-2023-08-10-1739
https://pulpito.ceph.com/?branch=wip-yuri4-testing-2023-08-10-1739
Failures, unrelated:
1. https://tracker.ceph.com/issues/62084
2. https://tracker.ceph.com/issues/59192
3. https://tracker.ceph.com/issues/61774
4. https://tracker.ceph.com/issues/62449
5. https://tracker.ceph.com/issues/62777
6. https://tracker.ceph.com/issues/59196
7. https://tracker.ceph.com/issues/59380
8. https://tracker.ceph.com/issues/58946
Details:
1. task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' - CephFS
2. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
3. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
4. test/cls_2pc_queue: TestCls2PCQueue.MultiProducer and TestCls2PCQueue.AsyncConsumer failure - Ceph - RGW
5. rados/valgrind-leaks: expected valgrind issues and found none - Ceph - RADOS
6. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
7. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RADOS
8. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
https://trello.com/c/Wt1KTViI/1830-wip-yuri-testing-2023-08-25-0809
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri-testing-2023-08-25-0809
Failures, unrelated:
1. https://tracker.ceph.com/issues/62776
2. https://tracker.ceph.com/issues/61774
3. https://tracker.ceph.com/issues/62084
4. https://tracker.ceph.com/issues/58946
5. https://tracker.ceph.com/issues/59380
6. https://tracker.ceph.com/issues/62449
7. https://tracker.ceph.com/issues/59196
Details:
1. rados/basic: 2 pools do not have an application enabled - Ceph - RADOS
2. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
3. task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' - Ceph - CephFS
4. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
5. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
6. test/cls_2pc_queue: TestCls2PCQueue.MultiProducer and TestCls2PCQueue.AsyncConsumer failure - Ceph - RGW
7. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
https://trello.com/c/Fllj7bVM/1833-wip-yuri8-testing-2023-08-28-1340
https://pulpito.ceph.com/?branch=wip-yuri8-testing-2023-08-28-1340
Failures, unrelated:
1. https://tracker.ceph.com/issues/62728
2. https://tracker.ceph.com/issues/62084
3. https://tracker.ceph.com/issues/59142
4. https://tracker.ceph.com/issues/59380
5. https://tracker.ceph.com/issues/62449
6. https://tracker.ceph.com/issues/61774
7. https://tracker.ceph.com/issues/59196
Details:
1. Host key for server xxx does not match - Infrastructure
2. task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' - Ceph - CephFS
3. mgr/dashboard: fix e2e for dashboard v3 - Ceph - Mgr - Dashboard
4. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RADOS
5. test/cls_2pc_queue: TestCls2PCQueue.MultiProducer and TestCls2PCQueue.AsyncConsumer failure - Ceph - RGW
6. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
7. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
https://trello.com/c/pMEWaauy/1825-wip-yuri3-testing-2023-08-15-0955
Failures, unrelated:
7370238 - https://tracker.ceph.com/issues/59196
7370242, 7370322 - https://tracker.ceph.com/issues/62482
7370245, 7370298 - https://tracker.ceph.com/issues/62084
7370250, 7370317 - https://tracker.ceph.com/issues/53767
7370274, 7370343 - https://tracker.ceph.com/issues/61519
7370263 - https://tracker.ceph.com/issues/62713 (New tracker)
7370285, 7370286 - https://tracker.ceph.com/issues/61774
DEAD jobs, unrelated:
7370249, 7370316 - https://tracker.ceph.com/issues/59380
https://trello.com/c/i87i4GUf/1826-wip-yuri10-testing-2023-08-17-1444-old-wip-yuri10-testing-2023-08-15-1601-old-wip-yuri10-testing-2023-08-15-1009
Failures, unrelated:
7376678, 7376832 - https://tracker.ceph.com/issues/61786
7376687 - https://tracker.ceph.com/issues/59196
7376699 - https://tracker.ceph.com/issues/55347
7376739 - https://tracker.ceph.com/issues/61229
7376742, 7376887 - https://tracker.ceph.com/issues/62084
7376758, 7376914 - https://tracker.ceph.com/issues/62449
7376756, 7376912 - https://tracker.ceph.com/issues/59380
https://tracker.ceph.com/issues/61774
https://trello.com/c/MEs20HAJ/1828-wip-yuri11-testing-2023-08-17-0823
https://pulpito.ceph.com/?branch=wip-yuri11-testing-2023-08-17-0823
Failures, unrelated:
1. https://tracker.ceph.com/issues/47838
2. https://tracker.ceph.com/issues/61774
3. https://tracker.ceph.com/issues/59196
4. https://tracker.ceph.com/issues/59380
5. https://tracker.ceph.com/issues/58946
6. https://tracker.ceph.com/issues/62535 -- new tracker
7. https://tracker.ceph.com/issues/62084
8. https://tracker.ceph.com/issues/62449
Details:
1. mon/test_mon_osdmap_prune.sh: first_pinned != trim_to - Ceph - RADOS
2. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
3. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
4. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
5. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
6. cephadm: wait for healthy state times out because cephadm agent is down - Ceph - Orchestrator
7. task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' - Ceph - CephFS
8. test/cls_2pc_queue: TestCls2PCQueue.MultiProducer and TestCls2PCQueue.AsyncConsumer failure - Ceph - RGW
https://pulpito.ceph.com/?branch=wip-yuri7-testing-2023-07-20-0727¶
Failures, unrelated:
1. https://tracker.ceph.com/issues/62084
2. https://tracker.ceph.com/issues/61161
3. https://tracker.ceph.com/issues/61774
4. https://tracker.ceph.com/issues/62167
5. https://tracker.ceph.com/issues/62212
6. https://tracker.ceph.com/issues/58946
7. https://tracker.ceph.com/issues/59196
8. https://tracker.ceph.com/issues/59380
Details:
1. task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' - Ceph - CephFS
2. Creating volume group 'vg_nvme' failed - Ceph - Ceph-Ansible
3. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
4. FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get_missing().is_missing(soid) || (it_objects != recovery_state.get_pg_log().get_log().objects.end() && it_objects->second->op == pg_log_entry_t::LOST_REVERT)) - Ceph - RADOS
5. cannot create directory ‘/home/ubuntu/cephtest/archive/audit’: No such file or directory - Tools - Teuthology
6. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
7. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
8. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
WIP https://trello.com/c/1JlLNnGN/1812-wip-yuri5-testing-2023-07-24-0814¶
https://pulpito.ceph.com/?branch=wip-yuri5-testing-2023-07-24-0814
Failures:
1. 7350969, 7351069 -> https://tracker.ceph.com/issues/59192
2. 7350972, 7351023 -> https://tracker.ceph.com/issues/62073
3. 7350977, 7351059 -> https://tracker.ceph.com/issues/53767 (to verify)
4. 7350983, 7351016, 7351019 -> https://tracker.ceph.com/issues/61774 (valgrind)
5. 7351004, 7351090 -> https://tracker.ceph.com/issues/61519
6. 7351084 -> (selinux)
Dead:
7. 7350974, 7351053 -> no relevant info.
Details:
1. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
2. AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' - Ceph - CephFS
3. e2e - to verify
4. valgrind issues; not analyzed further, as they are irrelevant to this PR.
5. mgr/dashboard: fix test_dashboard_e2e.sh failure - Ceph - Mgr - Dashboard
6. selinux issue.
https://trello.com/c/GM3omhGs/1803-wip-yuri5-testing-2023-07-14-0757-old-wip-yuri5-testing-2023-07-12-1122¶
https://pulpito.ceph.com/?branch=wip-yuri5-testing-2023-07-14-0757
Failures:
1. 7341711 -> https://tracker.ceph.com/issues/62073 -> AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
2. 7341716 -> https://tracker.ceph.com/issues/58946 -> cephadm: KeyError: 'osdspec_affinity'
3. 7341717 -> https://tracker.ceph.com/issues/62073 -> AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
4. 7341720 -> https://tracker.ceph.com/issues/58946 -> cephadm: KeyError: 'osdspec_affinity'
Dead:
1. 7341712 -> https://tracker.ceph.com/issues/59380 -> rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)"
2. 7341719 -> https://tracker.ceph.com/issues/59380 -> rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)"
https://trello.com/c/6no1SSqS/1762-wip-yuri2-testing-2023-07-17-0957-old-wip-yuri2-testing-2023-07-15-0802-old-wip-yuri2-testing-2023-07-13-1236-old-wip-yuri2-test¶
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri2-testing-2023-07-15-0802
Failures:
1. https://tracker.ceph.com/issues/61774 -- valgrind leak in centos 9; not major outside of qa but needs to be suppressed
2. https://tracker.ceph.com/issues/58946
3. https://tracker.ceph.com/issues/59192
4. https://tracker.ceph.com/issues/59380
5. https://tracker.ceph.com/issues/61385
6. https://tracker.ceph.com/issues/62073
Details:
1. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
2. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
3. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
4. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
5. TEST_dump_scrub_schedule fails from "key is query_active: negation:0 # expected: true # in actual: false" - Ceph - RADOS
6. AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' - Ceph - CephFS
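The recurring POOL_APP_NOT_ENABLED health warning above (tracker 59192) fires when a pool is created without an application tag. A minimal sketch of clearing it on a live cluster; the pool name "test-pool" and the pg count are illustrative, and these are cluster configuration commands, not standalone-runnable:

```shell
# Create a pool, then tag it with an application so the
# POOL_APP_NOT_ENABLED health warning is not raised
# (pool name "test-pool" is illustrative; "rados" is the generic app tag)
ceph osd pool create test-pool 32
ceph osd pool application enable test-pool rados

# Verify: health detail should no longer list POOL_APP_NOT_ENABLED
ceph health detail
```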
https://trello.com/c/Vbn2qwgo/1793-wip-yuri-testing-2023-07-14-1641-old-wip-yuri-testing-2023-07-12-1332-old-wip-yuri-testing-2023-07-12-1140-old-wip-yuri-testing¶
https://pulpito.ceph.com/?branch=wip-yuri-testing-2023-07-14-1641
Failures, unrelated:
1. https://tracker.ceph.com/issues/58946
2. https://tracker.ceph.com/issues/59380
3. https://tracker.ceph.com/issues/62073
Details:
1. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
2. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
3. AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' - Ceph - CephFS
https://trello.com/c/7y6uj4bo/1800-wip-yuri6-testing-2023-07-10-0816¶
Failures:
7332480 -> https://tracker.ceph.com/issues/58946 -> cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
7332558 -> https://tracker.ceph.com/issues/57302 -> Test failure: test_create_access_permissions (tasks.mgr.dashboard.test_pool.PoolTest)
7332565 -> https://tracker.ceph.com/issues/57754 -> test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Infrastructure
7332612 -> https://tracker.ceph.com/issues/57754 -> test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Infrastructure
7332613 -> https://tracker.ceph.com/issues/55347 -> SELinux Denials during cephadm/workunits/test_cephadm
7332636 -> https://tracker.ceph.com/issues/58946 -> cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
Dead:
7332357 -> https://tracker.ceph.com/issues/61164 -> Error reimaging machines: reached maximum tries (100) after waiting for 600 seconds
7332405 -> https://tracker.ceph.com/issues/59380 -> rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)"
7332559 -> https://tracker.ceph.com/issues/59380 -> rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)"
https://trello.com/c/BHAY6fGO/1801-wip-yuri10-testing-2023-07-10-1345¶
https://pulpito.ceph.com/?branch=wip-yuri10-testing-2023-07-10-1345
Failures, unrelated:
1. https://tracker.ceph.com/issues/58946
2. https://tracker.ceph.com/issues/50242
3. https://tracker.ceph.com/issues/55347
4. https://tracker.ceph.com/issues/59380
Details:
1. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
2. test_repair_corrupted_obj fails with assert not inconsistent
3. SELinux Denials during cephadm/workunits/test_cephadm - Infrastructure
4. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)"
https://trello.com/c/bn3IMWEB/1783-wip-yuri7-testing-2023-06-23-1022-old-wip-yuri7-testing-2023-06-12-1220-old-wip-yuri7-testing-2023-06-09-1607¶
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri7-testing-2023-06-23-1022
Failures, unrelated:
1. https://tracker.ceph.com/issues/59380
2. https://tracker.ceph.com/issues/57754
3. https://tracker.ceph.com/issues/59196
4. https://tracker.ceph.com/issues/59057
5. https://tracker.ceph.com/issues/57754
6. https://tracker.ceph.com/issues/55347
7. https://tracker.ceph.com/issues/58946
8. https://tracker.ceph.com/issues/61951 -- new tracker
Details:
1. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
2. test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Infrastructure
3. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
4. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' - Ceph - RADOS
5. test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Infrastructure
6. SELinux Denials during cephadm/workunits/test_cephadm - Infrastructure
7. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
8. cephadm: OrchestratorError: Must set public_network config option or specify a CIDR network, ceph addrvec, or plain IP - Ceph - Orchestrator
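The OrchestratorError in item 8 is raised when cephadm cannot derive a monitor network. A hedged sketch of the two usual ways to satisfy it; the CIDR and IP below are placeholders, and these are cluster bootstrap/config commands rather than standalone-runnable code:

```shell
# Option 1: tell the cluster its public network explicitly
# (10.0.0.0/24 is a placeholder CIDR)
ceph config set mon public_network 10.0.0.0/24

# Option 2: give cephadm a concrete mon IP at bootstrap time
# (10.0.0.1 is a placeholder address)
cephadm bootstrap --mon-ip 10.0.0.1
```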
https://trello.com/c/BVxlgRvT/1782-wip-yuri5-testing-2023-06-28-1515-old-wip-yuri5-testing-2023-06-21-0750-old-wip-yuri5-testing-2023-06-16-1012-old-wip-yuri5-test¶
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri5-testing-2023-06-28-1515
Failures, unrelated:
1. https://tracker.ceph.com/issues/58946
2. https://tracker.ceph.com/issues/59380
3. https://tracker.ceph.com/issues/55347
4. https://tracker.ceph.com/issues/57754
5. https://tracker.ceph.com/issues/59057
6. https://tracker.ceph.com/issues/61897
7. https://tracker.ceph.com/issues/61940 -- new tracker
Details:
1. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
2. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
3. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
4. test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Ceph - RADOS
5. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' - Ceph - RADOS
6. qa: rados:mgr fails with MDS_CLIENTS_LAGGY - Ceph - CephFS
7. "test_cephfs_mirror" fails from stray cephadm daemon - Ceph - Orchestrator
2023 Jun 23¶
https://pulpito.ceph.com/rishabh-2023-06-21_22:15:54-rados-wip-rishabh-improvements-authmon-distro-default-smithi/
https://pulpito.ceph.com/rishabh-2023-06-22_10:54:41-rados-wip-rishabh-improvements-authmon-distro-default-smithi/
- https://tracker.ceph.com/issues/58946 - cephadm: KeyError: 'osdspec_affinity'
- https://tracker.ceph.com/issues/57754 - test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist
- https://tracker.ceph.com/issues/61784 - test_envlibrados_for_rocksdb.sh: '~ubuntu-toolchain-r' user or team does not exist
- https://tracker.ceph.com/issues/61832 - osd-scrub-dump.sh: ERROR: Extra scrubs after test completion...not expected
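The "alternative path /usr/bin/gcc-11 doesn't exist" failure above comes from a setup step of roughly this shape, which fails when the compiler package was never installed. A sketch under the assumption the node is Ubuntu and the package is named gcc-11; it needs root and is not standalone-runnable:

```shell
# The alternative path must exist before registration,
# so install the toolchain first (Ubuntu package names assumed)
apt-get install -y gcc-11 g++-11

# Register gcc-11; update-alternatives aborts with
# "alternative path /usr/bin/gcc-11 doesn't exist" if the install failed
update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-11 100 \
    --slave /usr/bin/g++ g++ /usr/bin/g++-11
```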
https://trello.com/c/CcKXkHLe/1789-wip-yuri3-testing-2023-06-19-1518¶
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri3-testing-2023-06-19-1518
Failures, unrelated:
1. https://tracker.ceph.com/issues/59057
2. https://tracker.ceph.com/issues/59380
3. https://tracker.ceph.com/issues/58946
Details:
1. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' - Ceph - RADOS
2. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
3. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
https://trello.com/c/Zbp7w1yE/1770-wip-yuri10-testing-2023-06-02-1406-old-wip-yuri10-testing-2023-05-30-1244¶
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri10-testing-2023-06-02-1406
Failures, unrelated:
1. https://tracker.ceph.com/issues/46877
2. https://tracker.ceph.com/issues/59057
3. https://tracker.ceph.com/issues/61225
4. https://tracker.ceph.com/issues/55347
5. https://tracker.ceph.com/issues/59380
Details:
1. mon_clock_skew_check: expected MON_CLOCK_SKEW but got none - Ceph - RADOS
2. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
3. TestClsRbd.mirror_snapshot failure - Ceph - RBD
4. SELinux Denials during cephadm/workunits/test_cephadm - Infrastructure
5. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RADOS
https://trello.com/c/lyHYQLgL/1771-wip-yuri11-testing-2023-05-30-1325¶
https://pulpito.ceph.com/?branch=wip-yuri11-testing-2023-05-30-1325
Failures, unrelated:
1. https://tracker.ceph.com/issues/59678
2. https://tracker.ceph.com/issues/55347
3. https://tracker.ceph.com/issues/59380
4. https://tracker.ceph.com/issues/61519
5. https://tracker.ceph.com/issues/61225
Details:
1. rados/test_envlibrados_for_rocksdb.sh: Error: Unable to find a match: snappy-devel - Infrastructure
2. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
3. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RADOS
4. mgr/dashboard: fix test_dashboard_e2e.sh failure - Ceph - Mgr - Dashboard
5. TestClsRbd.mirror_snapshot failure - Ceph - RBD
https://trello.com/c/8FwhCHxc/1774-wip-yuri-testing-2023-06-01-0746¶
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri-testing-2023-06-01-0746
Failures, unrelated:
1. https://tracker.ceph.com/issues/59380
2. https://tracker.ceph.com/issues/61578 -- new tracker
3. https://tracker.ceph.com/issues/59192
4. https://tracker.ceph.com/issues/61225
5. https://tracker.ceph.com/issues/59057
Details:
1. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RADOS
2. test_dashboard_e2e.sh: Can't run because no spec files were found - Ceph - Mgr - Dashboard
3. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
4. TestClsRbd.mirror_snapshot failure - Ceph - RBD
5. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
https://trello.com/c/g4OvqEZx/1766-wip-yuri-testing-2023-05-26-1204¶
https://pulpito.ceph.com/?branch=wip-yuri-testing-2023-05-26-1204
Failures, unrelated:
1. https://tracker.ceph.com/issues/61386
2. https://tracker.ceph.com/issues/61497 -- new tracker
3. https://tracker.ceph.com/issues/61225
4. https://tracker.ceph.com/issues/58560
5. https://tracker.ceph.com/issues/59057
6. https://tracker.ceph.com/issues/55347
Details:
1. TEST_recovery_scrub_2: TEST FAILED WITH 1 ERRORS - Ceph - RADOS
2. ERROR:gpu_memory_buffer_support_x11.cc(44)] dri3 extension not supported - Dashboard
3. TestClsRbd.mirror_snapshot failure - Ceph - RBD
4. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
5. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
6. SELinux Denials during cephadm/workunits/test_cephadm - Infrastructure
The remaining 4 Rook test failures are due to the repo http://apt.kubernetes.io not being signed:
- The repository 'https://apt.kubernetes.io kubernetes-xenial InRelease' is not signed.
https://trello.com/c/1LQJnuRh/1759-wip-yuri8-testing-2023-05-23-0802¶
https://pulpito.ceph.com/?branch=wip-yuri8-testing-2023-05-23-0802
Failures, unrelated:
1. https://tracker.ceph.com/issues/61225
2. https://tracker.ceph.com/issues/61402
3. https://tracker.ceph.com/issues/59678
4. https://tracker.ceph.com/issues/59057
5. https://tracker.ceph.com/issues/59380
Details:
1. TestClsRbd.mirror_snapshot failure - Ceph - RBD
2. test_dashboard_e2e.sh: AssertionError: Timed out retrying after 120000ms: Expected to find content: '/^smithi160$/' within the selector: 'datatable-body-row datatable-body-cell:nth-child(2)' but never did. - Ceph - Mgr - Dashboard
3. rados/test_envlibrados_for_rocksdb.sh: Error: Unable to find a match: snappy-devel - Ceph - RADOS
4. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
5. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RADOS
https://trello.com/c/J04nAx3y/1756-wip-yuri11-testing-2023-05-19-0836¶
https://pulpito.ceph.com/?branch=wip-yuri11-testing-2023-05-19-0836
Failures, unrelated:
1. https://tracker.ceph.com/issues/58585
2. https://tracker.ceph.com/issues/61256
3. https://tracker.ceph.com/issues/59380
4. https://tracker.ceph.com/issues/58946
5. https://tracker.ceph.com/issues/61225
6. https://tracker.ceph.com/issues/59057
7. https://tracker.ceph.com/issues/58560
8. https://tracker.ceph.com/issues/57755
9. https://tracker.ceph.com/issues/55347
Details:
1. rook: failed to pull kubelet image - Ceph - Orchestrator
2. Upgrade test fails after prometheus_receiver connection is refused - Ceph - Orchestrator
3. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
4. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
5. TestClsRbd.mirror_snapshot failure - Ceph - RBD
6. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
7. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
8. task/test_orch_cli: test_cephfs_mirror times out - Ceph - Orchestrator
9. SELinux Denials during cephadm/workunits/test_cephadm - Infrastructure
https://trello.com/c/nvyHvlZ4/1745-wip-yuri8-testing-2023-05-10-1402¶
There was an RGW multisite test failure, but it turned out to be related
to an unmerged PR in the batch, which was dropped.
https://pulpito.ceph.com/?branch=wip-yuri8-testing-2023-05-10-1402
https://pulpito.ceph.com/?branch=wip-yuri8-testing-2023-05-18-1232
Failures, unrelated:
1. https://tracker.ceph.com/issues/58560
2. https://tracker.ceph.com/issues/59196
3. https://tracker.ceph.com/issues/61225
4. https://tracker.ceph.com/issues/58585
5. https://tracker.ceph.com/issues/58946
6. https://tracker.ceph.com/issues/61256 -- new tracker
7. https://tracker.ceph.com/issues/59380
8. https://tracker.ceph.com/issues/59333
Details:
1. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
2. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
3. TestClsRbd.mirror_snapshot failure - Ceph - RBD
4. rook: failed to pull kubelet image - Ceph - Orchestrator
5. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
6. Upgrade test fails after prometheus_receiver connection is refused - Ceph - Orchestrator
7. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
8. PgScrubber: timeout on reserving replicas - Ceph - RADOS
https://trello.com/c/lM1xjBe0/1744-wip-yuri-testing-2023-05-10-0917¶
https://pulpito.ceph.com/?branch=wip-yuri-testing-2023-05-10-0917
Failures, unrelated:
1. https://tracker.ceph.com/issues/61225
2. https://tracker.ceph.com/issues/58585
3. https://tracker.ceph.com/issues/44889
4. https://tracker.ceph.com/issues/59193
5. https://tracker.ceph.com/issues/58946
6. https://tracker.ceph.com/issues/58560
7. https://tracker.ceph.com/issues/49287
8. https://tracker.ceph.com/issues/55347
9. https://tracker.ceph.com/issues/59380
10. https://tracker.ceph.com/issues/59192
11. https://tracker.ceph.com/issues/61261 -- new tracker
12. https://tracker.ceph.com/issues/61262 -- new tracker
13. https://tracker.ceph.com/issues/46877
Details:
1. TestClsRbd.mirror_snapshot failure - Ceph - RBD
2. rook: failed to pull kubelet image - Ceph - Orchestrator
3. workunit does not respect suite_branch when it comes to checkout sha1 on remote host - Tools - Teuthology
4. "Failed to fetch package version from https://shaman.ceph.com/api/search ..." - Infrastructure
5. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
6. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
7. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
8. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
9. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
10. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
11. test_cephadm.sh: missing container image causes job to fail - Ceph - Orchestrator
12. Cephadm task times out when waiting for osds to come up - Ceph - Orchestrator
13. mon_clock_skew_check: expected MON_CLOCK_SKEW but got none - Ceph - RADOS
https://trello.com/c/1EFSeXDn/1752-wip-yuri10-testing-2023-05-16-1243¶
https://pulpito.ceph.com/?branch=wip-yuri10-testing-2023-05-16-1243
There was also one failure related to http://archive.ubuntu.com/ubuntu that appears transient.
Failures, unrelated:
1. https://tracker.ceph.com/issues/58585
2. https://tracker.ceph.com/issues/44889
3. https://tracker.ceph.com/issues/59678
4. https://tracker.ceph.com/issues/55347
5. https://tracker.ceph.com/issues/58946
6. https://tracker.ceph.com/issues/61225 -- new tracker
7. https://tracker.ceph.com/issues/59380
8. https://tracker.ceph.com/issues/49888
9. https://tracker.ceph.com/issues/59192
Details:
1. rook: failed to pull kubelet image - Ceph - Orchestrator
2. workunit does not respect suite_branch when it comes to checkout sha1 on remote host - Tools - Teuthology
3. rados/test_envlibrados_for_rocksdb.sh: Error: Unable to find a match: snappy-devel - Ceph - RADOS
4. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
5. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
6. TestClsRbd.mirror_snapshot failure - Ceph - RBD
7. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
8. rados/singleton: radosbench.py: teuthology.exceptions.MaxWhileTries: reached maximum tries (3650) after waiting for 21900 seconds - Ceph - RADOS
9. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
https://trello.com/c/AjBYBGYC/1738-wip-yuri7-testing-2023-04-19-1343-old-wip-yuri7-testing-2023-04-19-0721-old-wip-yuri7-testing-2023-04-18-0818¶
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri7-testing-2023-04-19-1343
Failures, unrelated:
1. https://tracker.ceph.com/issues/57755
2. https://tracker.ceph.com/issues/58946
3. https://tracker.ceph.com/issues/49888
4. https://tracker.ceph.com/issues/59380
5. https://tracker.ceph.com/issues/57754
6. https://tracker.ceph.com/issues/55347
7. https://tracker.ceph.com/issues/49287
Details:
1. task/test_orch_cli: test_cephfs_mirror times out - Ceph - Orchestrator
2. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
3. rados/singleton: radosbench.py: teuthology.exceptions.MaxWhileTries: reached maximum tries (3650) after waiting for 21900 seconds - Ceph - RADOS
4. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RGW
5. test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Ceph - RADOS
6. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
7. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
https://trello.com/c/YN2r7OyK/1740-wip-yuri3-testing-2023-04-25-1147¶
https://pulpito.ceph.com/?branch=wip-yuri3-testing-2023-04-25-1147
Failures, unrelated:
1. https://tracker.ceph.com/issues/59049
2. https://tracker.ceph.com/issues/59192
3. https://tracker.ceph.com/issues/59335
4. https://tracker.ceph.com/issues/59380
5. https://tracker.ceph.com/issues/58946
6. https://tracker.ceph.com/issues/50371
7. https://tracker.ceph.com/issues/59057
8. https://tracker.ceph.com/issues/53345
Details:
1. WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate event - Ceph - RADOS
2. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
3. Found coredumps on smithi related to sqlite3 - Ceph - Cephsqlite
4. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RADOS
5. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
6. Segmentation fault (core dumped) ceph_test_rados_api_watch_notify_pp - Ceph - RADOS
7. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
8. Test failure: test_daemon_restart (tasks.cephadm_cases.test_cli.TestCephadmCLI) - Ceph - Orchestrator
https://trello.com/c/YhSdHR96/1728-wip-yuri2-testing-2023-03-30-0826¶
https://pulpito.ceph.com/?branch=wip-yuri2-testing-2023-03-30-0826
Failures, unrelated:
1. https://tracker.ceph.com/issues/51964
2. https://tracker.ceph.com/issues/58946
3. https://tracker.ceph.com/issues/58758
4. https://tracker.ceph.com/issues/58585
5. https://tracker.ceph.com/issues/59380 -- new tracker
6. https://tracker.ceph.com/issues/59080
7. https://tracker.ceph.com/issues/59057
8. https://tracker.ceph.com/issues/59196
Details:
1. qa: test_cephfs_mirror_restart_sync_on_blocklist failure - Ceph - CephFS
2. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
3. qa: fix testcase 'test_cluster_set_user_config_with_non_existing_clusterid' - Ceph - CephFS
4. rook: failed to pull kubelet image - Ceph - Orchestrator
5. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RADOS
6. mclock-config.sh: TEST_profile_disallow_builtin_params_modify fails when $res == $opt_val_new - Ceph - RADOS
7. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
8. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
https://trello.com/c/wCN5TQud/1729-wip-yuri4-testing-2023-03-31-1237¶
https://pulpito.ceph.com/?branch=wip-yuri4-testing-2023-03-31-1237
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2023-04-07-1825
Failures, unrelated:
1. https://tracker.ceph.com/issues/17945
2. https://tracker.ceph.com/issues/59049
3. https://tracker.ceph.com/issues/59196
4. https://tracker.ceph.com/issues/56393
5. https://tracker.ceph.com/issues/58946
6. https://tracker.ceph.com/issues/49287
7. https://tracker.ceph.com/issues/55347
8. https://tracker.ceph.com/issues/59057
9. https://tracker.ceph.com/issues/59380
Details:
1. ceph_test_rados_api_tier: failed to decode hitset in HitSetWrite test - Ceph - RADOS
2. WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate event - Ceph - RADOS
3. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
4. failed to complete snap trimming before timeout - Ceph - RADOS
5. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
6. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
7. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
8. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
9. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RADOS
https://trello.com/c/8Xlz4rIH/1727-wip-yuri11-testing-2023-03-31-1108-old-wip-yuri11-testing-2023-03-28-0950¶
https://pulpito.ceph.com/?branch=wip-yuri11-testing-2023-03-28-0950
https://pulpito.ceph.com/?branch=wip-yuri11-testing-2023-03-31-1108
Failures, unrelated:
1. https://tracker.ceph.com/issues/58585
2. https://tracker.ceph.com/issues/58946
3. https://tracker.ceph.com/issues/58265
4. https://tracker.ceph.com/issues/59271
5. https://tracker.ceph.com/issues/59057
6. https://tracker.ceph.com/issues/59333
7. https://tracker.ceph.com/issues/59334
8. https://tracker.ceph.com/issues/59335
Details:
1. rook: failed to pull kubelet image - Ceph - Orchestrator
2. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
3. TestClsRbd.group_snap_list_max_read failure during upgrade/parallel tests - Ceph - RBD
4. mon: FAILED ceph_assert(osdmon()->is_writeable()) - Ceph - RADOS
5. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
6. PgScrubber: timeout on reserving replicas - Ceph - RADOS
7. test_pool_create_with_quotas: Timed out after 60s and 0 retries - Ceph - Mgr - Dashboard
8. Found coredumps on smithi related to sqlite3 - Ceph - Cephsqlite
https://trello.com/c/yauI7omb/1726-wip-yuri7-testing-2023-03-29-1100-old-wip-yuri7-testing-2023-03-28-0942¶
https://pulpito.ceph.com/?branch=wip-yuri7-testing-2023-03-29-1100
Failures, unrelated:
1. https://tracker.ceph.com/issues/59192
2. https://tracker.ceph.com/issues/58585
3. https://tracker.ceph.com/issues/59057
4. https://tracker.ceph.com/issues/58946
5. https://tracker.ceph.com/issues/55347
6. https://tracker.ceph.com/issues/59196
7. https://tracker.ceph.com/issues/47838
8. https://tracker.ceph.com/issues/59080
Details:
1. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
2. rook: failed to pull kubelet image - Ceph - Orchestrator
3. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
4. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
5. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
6. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
7. mon/test_mon_osdmap_prune.sh: first_pinned != trim_to - Ceph - RADOS
8. mclock-config.sh: TEST_profile_disallow_builtin_params_modify fails when $res == $opt_val_new - Ceph - RADOS
https://trello.com/c/epwSlEHP/1722-wip-yuri4-testing-2023-03-25-0714-old-wip-yuri4-testing-2023-03-24-0910¶
https://pulpito.ceph.com/?branch=wip-yuri4-testing-2023-03-25-0714
Failures, unrelated:
1. https://tracker.ceph.com/issues/58946
2. https://tracker.ceph.com/issues/59196
3. https://tracker.ceph.com/issues/59271
4. https://tracker.ceph.com/issues/58585
5. https://tracker.ceph.com/issues/51964
6. https://tracker.ceph.com/issues/58560
7. https://tracker.ceph.com/issues/59192
Details:
1. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
2. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
3. mon: FAILED ceph_assert(osdmon()->is_writeable()) - Ceph - RADOS
4. rook: failed to pull kubelet image - Ceph - Orchestrator
5. qa: test_cephfs_mirror_restart_sync_on_blocklist failure - Ceph - CephFS
6. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Ceph - RADOS
7. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
https://trello.com/c/PEo71l9g/1720-wip-aclamk-bs-elastic-shared-blob-save-25032023-a-old-wip-lflores-testing-2023-03-22-2113¶
http://pulpito.front.sepia.ceph.com/?branch=wip-aclamk-bs-elastic-shared-blob-save-25.03.2023-a
Failures, unrelated:
1. https://tracker.ceph.com/issues/59058
2. https://tracker.ceph.com/issues/56034
3. https://tracker.ceph.com/issues/58585
4. https://tracker.ceph.com/issues/59172
5. https://tracker.ceph.com/issues/56192
6. https://tracker.ceph.com/issues/49287
7. https://tracker.ceph.com/issues/58758
8. https://tracker.ceph.com/issues/58946
9. https://tracker.ceph.com/issues/59057
10. https://tracker.ceph.com/issues/59192
Details:
1. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
2. qa/standalone/osd/divergent-priors.sh fails in test TEST_divergent_3() - Ceph - RADOS
3. rook: failed to pull kubelet image - Ceph - Orchestrator
4. test_pool_min_size: AssertionError: wait_for_clean: failed before timeout expired due to down PGs - Ceph - RADOS
5. crash: virtual Monitor::~Monitor(): assert(session_map.sessions.empty()) - Ceph - RADOS
6. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found
7. qa: fix testcase 'test_cluster_set_user_config_with_non_existing_clusterid' - Ceph - CephFS
8. cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
9. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
10. cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
https://trello.com/c/Qa8vTuf8/1717-wip-yuri4-testing-2023-03-15-1418¶
https://pulpito.ceph.com/?branch=wip-yuri4-testing-2023-03-15-1418
Failures, unrelated:
1. https://tracker.ceph.com/issues/58946
2. https://tracker.ceph.com/issues/56393
3. https://tracker.ceph.com/issues/59123
4. https://tracker.ceph.com/issues/58585
5. https://tracker.ceph.com/issues/58560
6. https://tracker.ceph.com/issues/59127
Details:
1. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
2. thrash-erasure-code-big: failed to complete snap trimming before timeout - Ceph - RADOS
3. Timeout opening channel - Infrastructure
4. rook: failed to pull kubelet image - Ceph - Orchestrator
5. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
6. Jobs that normally complete much sooner lasted almost 12 hours - Infrastructure
https://trello.com/c/fo5GZ0YC/1712-wip-yuri7-testing-2023-03-10-0830¶
https://pulpito.ceph.com/?branch=wip-yuri7-testing-2023-03-10-0830
Failures, unrelated:
1. https://tracker.ceph.com/issues/58946
2. https://tracker.ceph.com/issues/59079
3. https://tracker.ceph.com/issues/59080
4. https://tracker.ceph.com/issues/58585
5. https://tracker.ceph.com/issues/59057
Details:
1. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
2. AssertionError: timeout expired in wait_for_all_osds_up - Ceph - RADOS
3. mclock-config.sh: TEST_profile_disallow_builtin_params_modify fails when $res == $opt_val_new - Ceph - RADOS
4. rook: failed to pull kubelet image - Ceph - Orchestrator
5. rados/test_envlibrados_for_rocksdb.sh: No rule to make target 'rocksdb_env_librados_test' on centos 8 - Ceph - RADOS
https://trello.com/c/EbLKJDPm/1685-wip-yuri11-testing-2023-03-08-1220-old-wip-yuri11-testing-2023-03-01-1424-old-wip-yuri11-testing-2023-02-20-1329-old-wip-yuri11¶
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri11-testing-2023-03-08-1220
Failures, unrelated:
1. https://tracker.ceph.com/issues/58585
2. https://tracker.ceph.com/issues/58560
3. https://tracker.ceph.com/issues/58946
4. https://tracker.ceph.com/issues/49287
5. https://tracker.ceph.com/issues/57755
6. https://tracker.ceph.com/issues/52316
7. https://tracker.ceph.com/issues/58496
Details:
1. rook: failed to pull kubelet image - Ceph - Orchestrator
2. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
3. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
4. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
5. task/test_orch_cli: test_cephfs_mirror times out - Ceph - Orchestrator
6. qa/tasks/mon_thrash.py: _do_thrash AssertionError len(s['quorum']) == len(mons) - Ceph - RADOS
7. osd/PeeringState: FAILED ceph_assert(!acting_recovery_backfill.empty()) - Ceph - RADOS
https://trello.com/c/u5ydxGCS/1698-wip-yuri7-testing-2023-02-27-1105¶
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri7-testing-2023-02-27-1105
Failures, unrelated:
1. https://tracker.ceph.com/issues/58585
2. https://tracker.ceph.com/issues/58475
3. https://tracker.ceph.com/issues/57754
4. https://tracker.ceph.com/issues/50786
5. https://tracker.ceph.com/issues/49287
Details:
1. rook: failed to pull kubelet image - Ceph - Orchestrator
2. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard
3. test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Ceph - RADOS
4. UnicodeDecodeError: 'utf8' codec can't decode byte - Ceph - RADOS
5. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
https://trello.com/c/hIlO2MJn/1706-wip-yuri8-testing-2023-03-07-1527¶
https://pulpito.ceph.com/?branch=wip-yuri8-testing-2023-03-07-1527
Failures, unrelated:
1. https://tracker.ceph.com/issues/49287
2. https://tracker.ceph.com/issues/58585
3. https://tracker.ceph.com/issues/58560
4. https://tracker.ceph.com/issues/58946
5. https://tracker.ceph.com/issues/51964
Details:
1. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
2. rook: failed to pull kubelet image - Ceph - Orchestrator
3. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
4. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
5. qa: test_cephfs_mirror_restart_sync_on_blocklist failure - Ceph - CephFS
https://trello.com/c/bLUA7Wf5/1705-wip-yuri4-testing-2023-03-08-1234-old-wip-yuri4-testing-2023-03-07-1351¶
https://pulpito.ceph.com/?branch=wip-yuri4-testing-2023-03-08-1234
Failures, unrelated:
1. https://tracker.ceph.com/issues/58946
2. https://tracker.ceph.com/issues/58560
3. https://tracker.ceph.com/issues/58585
4. https://tracker.ceph.com/issues/55347
5. https://tracker.ceph.com/issues/49287
Details:
1. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
2. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
3. rook: failed to pull kubelet image - Ceph - Orchestrator
4. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
5. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
Main baseline 2/24/23¶
https://pulpito.ceph.com/?sha1=f9d812a56231a14fafcdfb339f87d3d9a9e6e55f
Failures:
1. https://tracker.ceph.com/issues/58560
2. https://tracker.ceph.com/issues/57771
3. https://tracker.ceph.com/issues/58585
4. https://tracker.ceph.com/issues/58475
5. https://tracker.ceph.com/issues/58758
6. https://tracker.ceph.com/issues/58797
7. https://tracker.ceph.com/issues/58893 -- new tracker
8. https://tracker.ceph.com/issues/49428
9. https://tracker.ceph.com/issues/55347
10. https://tracker.ceph.com/issues/58800
Details:
1. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
2. orch/cephadm suite: 'TESTDIR=/home/ubuntu/cephtest bash -s' fails - Ceph - Orchestrator
3. rook: failed to pull kubelet image - Ceph - Orchestrator
4. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21
5. qa: fix testcase 'test_cluster_set_user_config_with_non_existing_clusterid' - Ceph - CephFS
6. scrub/osd-scrub-dump.sh: TEST_recover_unexpected fails from "ERROR: Unexpectedly low amount of scrub reservations seen during test"
7. test_map_discontinuity: AssertionError: wait_for_clean: failed before timeout expired - Ceph - RADOS
8. ceph_test_rados_api_snapshots fails with "rados_mon_command osd pool create failed with error -22" - Ceph - RADOS
9. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
10. ansible: Failed to update apt cache: unknown reason - Infrastructure - Sepia
https://trello.com/c/t4cQVOvQ/1695-wip-yuri10-testing-2023-02-22-0848¶
http://pulpito.front.sepia.ceph.com:80/yuriw-2023-02-22_21:31:50-rados-wip-yuri10-testing-2023-02-22-0848-distro-default-smithi
http://pulpito.front.sepia.ceph.com:80/yuriw-2023-02-23_16:14:52-rados-wip-yuri10-testing-2023-02-22-0848-distro-default-smithi
Failures, unrelated:
1. https://tracker.ceph.com/issues/57754
2. https://tracker.ceph.com/issues/58475
3. https://tracker.ceph.com/issues/58585
4. https://tracker.ceph.com/issues/58797
Details:
1. test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Ceph - RADOS
2. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard
3. rook: failed to pull kubelet image - Ceph - Orchestrator
4. scrub/osd-scrub-dump.sh: TEST_recover_unexpected fails from "ERROR: Unexpectedly low amount of scrub reservations seen during test"
https://trello.com/c/hrTt8qIn/1693-wip-yuri6-testing-2023-02-24-0805-old-wip-yuri6-testing-2023-02-21-1406¶
https://pulpito.ceph.com/?branch=wip-yuri6-testing-2023-02-24-0805
Failures, unrelated:
1. https://tracker.ceph.com/issues/58585
2. https://tracker.ceph.com/issues/58560
3. https://tracker.ceph.com/issues/58797
4. https://tracker.ceph.com/issues/58744
5. https://tracker.ceph.com/issues/58475
Details:
1. rook: failed to pull kubelet image - Ceph - Orchestrator
2. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
3. scrub/osd-scrub-dump.sh: TEST_recover_unexpected fails from "ERROR: Unexpectedly low amount of scrub reservations seen during test" - Ceph - RADOS
4. qa: intermittent nfs test failures at nfs cluster creation - Ceph - CephFS
5. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard
https://trello.com/c/gleu2p6U/1689-wip-yuri-testing-2023-02-22-2037-old-wip-yuri-testing-2023-02-16-0839¶
https://pulpito.ceph.com/yuriw-2023-02-16_22:44:43-rados-wip-yuri-testing-2023-02-16-0839-distro-default-smithi
https://pulpito.ceph.com/lflores-2023-02-20_21:22:20-rados-wip-yuri-testing-2023-02-16-0839-distro-default-smithi
https://pulpito.ceph.com/yuriw-2023-02-23_16:42:54-rados-wip-yuri-testing-2023-02-22-2037-distro-default-smithi
https://pulpito.ceph.com/lflores-2023-02-23_17:54:36-rados-wip-yuri-testing-2023-02-22-2037-distro-default-smithi
Failures, unrelated:
1. https://tracker.ceph.com/issues/58585
2. https://tracker.ceph.com/issues/58560
3. https://tracker.ceph.com/issues/58496
4. https://tracker.ceph.com/issues/49961
5. https://tracker.ceph.com/issues/58861
6. https://tracker.ceph.com/issues/58797
7. https://tracker.ceph.com/issues/49428
Details:
1. rook: failed to pull kubelet image - Ceph - Orchestrator
2. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
3. osd/PeeringState: FAILED ceph_assert(!acting_recovery_backfill.empty()) - Ceph - RADOS
4. scrub/osd-recovery-scrub.sh: TEST_recovery_scrub_1 failed - Ceph - RADOS
5. OSError: cephadm config file not found - Ceph - Orchestrator
6. scrub/osd-scrub-dump.sh: TEST_recover_unexpected fails from "ERROR: Unexpectedly low amount of scrub reservations seen during test" - Ceph - RADOS
7. ceph_test_rados_api_snapshots fails with "rados_mon_command osd pool create failed with error -22" - Ceph - RADOS
https://trello.com/c/FzMz7O3S/1683-wip-yuri10-testing-2023-02-15-1245-old-wip-yuri10-testing-2023-02-06-0846-old-wip-yuri10-testing-2023-02-06-0809¶
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri10-testing-2023-02-15-1245
Failures, unrelated:
1. https://tracker.ceph.com/issues/58585
2. https://tracker.ceph.com/issues/58475
3. https://tracker.ceph.com/issues/58797 -- new tracker; seen in main baseline, therefore unrelated to trackers in this batch
Details:
1. rook: failed to pull kubelet image - Ceph - Orchestrator
2. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard
3. scrub/osd-scrub-dump.sh: TEST_recover_unexpected fails from "ERROR: Unexpectedly low amount of scrub reservations seen during test" - Ceph - RADOS
https://trello.com/c/buZUPZx0/1680-wip-yuri2-testing-2023-02-08-1429-old-wip-yuri2-testing-2023-02-06-1140-old-wip-yuri2-testing-2023-01-26-1532¶
https://pulpito.ceph.com/?branch=wip-yuri2-testing-2023-01-26-1532
Failures, unrelated:
1. https://tracker.ceph.com/issues/58496 -- fix in progress
2. https://tracker.ceph.com/issues/58585
3. https://tracker.ceph.com/issues/58475
4. https://tracker.ceph.com/issues/57754
5. https://tracker.ceph.com/issues/49287
6. https://tracker.ceph.com/issues/57731
7. https://tracker.ceph.com/issues/54829
8. https://tracker.ceph.com/issues/52221
Details:
1. osd/PeeringState: FAILED ceph_assert(!acting_recovery_backfill.empty()) - Ceph - RADOS
2. rook: failed to pull kubelet image - Ceph - Orchestrator
3. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard
4. test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Infrastructure
5. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
6. Problem: package container-selinux conflicts with udica < 0.2.6-1 provided by udica-0.2.4-1 - Infrastructure
7. crash: void OSDMap::check_health(ceph::common::CephContext*, health_check_map_t*) const: assert(num_down_in_osds <= num_in_osds) - Ceph - RADOS
8. crash: void OSD::handle_osd_map(MOSDMap*): assert(p != added_maps_bl.end()) - Ceph - RADOS
https://trello.com/c/GA6hud1j/1674-wip-yuri-testing-2023-01-23-0926-old-wip-yuri-testing-2023-01-12-0816-old-wip-yuri-testing-2023-01-11-0818-old-wip-yuri-testing¶
https://pulpito.ceph.com/?branch=wip-yuri-testing-2023-01-23-0926
Failures, unrelated:
1. https://tracker.ceph.com/issues/58587 -- new tracker
2. https://tracker.ceph.com/issues/58585
3. https://tracker.ceph.com/issues/58098 -- fix merged to latest main
4. https://tracker.ceph.com/issues/58256 -- fix merged to latest main
5. https://tracker.ceph.com/issues/57900
6. https://tracker.ceph.com/issues/58475
7. https://tracker.ceph.com/issues/58560
Details:
1. test_dedup_tool.sh: test_dedup_object fails when pool 'dedup_chunk_pool' does not exist - Ceph - RADOS
2. rook: failed to pull kubelet image - Ceph - Orchestrator
3. qa/workunits/rados/test_crash.sh: crashes are never posted - Ceph - RADOS
4. ObjectStore/StoreTestSpecificAUSize.SpilloverTest/2: Expected: (logger->get(l_bluefs_slow_used_bytes)) >= (16 * 1024 * 1024), actual: 0 vs 16777216 - Ceph - RADOS
5. mon/crush_ops.sh: mons out of quorum - Ceph - RADOS
6. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard
7. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Ceph
https://trello.com/c/583LyrTc/1667-wip-yuri2-testing-2023-01-23-0928-old-wip-yuri2-testing-2023-01-12-0816-old-wip-yuri2-testing-2023-01-11-0819-old-wip-yuri2-test¶
https://pulpito.ceph.com/?branch=wip-yuri2-testing-2023-01-23-0928
Failures, unrelated:
1. https://tracker.ceph.com/issues/58585 -- new tracker
2. https://tracker.ceph.com/issues/58256 -- fix merged to latest main
3. https://tracker.ceph.com/issues/58475
4. https://tracker.ceph.com/issues/57754 -- closed
5. https://tracker.ceph.com/issues/57546 -- fix is in testing
Details:
1. rook: failed to pull kubelet image - Ceph - Orchestrator
2. ObjectStore/StoreTestSpecificAUSize.SpilloverTest/2: Expected: (logger->get(l_bluefs_slow_used_bytes)) >= (16 * 1024 * 1024), actual: 0 vs 16777216 - Ceph - RADOS
3. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard
4. test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Ceph - RADOS
5. rados/thrash-erasure-code: wait_for_recovery timeout due to "active+clean+remapped+laggy" pgs - Ceph - RADOS
main baseline review -- https://pulpito.ceph.com/yuriw-2023-01-12_20:11:41-rados-main-distro-default-smithi/¶
Failures:
1. https://tracker.ceph.com/issues/58098 -- fix is in testing; holdup is issues with rhel satellite
2. https://tracker.ceph.com/issues/58258
3. https://tracker.ceph.com/issues/56000
4. https://tracker.ceph.com/issues/57632 -- fix is awaiting a review from the core team
5. https://tracker.ceph.com/issues/58475 -- new tracker
6. https://tracker.ceph.com/issues/57731
7. https://tracker.ceph.com/issues/58476 -- new tracker
8. https://tracker.ceph.com/issues/57303
9. https://tracker.ceph.com/issues/58256 -- fix is in testing
10. https://tracker.ceph.com/issues/57546 -- fix is in testing
11. https://tracker.ceph.com/issues/58496 -- new tracker
Details:
1. qa/workunits/rados/test_crash.sh: crashes are never posted - Ceph - RADOS
2. rook: kubelet fails from connection refused - Ceph - Orchestrator
3. task/test_nfs: ERROR: Daemon not found: mds.a.smithi060.ujwxef. See `cephadm ls` - Ceph - Orchestrator
4. test_envlibrados_for_rocksdb: free(): invalid pointer - Ceph - RADOS
5. test_dashboard_e2e.sh: Conflicting peer dependency: postcss@8.4.21 - Ceph - Mgr - Dashboard
6. Problem: package container-selinux conflicts with udica < 0.2.6-1 provided by udica-0.2.4-1 - Infrastructure
7. test_non_existent_cluster: cluster does not exist - Ceph - Orchestrator
8. qa/workunits/post-file.sh: postfile@drop.ceph.com: Permission denied - Ceph
9. ObjectStore/StoreTestSpecificAUSize.SpilloverTest/2: Expected: (logger->get(l_bluefs_slow_used_bytes)) >= (16 * 1024 * 1024), actual: 0 vs 16777216 - Ceph - RADOS
10. rados/thrash-erasure-code: wait_for_recovery timeout due to "active+clean+remapped+laggy" pgs - Ceph - RADOS
11. osd/PeeringState: FAILED ceph_assert(!acting_recovery_backfill.empty()) - Ceph - RADOS
https://trello.com/c/Mi1gMNFu/1662-wip-yuri-testing-2022-12-06-1204¶
https://pulpito.ceph.com/?branch=wip-yuri-testing-2022-12-06-1204
https://pulpito.ceph.com/?branch=wip-yuri-testing-2022-12-12-1136
Failures, unrelated:
1. https://tracker.ceph.com/issues/58096
2. https://tracker.ceph.com/issues/52321
3. https://tracker.ceph.com/issues/58173
4. https://tracker.ceph.com/issues/52129
5. https://tracker.ceph.com/issues/58097
6. https://tracker.ceph.com/issues/57546
7. https://tracker.ceph.com/issues/58098
8. https://tracker.ceph.com/issues/57731
9. https://tracker.ceph.com/issues/55606
10. https://tracker.ceph.com/issues/58256
11. https://tracker.ceph.com/issues/58258
Details:
1. test_cluster_set_reset_user_config: NFS mount fails due to missing ceph directory - Ceph - Orchestrator
2. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
3. api_aio_pp: failure on LibRadosAio.SimplePoolEIOFlag and LibRadosAio.PoolEIOFlag - Ceph - RADOS
4. LibRadosWatchNotify.AioWatchDelete failed - Ceph - RADOS
5. qa/workunits/post-file.sh: kex_exchange_identification: read: Connection reset by peer - Ceph - RADOS
6. rook: ensure CRDs are installed first - Ceph - Orchestrator
7. qa/workunits/rados/test_crash.sh: crashes are never posted - Ceph - RADOS
8. Problem: package container-selinux conflicts with udica < 0.2.6-1 provided by udica-0.2.4-1 - Infrastructure
9. [ERR] Unhandled exception from module 'devicehealth' while running on mgr.y: unknown - Ceph - CephSqlite
10. ObjectStore/StoreTestSpecificAUSize.SpilloverTest/2: Expected: (logger->get(l_bluefs_slow_used_bytes)) >= (16 * 1024 * 1024), actual: 0 vs 16777216 - Ceph - Bluestore
11. rook: kubelet fails from connection refused - Ceph - Orchestrator
https://trello.com/c/8pqA5fF3/1663-wip-yuri3-testing-2022-12-06-1211¶
https://pulpito.ceph.com/?branch=wip-yuri3-testing-2022-12-06-1211
Failures, unrelated:
1. https://tracker.ceph.com/issues/57311
2. https://tracker.ceph.com/issues/58098
3. https://tracker.ceph.com/issues/58096
4. https://tracker.ceph.com/issues/52321
5. https://tracker.ceph.com/issues/57731
6. https://tracker.ceph.com/issues/57546
Details:
1. rook: ensure CRDs are installed first - Ceph - Orchestrator
2. qa/workunits/rados/test_crash.sh: crashes are never posted - Ceph - RADOS
3. test_cluster_set_reset_user_config: NFS mount fails due to missing ceph directory - Ceph - Orchestrator
4. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
5. Problem: package container-selinux conflicts with udica < 0.2.6-1 provided by udica-0.2.4-1 - Infrastructure
6. rados/thrash-erasure-code: wait_for_recovery timeout due to "active+clean+remapped+laggy" pgs - Ceph - RADOS
https://trello.com/c/QrtToRWE/1643-wip-yuri6-testing-2022-11-23-1348-old-wip-yuri6-testing-2022-10-05-0912-old-wip-yuri6-testing-2022-09-29-0908¶
https://pulpito.ceph.com/?branch=wip-yuri6-testing-2022-11-23-1348
Failures, unrelated:
1. https://tracker.ceph.com/issues/58098
2. https://tracker.ceph.com/issues/58096
3. https://tracker.ceph.com/issues/57311
4. https://tracker.ceph.com/issues/58097
5. https://tracker.ceph.com/issues/57731
6. https://tracker.ceph.com/issues/52321
7. https://tracker.ceph.com/issues/51945
Details:
1. qa/workunits/rados/test_crash.sh: crashes are never posted - Ceph - RADOS
2. test_cluster_set_reset_user_config: NFS mount fails due to missing ceph directory - Ceph - Orchestrator
3. rook: ensure CRDs are installed first - Ceph - Orchestrator
4. qa/workunits/post-file.sh: kex_exchange_identification: read: Connection reset by peer - Infrastructure
5. Problem: package container-selinux conflicts with udica < 0.2.6-1 provided by udica-0.2.4-1 - Infrastructure
6. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
7. qa/workunits/mon/caps.sh: Error: Expected return 13, got 0 - Ceph - RADOS
https://trello.com/c/hdiNA6Zq/1651-wip-yuri7-testing-2022-10-17-0814¶
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri7-testing-2022-10-17-0814
Failures, unrelated:
1. https://tracker.ceph.com/issues/57311
2. https://tracker.ceph.com/issues/52321
3. https://tracker.ceph.com/issues/57546
4. https://tracker.ceph.com/issues/52657
5. https://tracker.ceph.com/issues/57935
6. https://tracker.ceph.com/issues/58097
7. https://tracker.ceph.com/issues/58096
8. https://tracker.ceph.com/issues/58098
9. https://tracker.ceph.com/issues/57731
Details:
1. rook: ensure CRDs are installed first - Ceph - Orchestrator
2. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
3. rados/thrash-erasure-code: wait_for_recovery timeout due to "active+clean+remapped+laggy" pgs - Ceph - RADOS
4. MOSDPGLog::encode_payload(uint64_t): Assertion `HAVE_FEATURE(features, SERVER_NAUTILUS)' - Ceph - RADOS
5. all test jobs get stuck at "Running task ansible.cephlab..." - Infrastructure - Sepia
6. qa/workunits/post-file.sh: kex_exchange_identification: read: Connection reset by peer - Infrastructure
7. test_cluster_set_reset_user_config: NFS mount fails due to missing ceph directory - Ceph - Orchestrator
8. qa/workunits/rados/test_crash.sh: workunit checks for crashes too early - Ceph - RADOS
9. Problem: package container-selinux conflicts with udica < 0.2.6-1 provided by udica-0.2.4-1 - Infrastructure
https://trello.com/c/h2f7yhfz/1657-wip-yuri4-testing-2022-11-10-1051¶
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2022-11-10-1051
Failures, unrelated:
1. https://tracker.ceph.com/issues/57311
2. https://tracker.ceph.com/issues/58097
3. https://tracker.ceph.com/issues/55347
4. https://tracker.ceph.com/issues/57731
5. https://tracker.ceph.com/issues/57790
6. https://tracker.ceph.com/issues/52321
7. https://tracker.ceph.com/issues/58046
8. https://tracker.ceph.com/issues/54372
9. https://tracker.ceph.com/issues/56000
10. https://tracker.ceph.com/issues/58098
Details:
1. rook: ensure CRDs are installed first - Ceph - Orchestrator
2. qa/workunits/post-file.sh: kex_exchange_identification: read: Connection reset by peer - Infrastructure
3. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
4. Problem: package container-selinux conflicts with udica < 0.2.6-1 provided by udica-0.2.4-1 - Infrastructure
5. Unable to locate package libcephfs1 - Infrastructure
6. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
7. qa/workunits/rados/test_librados_build.sh: specify redirect in curl command - Ceph - RADOS
8. No module named 'tasks' - Infrastructure
9. task/test_nfs: ERROR: Daemon not found: mds.a.smithi060.ujwxef. See `cephadm ls` - Ceph - Orchestrator
10. qa/workunits/rados/test_crash.sh: workunit checks for crashes too early - Ceph - RADOS
https://trello.com/c/zahAzjLl/1652-wip-yuri10-testing-2022-11-22-1711-old-wip-yuri10-testing-2022-11-10-1137-old-wip-yuri10-testing-2022-10-19-0810-old-wip-yuri10¶
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri10-testing-2022-10-19-0810
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri10-testing-2022-11-22-1711
Failures:
1. https://tracker.ceph.com/issues/52321
2. https://tracker.ceph.com/issues/58096 -- new tracker; unrelated to PR in this test batch
3. https://tracker.ceph.com/issues/57311
4. https://tracker.ceph.com/issues/58097 -- new tracker; unrelated to PR in this test batch
5. https://tracker.ceph.com/issues/58098 -- new tracker; unrelated to PR in this test batch
6. https://tracker.ceph.com/issues/57731
7. https://tracker.ceph.com/issues/57546
8. https://tracker.ceph.com/issues/52129
9. https://tracker.ceph.com/issues/57754
10. https://tracker.ceph.com/issues/57755
11. https://tracker.ceph.com/issues/58099 -- new tracker; flagged, but ultimately deemed unrelated by PR author
Details:
1. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
2. test_cluster_set_reset_user_config: NFS mount fails due to missing ceph directory - Ceph - Orchestrator
3. rook: ensure CRDs are installed first - Ceph - Orchestrator
4. qa/workunits/post-file.sh: Connection reset by peer - Ceph - RADOS
5. qa/workunits/rados/test_crash.sh: workunit checks for crashes too early - Ceph - RADOS
6. Problem: package container-selinux conflicts with udica < 0.2.6-1 provided by udica-0.2.4-1 - Infrastructure
7. rados/thrash-erasure-code: wait_for_recovery timeout due to "active+clean+remapped+laggy" pgs - Ceph - RADOS
8. LibRadosWatchNotify.AioWatchDelete failed - Ceph - RADOS
9. test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Infrastructure
10. task/test_orch_cli: test_cephfs_mirror times out - Ceph - Orchestrator
11. ObjectStore/StoreTestSpecificAUSize.SyntheticMatrixPreferDeferred/2 fails - Ceph - Bluestore
https://trello.com/c/Jm1c0Z5d/1631-wip-yuri4-testing-2022-09-27-1405-old-wip-yuri4-testing-2022-09-20-0734-old-wip-yuri4-testing-2022-09-14-0617-old-wip-yuri4-test¶
http://pulpito.front.sepia.ceph.com/?branch=wip-all-kickoff-r
Failures, unrelated:
1. https://tracker.ceph.com/issues/57386
2. https://tracker.ceph.com/issues/52321
3. https://tracker.ceph.com/issues/57731
4. https://tracker.ceph.com/issues/57311
5. https://tracker.ceph.com/issues/50042
6. https://tracker.ceph.com/issues/57546
Details:
1. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
2. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
3. Problem: package container-selinux conflicts with udica < 0.2.6-1 provided by udica-0.2.4-1 - Infrastructure
4. rook: ensure CRDs are installed first - Ceph - Orchestrator
5. rados/test.sh: api_watch_notify failures - Ceph - RADOS
6. rados/thrash-erasure-code: wait_for_recovery timeout due to "active+clean+remapped+laggy" pgs - Ceph - RADOS
https://trello.com/c/K7im36rK/1632-wip-yuri7-testing-2022-09-27-0743-old-wip-yuri7-testing-2022-09-26-0828-old-wip-yuri7-testing-2022-09-07-0820¶
http://pulpito.front.sepia.ceph.com/?branch=wip-lflores-testing
Failures, unrelated:
1. https://tracker.ceph.com/issues/57311
2. https://tracker.ceph.com/issues/57754 -- created a new Tracker; looks unrelated and was also found on a different test branch
3. https://tracker.ceph.com/issues/57386
4. https://tracker.ceph.com/issues/52321
5. https://tracker.ceph.com/issues/55142
6. https://tracker.ceph.com/issues/57731
7. https://tracker.ceph.com/issues/57755 -- created a new Tracker; unrelated to PR in this run
8. https://tracker.ceph.com/issues/57756 -- created a new Tracker; unrelated to PR in this run
9. https://tracker.ceph.com/issues/57757 -- created a new Tracker; seems unrelated since there was an instance tracked in Telemetry. Also, it is not from the area of code that was touched in this PR.
10. https://tracker.ceph.com/issues/57546
11. https://tracker.ceph.com/issues/53575
Details:
1. rook: ensure CRDs are installed first - Ceph - Orchestrator
2. test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Ceph - RADOS
3. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selector: 'cd-modal .badge' but never did - Ceph - Mgr - Dashboard
4. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
5. [ERR] : Unhandled exception from module 'devicehealth' while running on mgr.gibba002.nzpbzu: disk I/O error - Ceph - Cephsqlite
6. Problem: package container-selinux conflicts with udica < 0.2.6-1 provided by udica-0.2.4-1 - Infrastructure
7. task/test_orch_cli: test_cephfs_mirror times out - Ceph - Orchestrator
8. upgrade: notify retry canceled due to unrecoverable error after 1 attempts: unexpected status code 404: https://172.21.15.74:8443//api/prometheus_receiver - Ceph
9. ECUtil: terminate called after throwing an instance of 'ceph::buffer::v15_2_0::end_of_buffer' - Ceph - RADOS
10. rados/thrash-erasure-code: wait_for_recovery timeout due to "active+clean+remapped+laggy" pgs - Ceph - RADOS
11. Valgrind reports memory "Leak_PossiblyLost" errors concerning lib64 - Ceph - RADOS
https://trello.com/c/YRh3jaSk/1636-wip-yuri3-testing-2022-09-21-0921¶
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri3-testing-2022-09-21-0921
http://pulpito.front.sepia.ceph.com/yuriw-2022-09-26_23:41:33-rados-wip-yuri3-testing-2022-09-26-1342-distro-default-smithi/
Failures, unrelated:
1. https://tracker.ceph.com/issues/57311
2. https://tracker.ceph.com/issues/55853
3. https://tracker.ceph.com/issues/52321
Details:
1. rook: ensure CRDs are installed first - Ceph - Orchestrator
2. test_cls_rgw.sh: failures in 'cls_rgw.index_list' and 'cls_rgw.index_list_delimited' - Ceph - RGW
3. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
https://trello.com/c/6s76bhl0/1605-wip-yuri8-testing-2022-08-22-0646-old-wip-yuri8-testing-2022-08-19-0725-old-wip-yuri8-testing-2022-08-12-0833¶
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri8-testing-2022-08-22-0646
Failures:
1. https://tracker.ceph.com/issues/57269
2. https://tracker.ceph.com/issues/52321
3. https://tracker.ceph.com/issues/57270
4. https://tracker.ceph.com/issues/55853
5. https://tracker.ceph.com/issues/45721
6. https://tracker.ceph.com/issues/37660
7. https://tracker.ceph.com/issues/57122
8. https://tracker.ceph.com/issues/57165
9. https://tracker.ceph.com/issues/57303
10. https://tracker.ceph.com/issues/56574
11. https://tracker.ceph.com/issues/55986
12. https://tracker.ceph.com/issues/57332
Details:
1. rook: unable to read URL "https://docs.projectcalico.org/manifests/tigera-operator.yaml" - Ceph - Orchestrator
2. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
3. cephadm: RuntimeError: Failed command: apt-get update: E: The repository 'https://download.ceph.com/debian-octopus jammy Release' does not have a Release file. - Ceph - Orchestrator
4. test_cls_rgw.sh: failures in 'cls_rgw.index_list' and 'cls_rgw.index_list_delimited' - Ceph - RGW
5. CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_rados.TestWatchNotify.test - Ceph - RADOS
6. smithi195:'Failing rest of playbook due to missing NVMe card' - Infrastructure - Sepia
7. test failure: rados:singleton-nomsgr librados_hello_world - Ceph - RADOS
8. expected valgrind issues and found none - Ceph - RADOS
9. rados/cephadm: Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=b34ca7d1c2becd6090874ccda56ef4cd8dc64bf7 - Ceph - Orchestrator
10. rados/valgrind-leaks: cluster [WRN] Health check failed: 2 osds down (OSD_DOWN)" in cluster log - Ceph - RADOS
11. cephadm: Test failure: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS) - Ceph - Orchestrator
12. centos 8.stream and rhel 8.6 jobs fail to install ceph-test package due to xmlstarlet dependency - Ceph
https://trello.com/c/0Hp833bV/1613-wip-yuri11-testing-2022-08-24-0658-old-wip-yuri11-testing-2022-08-22-1005¶
https://pulpito.ceph.com/?branch=wip-yuri11-testing-2022-08-22-1005
http://pulpito.front.sepia.ceph.com/lflores-2022-08-25_17:56:48-rados-wip-yuri11-testing-2022-08-24-0658-distro-default-smithi/
Failures, unrelated:
1. https://tracker.ceph.com/issues/57122
2. https://tracker.ceph.com/issues/55986
3. https://tracker.ceph.com/issues/57270
4. https://tracker.ceph.com/issues/57165
5. https://tracker.ceph.com/issues/57207
6. https://tracker.ceph.com/issues/57268
7. https://tracker.ceph.com/issues/52321
8. https://tracker.ceph.com/issues/56573
9. https://tracker.ceph.com/issues/57163
10. https://tracker.ceph.com/issues/51282
11. https://tracker.ceph.com/issues/57310 -- opened a new Tracker for this; first time this has appeared, but it doesn't seem related to the PR tested in this run.
12. https://tracker.ceph.com/issues/55853
13. https://tracker.ceph.com/issues/57311 -- opened a new Tracker for this; unrelated to PR tested in this run
Details:
1. test failure: rados:singleton-nomsgr librados_hello_world - Ceph - RADOS
2. cephadm: Test failure: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS) - Ceph - Orchestrator
3. cephadm: RuntimeError: Failed command: apt-get update: E: The repository 'https://download.ceph.com/debian-octopus jammy Release' does not have a Release file. - Ceph - Orchestrator
4. expected valgrind issues and found none - Ceph - RADOS
5. AssertionError: Expected to find element: `cd-modal .badge:not(script,style):cy-contains('/^foo$/'), cd-modal .badge[type='submit'][value~='/^foo$/']`, but never found it. - Ceph - Mgr - Dashboard
6. rook: The CustomResourceDefinition "installations.operator.tigera.io" is invalid - Ceph - Orchestrator
7. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Rook
8. test_cephadm.sh: KeyError: 'TYPE' - Ceph - Orchestrator
9. free(): invalid pointer - Ceph - RADOS
10. pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings - Ceph - Mgr
11. StriperTest: The futex facility returned an unexpected error code - Ceph - RADOS
12. test_cls_rgw.sh: failures in 'cls_rgw.index_list' and 'cls_rgw.index_list_delimited' - Ceph - RGW
13. rook: ensure CRDs are installed first - Ceph - Orchestrator
https://trello.com/c/bTwMHBB1/1608-wip-yuri5-testing-2022-08-18-0812-old-wip-yuri5-testing-2022-08-16-0859¶
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri5-testing-2022-08-18-0812
Failures, unrelated:
1. https://tracker.ceph.com/issues/57207
2. https://tracker.ceph.com/issues/52321
3. https://tracker.ceph.com/issues/57270
4. https://tracker.ceph.com/issues/57122
5. https://tracker.ceph.com/issues/55986
6. https://tracker.ceph.com/issues/57302
Details:
1. AssertionError: Expected to find element: `cd-modal .badge:not(script,style):cy-contains('/^foo$/'), cd-modal .badge[type='submit'][value~='/^foo$/']`, but never found it. - Ceph - Mgr - Dashboard
2. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
3. cephadm: RuntimeError: Failed command: apt-get update: E: The repository 'https://download.ceph.com/debian-octopus jammy Release' does not have a Release file. - Ceph - Orchestrator
4. test failure: rados:singleton-nomsgr librados_hello_world - Ceph - RADOS
5. cephadm: Test failure: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS) - Ceph - Orchestrator
6. ERROR: test_get_status (tasks.mgr.dashboard.test_cluster.ClusterTest) mgr/dashboard: short_description - Ceph - Mgr - Dashboard
https://trello.com/c/TMFa8xSl/1581-wip-yuri8-testing-2022-07-18-0918-old-wip-yuri8-testing-2022-07-12-1008-old-wip-yuri8-testing-2022-07-11-0903¶
https://pulpito.ceph.com/?branch=wip-yuri8-testing-2022-07-18-0918
Failures, unrelated:
1. https://tracker.ceph.com/issues/56573
2. https://tracker.ceph.com/issues/56574
3. https://tracker.ceph.com/issues/52321
4. https://tracker.ceph.com/issues/55854
5. https://tracker.ceph.com/issues/53422
6. https://tracker.ceph.com/issues/55853
7. https://tracker.ceph.com/issues/52124
Details:
1. test_cephadm.sh: KeyError: 'TYPE' - Ceph - Orchestrator
2. rados/valgrind-leaks: cluster [WRN] Health check failed: 2 osds down (OSD_DOWN)" in cluster log - Ceph - RADOS
3. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Rook
4. Datetime AssertionError in test_health_history (tasks.mgr.test_insights.TestInsights) - Ceph - Mgr
5. tasks.cephfs.test_nfs.TestNFS.test_export_create_with_non_existing_fsname: AssertionError: NFS Ganesha cluster deployment failed - Ceph - Orchestrator
6. test_cls_rgw.sh: failures in 'cls_rgw.index_list' and 'cls_rgw.index_list_delimited' - Ceph - RGW
7. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
https://trello.com/c/8wxrTRRy/1558-wip-yuri5-testing-2022-06-16-0649¶
https://pulpito.ceph.com/yuriw-2022-06-16_18:33:18-rados-wip-yuri5-testing-2022-06-16-0649-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2022-06-17_13:52:49-rados-wip-yuri5-testing-2022-06-16-0649-distro-default-smithi/
Failures, unrelated:
1. https://tracker.ceph.com/issues/55853
2. https://tracker.ceph.com/issues/52321
3. https://tracker.ceph.com/issues/45721
4. https://tracker.ceph.com/issues/55986
5. https://tracker.ceph.com/issues/44595
6. https://tracker.ceph.com/issues/55854
7. https://tracker.ceph.com/issues/56097 -- opened a new Tracker for this; historically, this has occurred previously on a Pacific test branch, so it does not seem related to this PR.
8. https://tracker.ceph.com/issues/56098 -- opened a new Tracker for this; this is the first sighting that I am aware of, but it does not seem related to the tested PR.
Details:
1. test_cls_rgw.sh: failures in 'cls_rgw.index_list' and 'cls_rgw.index_list_delimited' - Ceph - RGW
2. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
3. CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_rados.TestWatchNotify.test - Ceph - RADOS
4. cephadm: Test failure: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS) - Ceph - Cephadm
5. cache tiering: Error: oid 48 copy_from 493 returned error code -2 - Ceph - RADOS
6. Datetime AssertionError in test_health_history (tasks.mgr.test_insights.TestInsights) - Ceph - Mgr
7. Timeout on `sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.1 flush_pg_stats` - Ceph - RADOS
8. api_tier_pp: failure on LibRadosTwoPoolsPP.ManifestRefRead - Ceph - RADOS
https://trello.com/c/eGWSLHXA/1550-wip-yuri8-testing-2022-06-13-0701-old-wip-yuri8-testing-2022-06-07-1522¶
Failures, unrelated:
1. https://tracker.ceph.com/issues/53575
2. https://tracker.ceph.com/issues/55986
3. https://tracker.ceph.com/issues/55853
4. https://tracker.ceph.com/issues/52321
5. https://tracker.ceph.com/issues/55741
6. https://tracker.ceph.com/issues/51835
Details:
1. Valgrind reports memory "Leak_PossiblyLost" errors concerning lib64 - Ceph - RADOS
2. cephadm: Test failure: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS) - Ceph - Orchestrator
3. test_cls_rgw.sh: failures in 'cls_rgw.index_list' and 'cls_rgw.index_list_delimited' - Ceph - RGW
4. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
5. cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal .custom-control-label` when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard
6. mgr/DaemonServer.cc: FAILED ceph_assert(pending_service_map.epoch > service_map.epoch) - Ceph - RADOS
https://trello.com/c/HGpb1F4j/1549-wip-yuri7-testing-2022-06-13-0706-old-wip-yuri7-testing-2022-06-07-1325¶
Failures, unrelated:
1. https://tracker.ceph.com/issues/55986
2. https://tracker.ceph.com/issues/52321
3. https://tracker.ceph.com/issues/52124
4. https://tracker.ceph.com/issues/52316
5. https://tracker.ceph.com/issues/55322
6. https://tracker.ceph.com/issues/55741
7. https://tracker.ceph.com/issues/56034 --> new Tracker; unrelated to the PRs in this run.
Details:
1. cephadm: Test failure: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS) - Ceph - Orchestrator
2. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
3. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
4. qa/tasks/mon_thrash.py: _do_thrash AssertionError len(s['quorum']) == len(mons) - Ceph - RADOS
5. test-restful.sh: mon metadata unable to be retrieved - Ceph - Mgr
6. cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal .custom-control-label` when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard
7. qa/standalone/osd/divergent-priors.sh fails in test TEST_divergent_3() - Ceph - RADOS
https://trello.com/c/SUV9RgLi/1552-wip-yuri3-testing-2022-06-09-1314¶
Failures, unrelated:
1. https://tracker.ceph.com/issues/52321
2. https://tracker.ceph.com/issues/55971
3. https://tracker.ceph.com/issues/55853
4. https://tracker.ceph.com/issues/56000 --> opened a new Tracker for this; unrelated to the PR tested in this run.
5. https://tracker.ceph.com/issues/55741
6. https://tracker.ceph.com/issues/55142
Details:
1. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
2. LibRadosMiscConnectFailure.ConnectFailure test failure - Ceph - CephFS
3. test_cls_rgw.sh: failures in 'cls_rgw.index_list' and 'cls_rgw.index_list_delimited' - Ceph - RGW
4. task/test_nfs: ERROR: Daemon not found: mds.a.smithi060.ujwxef. See `cephadm ls` - Ceph - CephFS
5. cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal .custom-control-label` when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard
6. [ERR] : Unhandled exception from module 'devicehealth' while running on mgr.gibba002.nzpbzu: disk I/O error - Ceph - cephsqlite
https://trello.com/c/MaWPkMXi/1544-wip-yuri7-testing-2022-06-02-1633¶
Failures, unrelated:
1. https://tracker.ceph.com/issues/55741
2. https://tracker.ceph.com/issues/52321
3. https://tracker.ceph.com/issues/55808
4. https://tracker.ceph.com/issues/55853 --> opened a new Tracker for this; unrelated to the PR tested in this run.
5. https://tracker.ceph.com/issues/55854 --> opened a new Tracker for this; unrelated to the PR tested in this run.
6. https://tracker.ceph.com/issues/55856 --> opened a new Tracker for this; unrelated to the PR tested in this run.
Details:
1. cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal .custom-control-label` when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard
2. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
3. task/test_nfs: KeyError: 'events' - Ceph - CephFS
4. test_cls_rgw.sh: failures in 'cls_rgw.index_list' and 'cls_rgw.index_list_delimited' - Ceph - RGW
5. Datetime AssertionError in test_health_history (tasks.mgr.test_insights.TestInsights) - Ceph - Mgr
6. ObjectStore/StoreTest.CompressionTest/2 fails when a collection expects an object not to exist, but it does - Ceph - BlueStore
https://trello.com/c/BYYdvJNP/1536-wip-yuri-testing-2022-05-27-0934¶
http://pulpito.front.sepia.ceph.com/yuriw-2022-05-27_21:59:17-rados-wip-yuri-testing-2022-05-27-0934-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-05-28_13:38:20-rados-wip-yuri-testing-2022-05-27-0934-distro-default-smithi/
Failures, unrelated:
1. https://tracker.ceph.com/issues/51904
2. https://tracker.ceph.com/issues/51835
3. https://tracker.ceph.com/issues/52321
4. https://tracker.ceph.com/issues/55741
5. https://tracker.ceph.com/issues/52124
6. https://tracker.ceph.com/issues/55142
7. https://tracker.ceph.com/issues/55808 -- opened a new Tracker for this issue; it is unrelated to the PRs that were tested.
8. https://tracker.ceph.com/issues/55809 -- opened a new Tracker for this; it is unrelated to the PRs that were tested.
Details:
1. AssertionError: wait_for_clean: failed before timeout expired due to down PGs - Ceph - RADOS
2. mgr/DaemonServer.cc: FAILED ceph_assert(pending_service_map.epoch > service_map.epoch) - Ceph - Mgr
3. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
4. cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal .custom-control-label` when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard
5. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
6. [ERR] : Unhandled exception from module 'devicehealth' while running on mgr.gibba002.nzpbzu: disk I/O error - Ceph - cephsqlite
7. task/test_nfs: KeyError: 'events' - Ceph - CephFS
8. "Leak_IndirectlyLost" valgrind report on mon.c - Ceph - RADOS
https://trello.com/c/JWN6xaC5/1534-wip-yuri7-testing-2022-05-18-1636¶
http://pulpito.front.sepia.ceph.com/yuriw-2022-05-19_01:43:57-rados-wip-yuri7-testing-2022-05-18-1636-distro-default-smithi/
Failures, unrelated:
1. https://tracker.ceph.com/issues/52124
2. https://tracker.ceph.com/issues/55741
3. https://tracker.ceph.com/issues/51835
4. https://tracker.ceph.com/issues/52321
Details:
1. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
2. cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal .custom-control-label` when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard
3. mgr/DaemonServer.cc: FAILED ceph_assert(pending_service_map.epoch > service_map.epoch) - Ceph - RADOS
4. qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
https://trello.com/c/NXVtDT7z/1505-wip-yuri2-testing-2022-04-22-0500-old-yuri2-testing-2022-04-18-1150¶
http://pulpito.front.sepia.ceph.com/yuriw-2022-04-22_13:56:48-rados-wip-yuri2-testing-2022-04-22-0500-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-04-23_16:21:59-rados-wip-yuri2-testing-2022-04-22-0500-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/lflores-2022-04-25_16:23:25-rados-wip-yuri2-testing-2022-04-22-0500-distro-default-smithi/
Failures, unrelated:
1. https://tracker.ceph.com/issues/55419
2. https://tracker.ceph.com/issues/55429
3. https://tracker.ceph.com/issues/54458
Details:
1. cephtool/test.sh: failure on blocklist testing - Ceph - RADOS
2. mgr/dashboard: AttributeError: 'NoneType' object has no attribute 'group' - Ceph - Mgr - Dashboard
3. osd-scrub-snaps.sh: TEST_scrub_snaps failed due to malformed log message - Ceph - RADOS
https://trello.com/c/s7NuYSTa/1509-wip-yuri2-testing-2022-04-13-0703¶
Failures, unrelated:
1. https://tracker.ceph.com/issues/53789
2. https://tracker.ceph.com/issues/55322
3. https://tracker.ceph.com/issues/55323
Details:
1. CommandFailedError (rados/test_python.sh): "RADOS object not found" causes test_rados.TestWatchNotify.test_aio_notify to fail - Ceph - RADOS
2. test-restful.sh: mon metadata unable to be retrieved - Ceph - RADOS
3. cephadm/test_dashboard_e2e.sh: cypress "500: Internal Server Error" caused by missing password - Ceph - Mgr - Dashboard
https://trello.com/c/1yaPNXSG/1507-wip-yuri7-testing-2022-04-11-1139¶
http://pulpito.front.sepia.ceph.com/yuriw-2022-04-11_23:40:29-rados-wip-yuri7-testing-2022-04-11-1139-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-04-12_15:25:49-rados-wip-yuri7-testing-2022-04-11-1139-distro-default-smithi/
Failures, unrelated:
1. https://tracker.ceph.com/issues/55295
2. https://tracker.ceph.com/issues/54372
Details:
1. Dead job caused by "AttributeError: 'NoneType' object has no attribute '_fields'" on smithi055 - Infrastructure - Sepia
2. No module named 'tasks' - Infrastructure
https://trello.com/c/nJwB8bHf/1497-wip-yuri3-testing-2022-04-01-0659-old-wip-yuri3-testing-2022-03-31-1158¶
http://pulpito.front.sepia.ceph.com/yuriw-2022-04-01_17:44:32-rados-wip-yuri3-testing-2022-04-01-0659-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-04-02_01:57:28-rados-wip-yuri3-testing-2022-04-01-0659-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-04-02_14:56:39-rados-wip-yuri3-testing-2022-04-01-0659-distro-default-smithi/
Failures, unrelated:
1. https://tracker.ceph.com/issues/53422
2. https://tracker.ceph.com/issues/47838
3. https://tracker.ceph.com/issues/47025
4. https://tracker.ceph.com/issues/51076
5. https://tracker.ceph.com/issues/55178
Details:
1. tasks.cephfs.test_nfs.TestNFS.test_export_create_with_non_existing_fsname: AssertionError: NFS Ganesha cluster deployment failed - Ceph - Orchestrator
2. mon/test_mon_osdmap_prune.sh: first_pinned != trim_to - Ceph - RADOS
3. rados/test.sh: api_watch_notify_pp LibRadosWatchNotifyECPP.WatchNotify failed - Ceph - RADOS
4. "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend. - Ceph - RADOS
5. osd-scrub-test.sh: TEST_scrub_extended_sleep times out - Ceph - RADOS
https://trello.com/c/QxTQADSe/1487-wip-yuri-testing-2022-03-24-0726-old-wip-yuri-testing-2022-03-23-1337¶
Failures, unrelated:
https://tracker.ceph.com/issues/54990
https://tracker.ceph.com/issues/52124
https://tracker.ceph.com/issues/51904
https://trello.com/c/p6Ew1Pq4/1481-wip-yuri7-testing-2022-03-21-1529¶
Failures, unrelated:
https://tracker.ceph.com/issues/53680
https://tracker.ceph.com/issues/52320
https://tracker.ceph.com/issues/52657
https://trello.com/c/v331Ll3Y/1478-wip-yuri6-testing-2022-03-18-1104-old-wip-yuri6-testing-2022-03-17-1547¶
Failures, unrelated:
https://tracker.ceph.com/issues/54990
https://tracker.ceph.com/issues/54329
https://tracker.ceph.com/issues/53680
https://tracker.ceph.com/issues/49888
https://tracker.ceph.com/issues/52124
https://tracker.ceph.com/issues/55001
https://tracker.ceph.com/issues/52320
https://tracker.ceph.com/issues/55009
https://trello.com/c/hrDifkIO/1471-wip-yuri3-testing-2022-03-09-1350¶
Failures, unrelated:
1. https://tracker.ceph.com/issues/54529
2. https://tracker.ceph.com/issues/54307
3. https://tracker.ceph.com/issues/51076
4. https://tracker.ceph.com/issues/53680
Details:
1. mon/mon-bind.sh: Failure due to cores found
2. test_cls_rgw.sh: 'index_list_delimited' test times out
3. "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend. - Ceph - RADOS
4. ERROR:tasks.rook:'waiting for service removal' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
https://trello.com/c/6g22dJPJ/1469-wip-yuri5-testing-2022-03-07-0958¶
Failures, unrelated:
https://tracker.ceph.com/issues/48873
https://tracker.ceph.com/issues/53680
https://trello.com/c/CcFET7cb/1470-wip-yuri-testing-2022-03-07-0958¶
Failures, unrelated:
1. https://tracker.ceph.com/issues/50280
2. https://tracker.ceph.com/issues/53680
3. https://tracker.ceph.com/issues/51076
Details:
1. cephadm: RuntimeError: uid/gid not found - Ceph
2. ERROR:tasks.rook:'waiting for service removal' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
3. "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend. - Ceph - RADOS
https://trello.com/c/IclLwlHA/1467-wip-yuri4-testing-2022-03-01-1206¶
http://pulpito.front.sepia.ceph.com/yuriw-2022-03-01_22:42:19-rados-wip-yuri4-testing-2022-03-01-1206-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-03-02_15:47:04-rados-wip-yuri4-testing-2022-03-01-1206-distro-default-smithi/
Failures, unrelated:
1. https://tracker.ceph.com/issues/52124
2. https://tracker.ceph.com/issues/52320
3. https://tracker.ceph.com/issues/53680
Details:
1. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
2. unable to get monitor info from DNS SRV with service name: ceph-mon - Ceph - Orchestrator
3. ERROR:tasks.rook:'waiting for service removal' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
https://trello.com/c/81yzd6MX/1434-wip-yuri6-testing-2022-02-14-1456-old-wip-yuri6-testing-2022-01-26-1547¶
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri6-testing-2022-02-14-1456
Failures, unrelated:
1. https://tracker.ceph.com/issues/52124
2. https://tracker.ceph.com/issues/51076
3. https://tracker.ceph.com/issues/54438
Details:
1. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
2. "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend. - Ceph - RADOS
3. test/objectstore/store_test.cc: FAILED ceph_assert(bl_eq(state->contents[noid].data, r2)) in function 'virtual void SyntheticWorkloadState::C_SyntheticOnClone::finish(int)' - Ceph - RADOS
https://trello.com/c/9GAwJxub/1450-wip-yuri4-testing-2022-02-18-0800-old-wip-yuri4-testing-2022-02-14-1512¶
http://pulpito.front.sepia.ceph.com/?branch=wip-yuri4-testing-2022-02-18-0800
Failures, unrelated:
1. https://tracker.ceph.com/issues/45721
2. https://tracker.ceph.com/issues/53422
3. https://tracker.ceph.com/issues/51627
4. https://tracker.ceph.com/issues/53680
5. https://tracker.ceph.com/issues/52320
6. https://tracker.ceph.com/issues/52124
Details:
1. CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_rados.TestWatchNotify.test - Ceph - RADOS
2. tasks.cephfs.test_nfs.TestNFS.test_export_create_with_non_existing_fsname: AssertionError: NFS Ganesha cluster deployment failed - Ceph - Orchestrator
3. FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get_missing().is_missing(soid) || (it_objects != recovery_state.get_pg_log().get_log().objects.end() && it_objects->second->op == pg_log_entry_t::LOST_REVERT)) - Ceph - RADOS
4. ERROR:tasks.rook:'waiting for service removal' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
5. unable to get monitor info from DNS SRV with service name: ceph-mon - Ceph - Orchestrator
6. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
https://trello.com/c/ba4bDdJQ/1457-wip-yuri3-testing-2022-02-17-1256¶
http://pulpito.front.sepia.ceph.com/yuriw-2022-02-17_22:49:55-rados-wip-yuri3-testing-2022-02-17-1256-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-02-21_20:37:48-rados-wip-yuri3-testing-2022-02-17-1256-distro-default-smithi/
Failures, unrelated:
https://tracker.ceph.com/issues/49287
https://tracker.ceph.com/issues/54086
https://tracker.ceph.com/issues/51076
https://tracker.ceph.com/issues/54360
Details:
Bug_#49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
Bug_#54086: Permission denied when trying to unlink and open /var/log/ntpstats/... - Tools - Teuthology
Bug_#51076: "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend. - Ceph - RADOS
Bug_#54360: Dead job at "Finished running handlers" in rados/cephadm/osds/.../rm-zap-wait - Ceph
https://trello.com/c/qSYwEWdA/1453-wip-yuri11-testing-2022-02-15-1643¶
Failures, unrelated:
https://tracker.ceph.com/issues/54307
https://tracker.ceph.com/issues/54306
https://tracker.ceph.com/issues/52124
Details:
test_cls_rgw.sh: 'index_list_delimited' test times out - Ceph - RGW
tasks.cephfs.test_nfs.TestNFS.test_create_multiple_exports: AssertionError: NFS Ganesha cluster deployment failed - Ceph - Orchestrator
Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
https://trello.com/c/ubP4w0OV/1438-wip-yuri5-testing-2022-02-09-1322-pacific-old-wip-yuri5-testing-2022-02-08-0733-pacific-old-wip-yuri5-testing-2022-02-02-0936-pa¶
http://pulpito.front.sepia.ceph.com/yuriw-2022-02-09_22:52:18-rados-wip-yuri5-testing-2022-02-09-1322-pacific-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-02-08_17:00:23-rados-wip-yuri5-testing-2022-02-08-0733-pacific-distro-default-smithi/
Failures, unrelated:
https://tracker.ceph.com/issues/53501
https://tracker.ceph.com/issues/51234
https://tracker.ceph.com/issues/52124
https://tracker.ceph.com/issues/48997
https://tracker.ceph.com/issues/45702
https://tracker.ceph.com/issues/50222
https://tracker.ceph.com/issues/51904
Details:
Bug_#53501: Exception when running 'rook' task. - Ceph - Orchestrator
Bug_#51234: LibRadosService.StatusFormat failed, Expected: (0) != (retry), actual: 0 vs 0 - Ceph - RADOS
Bug_#52124: Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
Bug_#48997: rados/singleton/all/recovery-preemption: defer backfill|defer recovery not found in logs - Ceph - RADOS
Bug_#45702: PGLog::read_log_and_missing: ceph_assert(miter == missing.get_items().end() || (miter->second.need == i->version && miter->second.have == eversion_t())) - Ceph - RADOS
Bug_#50222: osd: 5.2s0 deep-scrub : stat mismatch - Ceph - RADOS
Bug_#51904: AssertionError: wait_for_clean: failed before timeout expired due to down PGs - Ceph - RADOS
https://trello.com/c/djEk6FIL/1441-wip-yuri2-testing-2022-02-04-1646-pacific-old-wip-yuri2-testing-2022-02-04-1646-pacific-old-wip-yuri2-testing-2022-02-04-1559-pa¶
Failures:
https://tracker.ceph.com/issues/54086
https://tracker.ceph.com/issues/54071
https://tracker.ceph.com/issues/53501
https://tracker.ceph.com/issues/51904
https://tracker.ceph.com/issues/54210
https://tracker.ceph.com/issues/54211
https://tracker.ceph.com/issues/54212
Details:
Bug_#54086: pacific: tasks/dashboard: Permission denied when trying to unlink and open /var/log/ntpstats/... - Tools - Teuthology
Bug_#54071: rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
Bug_#53501: Exception when running 'rook' task. - Ceph - Orchestrator
Bug_#51904: AssertionError: wait_for_clean: failed before timeout expired due to down PGs - Ceph - RADOS
Bug_#54210: mon/pg_autoscaler.sh: echo failed on "bash -c 'ceph osd pool get a pg_num | grep 256'" - Ceph - RADOS
Bug_#54211: pacific: test_devicehealth failure due to RADOS object not found (error opening pool 'device_health_metrics') - Ceph - Mgr
Bug_#54212: pacific: test_pool_configuration fails due to "AssertionError: 400 != 200" - Ceph - Mgr
yuriw-2022-01-27_15:09:25-rados-wip-yuri6-testing-2022-01-26-1547-distro-default-smithi¶
https://trello.com/c/81yzd6MX/1434-wip-yuri6-testing-2022-01-26-1547
http://pulpito.front.sepia.ceph.com/yuriw-2022-01-27_15:09:25-rados-wip-yuri6-testing-2022-01-26-1547-distro-default-smithi/
Failures:
https://tracker.ceph.com/issues/53767
https://tracker.ceph.com/issues/50192
Details:
Bug_#53767: qa/workunits/cls/test_cls_2pc_queue.sh: killing an osd during thrashing causes timeout - Ceph - RADOS
Bug_#50192: FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get_missing().is_missing(soid) || (it_objects != recovery_state.get_pg_log().get_log().objects.end() && it_objects->second->op == pg_log_entry_t::LOST_REVERT)) - Ceph - RADOS
yuriw-2022-01-27_14:57:16-rados-wip-yuri-testing-2022-01-26-1810-pacific-distro-default-smithi¶
http://pulpito.front.sepia.ceph.com/yuriw-2022-01-27_14:57:16-rados-wip-yuri-testing-2022-01-26-1810-pacific-distro-default-smithi/
https://trello.com/c/qoIF7T3R/1416-wip-yuri-testing-2022-01-26-1810-pacific-old-wip-yuri-testing-2022-01-07-0928-pacific
Failures, unrelated:
https://tracker.ceph.com/issues/54071
https://tracker.ceph.com/issues/53501
https://tracker.ceph.com/issues/50280
https://tracker.ceph.com/issues/45318
https://tracker.ceph.com/issues/54086
https://tracker.ceph.com/issues/51076
Details:
Bug_#54071: rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
Bug_#53501: Exception when running 'rook' task. - Ceph - Orchestrator
Bug_#50280: cephadm: RuntimeError: uid/gid not found - Ceph
Bug_#45318: octopus: Health check failed: 2/6 mons down, quorum b,a,c,e (MON_DOWN)" in cluster log running tasks/mon_clock_no_skews.yaml - Ceph - RADOS
Bug_#54086: pacific: tasks/dashboard: Permission denied when trying to unlink and open /var/log/ntpstats/... - Tools - Teuthology
Bug_#51076: "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend. - Ceph - RADOS
yuriw-2022-01-24_17:43:02-rados-wip-yuri2-testing-2022-01-21-0949-pacific-distro-default-smithi¶
Failures:
https://tracker.ceph.com/issues/53857
https://tracker.ceph.com/issues/53501
https://tracker.ceph.com/issues/54071
Details:
Bug_#53857: qa: fs:upgrade test fails mds count check - Ceph - CephFS
Bug_#53501: Exception when running 'rook' task. - Ceph - Orchestrator
Bug_#54071: rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>) - Ceph - Orchestrator
yuriw-2022-01-24_18:01:47-rados-wip-yuri10-testing-2022-01-24-0810-octopus-distro-default-smithi¶
Failures, unrelated:
https://tracker.ceph.com/issues/50280
https://tracker.ceph.com/issues/45318
Details:
Bug_#50280: cephadm: RuntimeError: uid/gid not found - Ceph
Bug_#45318: octopus: Health check failed: 2/6 mons down, quorum b,a,c,e (MON_DOWN)" in cluster log running tasks/mon_clock_no_skews.yaml - Ceph - RADOS
yuriw-2022-01-21_15:22:24-rados-wip-yuri7-testing-2022-01-20-1609-distro-default-smithi¶
Failures, unrelated:
https://tracker.ceph.com/issues/53843
https://tracker.ceph.com/issues/53827
https://tracker.ceph.com/issues/49287
https://tracker.ceph.com/issues/53807
Details:
Bug_#53843: mgr/dashboard: Error - yargs parser supports a minimum Node.js version of 12. - Ceph - Mgr - Dashboard
Bug_#53827: cephadm exited with error code when creating osd: Input/Output error. Faulty NVME? - Infrastructure - Sepia
Bug_#49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
Bug_#53807: Dead jobs in rados/cephadm/smoke-roleless{...} - Ceph - Orchestrator
yuriw-2022-01-15_05:47:18-rados-wip-yuri8-testing-2022-01-14-1551-distro-default-smithi¶
http://pulpito.front.sepia.ceph.com/yuriw-2022-01-15_05:47:18-rados-wip-yuri8-testing-2022-01-14-1551-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-01-17_17:14:22-rados-wip-yuri8-testing-2022-01-14-1551-distro-default-smithi/
Failures, unrelated:
https://tracker.ceph.com/issues/45721
https://tracker.ceph.com/issues/50280
https://tracker.ceph.com/issues/53827
https://tracker.ceph.com/issues/51076
https://tracker.ceph.com/issues/53807
https://tracker.ceph.com/issues/53842
Details:
Bug_#45721: CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_rados.TestWatchNotify.test - Ceph - RADOS
Bug_#50280: cephadm: RuntimeError: uid/gid not found - Ceph
Bug_#53827: cephadm exited with error code when creating osd. - Ceph - Orchestrator
Bug_#51076: "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
Bug_#53807: Dead jobs in rados/cephadm/smoke-roleless{...} - Ceph - Orchestrator
Bug_#53842: cephadm/mds_upgrade_sequence: KeyError: 'en***'
yuriw-2022-01-17_17:05:17-rados-wip-yuri6-testing-2022-01-14-1207-distro-default-smithi¶
http://pulpito.front.sepia.ceph.com/yuriw-2022-01-14_23:22:09-rados-wip-yuri6-testing-2022-01-14-1207-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-01-17_17:05:17-rados-wip-yuri6-testing-2022-01-14-1207-distro-default-smithi/
Failures, unrelated:
https://tracker.ceph.com/issues/53843
https://tracker.ceph.com/issues/53872
https://tracker.ceph.com/issues/45721
https://tracker.ceph.com/issues/53807
Details:
Bug_#53843: mgr/dashboard: Error - yargs parser supports a minimum Node.js version of 12. - Ceph - Mgr - Dashboard
Bug_#53872: Errors detected in generated GRUB config file
Bug_#45721: CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_rados.TestWatchNotify.test
Bug_#53807: Dead jobs in rados/cephadm/smoke-roleless{...} - Ceph - Orchestrator
yuriw-2022-01-13_14:57:55-rados-wip-yuri5-testing-2022-01-12-1534-distro-default-smithi¶
Failures:
https://tracker.ceph.com/issues/45721
https://tracker.ceph.com/issues/53843
https://tracker.ceph.com/issues/49483
https://tracker.ceph.com/issues/50280
https://tracker.ceph.com/issues/53807
https://tracker.ceph.com/issues/51904
https://tracker.ceph.com/issues/53680
Details:
Bug_#45721: CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_rados.TestWatchNotify.test - Ceph - RADOS
Bug_#53843: mgr/dashboard: Error - yargs parser supports a minimum Node.js version of 12. - Ceph - Mgr - Dashboard
Bug_#49483: CommandFailedError: Command failed on smithi104 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/... - Ceph - Orchestrator
Bug_#50280: cephadm: RuntimeError: uid/gid not found - Ceph
Bug_#53807: Dead jobs in rados/cephadm/smoke-roleless{...} - Ceph - Orchestrator
Bug_#51904: AssertionError: wait_for_clean: failed before timeout expired due to down PGs - Ceph - RADOS
Bug_#53680: ERROR:tasks.rook:'waiting for service removal' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
yuriw-2022-01-12_21:37:22-rados-wip-yuri6-testing-2022-01-12-1131-distro-default-smithi¶
Failures, unrelated:
https://tracker.ceph.com/issues/53843
https://tracker.ceph.com/issues/51904
https://tracker.ceph.com/issues/53807
https://tracker.ceph.com/issues/53767
https://tracker.ceph.com/issues/51307
Details:
Bug_#53843: mgr/dashboard: Error - yargs parser supports a minimum Node.js version of 12. - Ceph - Mgr - Dashboard
Bug_#51904: AssertionError: wait_for_clean: failed before timeout expired due to down PGs - Ceph - RADOS
Bug_#53807: Dead jobs in rados/cephadm/smoke-roleless{...} - Ceph - Orchestrator
Bug_#53767: qa/workunits/cls/test_cls_2pc_queue.sh: killing an osd during thrashing causes timeout - Ceph - RADOS
Bug_#51307: LibRadosWatchNotify.Watch2Delete fails - Ceph - RADOS
yuriw-2022-01-11_19:17:55-rados-wip-yuri5-testing-2022-01-11-0843-distro-default-smithi¶
Failures:
https://tracker.ceph.com/issues/53843
https://tracker.ceph.com/issues/52124
https://tracker.ceph.com/issues/53827
https://tracker.ceph.com/issues/53855
https://tracker.ceph.com/issues/53424
https://tracker.ceph.com/issues/53807
https://tracker.ceph.com/issues/51076
Details:
Bug_#53843: mgr/dashboard: Error - yargs parser supports a minimum Node.js version of 12. - Ceph - Mgr - Dashboard
Bug_#52124: Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
Bug_#53827: cephadm exited with error code when creating osd. - Ceph - Orchestrator
Bug_#53855: rados/test.sh hangs while running LibRadosTwoPoolsPP.ManifestFlushDupCount - Ceph - RADOS
Bug_#53424: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/ - Ceph - Orchestrator
Bug_#53807: Dead jobs in rados/cephadm/smoke-roleless{...} - Ceph - Orchestrator
Bug_#51076: "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend. - Ceph - RADOS
yuriw-2022-01-06_15:50:38-rados-wip-yuri8-testing-2022-01-05-1411-distro-default-smithi¶
Failures, unrelated:
https://tracker.ceph.com/issues/53789
https://tracker.ceph.com/issues/53422
https://tracker.ceph.com/issues/50192
https://tracker.ceph.com/issues/53807
https://tracker.ceph.com/issues/53424
Details:
Bug_#53789: CommandFailedError (rados/test_python.sh): "RADOS object not found" causes test_rados.TestWatchNotify.test_aio_notify to fail - Ceph - RADOS
Bug_#53422: tasks.cephfs.test_nfs.TestNFS.test_export_create_with_non_existing_fsname: AssertionError: NFS Ganesha cluster deployment failed - Ceph - Orchestrator
Bug_#50192: FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get_missing().is_missing(soid) || (it_objects != recovery_state.get_pg_log().get_log().objects.end() && it_objects->second->op == pg_log_entry_t::LOST_REVERT)) - Ceph - RADOS
Bug_#53807: Hidden ansible output and offline filesystem failures lead to dead jobs - Ceph - CephFS
Bug_#53424: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/ - Ceph - Orchestrator
lflores-2022-01-05_19:04:35-rados-wip-lflores-mgr-rocksdb-distro-default-smithi¶
Failures, unrelated:
https://tracker.ceph.com/issues/53781
https://tracker.ceph.com/issues/53499
https://tracker.ceph.com/issues/49287
https://tracker.ceph.com/issues/53789
https://tracker.ceph.com/issues/53424
https://tracker.ceph.com/issues/49483
https://tracker.ceph.com/issues/53842
Details:
Bug_#53781: cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal` when testing on orchestrator/03-inventory.e2e-spec.ts - Ceph - Mgr - Dashboard
Bug_#53499: test_dashboard_e2e.sh Failure: orchestrator/02-hosts-inventory.e2e failed. - Ceph - Mgr - Dashboard
Bug_#49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
Bug_#53789: CommandFailedError (rados/test_python.sh): "RADOS object not found" causes test_rados.TestWatchNotify.test_aio_notify to fail - Ceph - RADOS
Bug_#53424: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/ - Ceph - Orchestrator
Bug_#49483: CommandFailedError: Command failed on smithi104 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/... - Ceph - Orchestrator
Bug_#53842: cephadm/mds_upgrade_sequence: KeyError: 'en***' - Ceph - Orchestrator
yuriw-2022-01-04_21:52:15-rados-wip-yuri7-testing-2022-01-04-1159-distro-default-smithi¶
Failures, unrelated:
https://tracker.ceph.com/issues/53723
https://tracker.ceph.com/issues/38357
https://tracker.ceph.com/issues/53294
https://tracker.ceph.com/issues/53424
https://tracker.ceph.com/issues/53680
https://tracker.ceph.com/issues/53782
https://tracker.ceph.com/issues/53781
Details:
Bug_#53723: Cephadm agent fails to report and causes a health timeout - Ceph - Orchestrator
Bug_#38357: ClsLock.TestExclusiveEphemeralStealEphemeral failed - Ceph - RADOS
Bug_#53294: rados/test.sh hangs while running LibRadosTwoPoolsPP.TierFlushDuringFlush - Ceph - RADOS
Bug_#53424: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/ - Ceph - Orchestrator
Bug_#53680: ERROR:tasks.rook:'waiting for service removal' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
Bug_#53782: site-packages/paramiko/transport.py: Invalid packet blocking causes unexpected end of data - Infrastructure
Bug_#53781: cephadm/test_dashboard_e2e.sh: Unable to find element `cd-modal` when testing on orchestrator/03-inventory.e2e-spec.ts - Ceph - Mgr - Dashboard
yuriw-2021-12-23_16:50:03-rados-wip-yuri6-testing-2021-12-22-1410-distro-default-smithi¶
Failures related to #43865:
6582615 -- Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/archive/ostest && cd $TESTDIR/archive/ostest && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-bluestore 20" ceph_test_objectstore --gtest_filter=*/2:*SyntheticMatrixC* --gtest_catch_exceptions=0\''
Failures, unrelated:
https://tracker.ceph.com/issues/53499
https://tracker.ceph.com/issues/52124
https://tracker.ceph.com/issues/52652
https://tracker.ceph.com/issues/53422
https://tracker.ceph.com/issues/51945
https://tracker.ceph.com/issues/53424
https://tracker.ceph.com/issues/53394
https://tracker.ceph.com/issues/53766
https://tracker.ceph.com/issues/53767
Details:
Bug_#53499: test_dashboard_e2e.sh Failure: orchestrator/02-hosts-inventory.e2e failed. - Ceph - Mgr - Dashboard
Bug_#52124: Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
Bug_#52652: ERROR: test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest) - Ceph - Mgr
Bug_#53422: tasks.cephfs.test_nfs.TestNFS.test_export_create_with_non_existing_fsname: AssertionError: NFS Ganesha cluster deployment failed - Ceph - Orchestrator
Bug_#51945: qa/workunits/mon/caps.sh: Error: Expected return 13, got 0 - Ceph - RADOS
Bug_#53424: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/ - Ceph - Orchestrator
Bug_#53394: cephadm: can infer config from mon from different cluster causing file not found error - Ceph - Orchestrator
Bug_#53766: ceph orch ls: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator
Bug_#53767: qa/workunits/cls/test_cls_2pc_queue.sh: killing an osd during thrashing causes timeout - Ceph - RADOS
yuriw-2021-12-22_22:11:35-rados-wip-yuri3-testing-2021-12-22-1047-distro-default-smithi¶
6580187, 6580436 -- https://tracker.ceph.com/issues/52124
Command failed (workunit test rados/test.sh) on smithi037 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1121b3c9661a85cfbc852d654ea7d22c1d1be751 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
6580226, 6580440 -- https://tracker.ceph.com/issues/38455
Test failure: test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest)
6580242 -- https://tracker.ceph.com/issues/53499
Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi016 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1121b3c9661a85cfbc852d654ea7d22c1d1be751 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
6580330 -- https://tracker.ceph.com/issues/53681
Command failed on smithi185 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:1121b3c9661a85cfbc852d654ea7d22c1d1be751 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2c10ca7c-63a8-11ec-8c31-001a4aab830c -- ceph mon dump -f json'
6580439 -- https://tracker.ceph.com/issues/53723
timeout expired in wait_until_healthy
6580078, 6580296 -- https://tracker.ceph.com/issues/53424
hit max job timeout
6580192
hit max job timeout
Failures, unrelated:
6580187, 6580436 -- https://tracker.ceph.com/issues/52124
6580226, 6580440 -- https://tracker.ceph.com/issues/38455
6580242 -- https://tracker.ceph.com/issues/53499
6580330 -- https://tracker.ceph.com/issues/53681
6580439 -- https://tracker.ceph.com/issues/53723
6580078, 6580296 -- https://tracker.ceph.com/issues/53424
6580192 -- https://tracker.ceph.com/issues/51076
Details:
Bug_#52124: Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
Bug_#38455: Test failure: test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest): RuntimeError: Synthetic exception in serve - Ceph - Mgr
Bug_#53499: test_dashboard_e2e.sh Failure: orchestrator/02-hosts-inventory.e2e failed. - Ceph - Mgr - Dashboard
Bug_#53681: Failed to extract uid/gid for path /var/lib/ceph - Ceph - Orchestrator
Bug_#53723: Cephadm agent fails to report and causes a health timeout - Ceph - Orchestrator
Bug_#53424: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/ - Ceph - Orchestrator
Bug_#51076: "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend. - Ceph - RADOS
yuriw-2021-12-21_15:47:03-rados-wip-yuri10-testing-2021-12-17-1119-distro-default-smithi¶
Failures, unrelated:
6576068 -- https://tracker.ceph.com/issues/53499
6576071 -- https://tracker.ceph.com/issues/53615 -- timeout after healthy
Details:
Bug_#53499: test_dashboard_e2e.sh Failure: orchestrator/02-hosts-inventory.e2e failed. - Ceph - Mgr - Dashboard
Bug_#53448: cephadm: agent failures double reported by two health checks - Ceph - Orchestrator
yuriw-2021-12-17_22:45:37-rados-wip-yuri10-testing-2021-12-17-1119-distro-default-smithi¶
YES
6569383 -- ceph_objectstore_tool test
"2021-12-18T01:25:23.848389+0000 osd.5 (osd.5) 1 : cluster [ERR] map e73 had wrong heartbeat front addr ([v2:0.0.0.0:6844/122637,v1:0.0.0.0:6845/122637] != my [v2:172.21.15.2:6844/122637,v1:172.21.15.2:6845/122637])" in cluster log
YES
6569399 -- https://tracker.ceph.com/issues/53681
Failed to extract uid/gid
2021-12-18T01:38:21.360 INFO:teuthology.orchestra.run.smithi049.stderr:ERROR: Failed to extract uid/gid for path /var/lib/ceph: Failed command: /bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint stat --init -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph:91fdab49fed87aa0a3dbbceccc27e84ab4f80130 -e NODE_NAME=smithi049 -e CEPH_USE_RANDOM_NONCE=1 quay.ceph.io/ceph-ci/ceph:91fdab49fed87aa0a3dbbceccc27e84ab4f80130 -c %u %g /var/lib/ceph: Error: OCI runtime error: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: process_linux.go:508: setting cgroup config for procHooks process caused: Unit libpod-2b9797e9757bd79dbc4b77f0751f4bf7a30b0618828534759fcebba7819e72f7.scope not found.
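For context on this class of failure: cephadm infers the ceph uid/gid by running `stat -c %u %g /var/lib/ceph` inside the container (the podman command visible in the log above) and parsing its stdout. A minimal sketch of that step, with illustrative function names rather than cephadm's actual code:

```python
import subprocess

def parse_stat_output(out):
    # Parse "uid gid" as printed by `stat -c '%u %g' <path>`.
    # Split out from the container call so it can be exercised directly.
    uid, gid = map(int, out.split())
    return uid, gid

def extract_uid_gid(image, path="/var/lib/ceph"):
    """Run stat inside the container image and parse the owner uid/gid.
    A failure here is what surfaces in teuthology logs as
    'ERROR: Failed to extract uid/gid for path /var/lib/ceph'."""
    cmd = ["podman", "run", "--rm", "--entrypoint", "stat",
           image, "-c", "%u %g", path]
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode != 0:
        raise RuntimeError(
            f"Failed to extract uid/gid for path {path}: {proc.stderr.strip()}")
    return parse_stat_output(proc.stdout)
```

In the run above the stat container itself failed to start (the libpod scope error), so the parse step was never reached.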
YES
6569450 -- https://tracker.ceph.com/issues/53499
Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi017 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=91fdab49fed87aa0a3dbbceccc27e84ab4f80130 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
YES
6569647 -- https://tracker.ceph.com/issues/53615
2021-12-18T04:14:40.526 INFO:teuthology.orchestra.run.smithi190.stdout:{"status":"HEALTH_WARN","checks":{"CEPHADM_AGENT_DOWN":{"severity":"HEALTH_WARN","summary":{"message":"1 Cephadm Agent(s) are not reporting. Hosts may be offline","count":1},"muted":false},"CEPHADM_FAILED_DAEMON":{"severity":"HEALTH_WARN","summary":{"message":"1 failed cephadm daemon(s)","count":1},"muted":false}},"mutes":[]}
2021-12-18T04:14:40.929 INFO:journalctl@ceph.mon.a.smithi190.stdout:Dec 18 04:14:40 smithi190 bash14624: cluster 2021-12-18T04:14:39.122970+0000 mgr.a (mgr.14152) 343 : cluster [DBG] pgmap v329: 1 pgs: 1 active+clean; 577 KiB data, 18 MiB used, 268 GiB / 268 GiB avail
2021-12-18T04:14:40.930 INFO:journalctl@ceph.mon.a.smithi190.stdout:Dec 18 04:14:40 smithi190 bash14624: audit 2021-12-18T04:14:40.524789+0000 mon.a (mon.0) 349 : audit [DBG] from='client.? 172.21.15.190:0/570196741' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch
2021-12-18T04:14:41.209 INFO:tasks.cephadm:Teardown begin
2021-12-18T04:14:41.209 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/teuthology/contextutil.py", line 33, in nested
yield vars
File "/home/teuthworker/src/github.com_ceph_ceph-c_91fdab49fed87aa0a3dbbceccc27e84ab4f80130/qa/tasks/cephadm.py", line 1548, in task
healthy(ctx=ctx, config=config)
File "/home/teuthworker/src/github.com_ceph_ceph-c_91fdab49fed87aa0a3dbbceccc27e84ab4f80130/qa/tasks/ceph.py", line 1469, in healthy
manager.wait_until_healthy(timeout=300)
File "/home/teuthworker/src/github.com_ceph_ceph-c_91fdab49fed87aa0a3dbbceccc27e84ab4f80130/qa/tasks/ceph_manager.py", line 3146, in wait_until_healthy
'timeout expired in wait_until_healthy'
AssertionError: timeout expired in wait_until_healthy
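The traceback above ends in ceph_manager's health poll. The pattern is a simple poll-until-HEALTH_OK loop with a 300 s deadline (per `manager.wait_until_healthy(timeout=300)`); a rough sketch, with injected clock/sleep so it can be tested without a cluster (`get_health` stands in for `ceph health --format json`, not teuthology's actual API):

```python
import time

def wait_until_healthy(get_health, timeout=300, interval=1,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_health() until it reports HEALTH_OK or the deadline passes.
    Raises the same AssertionError message seen in the traceback."""
    start = clock()
    while True:
        if get_health().get("status") == "HEALTH_OK":
            return
        if clock() - start >= timeout:
            raise AssertionError("timeout expired in wait_until_healthy")
        sleep(interval)
```

In the failed job the cluster stayed in HEALTH_WARN (CEPHADM_AGENT_DOWN plus CEPHADM_FAILED_DAEMON), so the loop never saw HEALTH_OK and the deadline fired during cephadm teardown.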
YES
6569286 -- https://tracker.ceph.com/issues/53424
hit max job timeout
cluster [WRN] Health check failed: Failed to place 2 daemon(s) (CEPHADM_DAEMON_PLACE_FAIL)
YES
6569344 -- https://tracker.ceph.com/issues/53680
ERROR:tasks.rook:'waiting for service removal' reached maximum tries (90) after waiting for 900 seconds
YES
6569400 -- https://tracker.ceph.com/issues/51847
AssertionError: wait_for_recovery: failed before timeout expired
YES
6569567
[ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Failures to watch:
6569383 -- ceph_objectstore_tool test
Failures unrelated:
6569399 -- https://tracker.ceph.com/issues/53681
6569450 -- https://tracker.ceph.com/issues/53499
6569647 -- might be related to https://tracker.ceph.com/issues/53448
6569286 -- https://tracker.ceph.com/issues/53424
6569344 -- https://tracker.ceph.com/issues/53680
6569400 -- might be related to https://tracker.ceph.com/issues/51847
6569567 -- Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Details:
Bug_#53681: Failed to extract uid/gid for path /var/lib/ceph - Ceph - Orchestrator
Bug_#53499: test_dashboard_e2e.sh Failure: orchestrator/02-hosts-inventory.e2e failed. - Ceph - Mgr - Dashboard
Bug_#53448: cephadm: agent failures double reported by two health checks - Ceph - Orchestrator
Bug_#53424: CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/ - Ceph - Orchestrator
Bug_#53680: ERROR:tasks.rook:'waiting for service removal' reached maximum tries (90) after waiting for 900 seconds - Ceph - Orchestrator
Bug_#51847: A PG in "incomplete" state may end up in a backfill loop. - Ceph - RADOS
lflores-2021-12-19_03:36:08-rados-wip-bluestore-zero-detection-distro-default-smithi¶
http://pulpito.front.sepia.ceph.com/lflores-2021-12-19_03:36:08-rados-wip-bluestore-zero-detection-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/lflores-2021-12-19_18:26:29-rados-wip-bluestore-zero-detection-distro-default-smithi/
Failures, unrelated:
6572638 -- timeout expired in wait_until_healthy -- https://tracker.ceph.com/issues/53448
6572650, 6572644 -- failed in master: http://pulpito.front.sepia.ceph.com/yuriw-2021-12-18_18:14:24-rados-wip-yuriw-master-12.18.21-distro-default-smithi/6569986/
6572643, 6572648 -- https://tracker.ceph.com/issues/53499
Details:
Bug_#53448: cephadm: agent failures double reported by two health checks - Ceph - Orchestrator
Bug_#53499: test_dashboard_e2e.sh Failure: orchestrator/02-hosts-inventory.e2e failed. - Ceph - Mgr - Dashboard
lflores-2021-12-10_05:30:11-rados-wip-primary-balancer-distro-default-smithi¶
Failures:
http://pulpito.front.sepia.ceph.com/lflores-2021-12-10_05:30:11-rados-wip-primary-balancer-distro-default-smithi/6556651/ -- src/osd/OSDMap.cc: 5835: FAILED ceph_assert(num_down_in_osds <= num_in_osds)
http://pulpito.front.sepia.ceph.com/lflores-2021-12-10_05:30:11-rados-wip-primary-balancer-distro-default-smithi/6556696/ -- Invalid argument Failed to validate Drive Group: OSD spec needs a `placement` key.
http://pulpito.front.sepia.ceph.com/lflores-2021-12-10_05:30:11-rados-wip-primary-balancer-distro-default-smithi/6556710/ -- Invalid argument Failed to validate Drive Group: OSD spec needs a `placement` key.
http://pulpito.front.sepia.ceph.com/lflores-2021-12-10_05:30:11-rados-wip-primary-balancer-distro-default-smithi/6556544/ -- Invalid argument Failed to validate Drive Group: OSD spec needs a `placement` key.
http://pulpito.front.sepia.ceph.com/lflores-2021-12-10_05:30:11-rados-wip-primary-balancer-distro-default-smithi/6556501/ -- osd.3 420 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.5218.0:7856 216.13 216:c84c9e4f:test-rados-api-smithi012-38462-88::foo:head [tier-flush] snapc 0=[] ondisk+read+ignore_cache+known_if_redirected+supports_pool_eio e419)
yuriw-2021-12-09_00:18:57-rados-wip-yuri-testing-2021-12-08-1336-distro-default-smithi¶
Failures, unrelated:
6553716, 6553740, 6553788, 6553822, 6553844, 6553876, 6553930, 6553953, 6553982, 6554000, 6554035, 6554063, 6554085 -- https://tracker.ceph.com/issues/53487
6553768, 6553897 -- failed in recent master baseline: http://pulpito.front.sepia.ceph.com/yuriw-2021-12-07_00:28:11-rados-wip-master_12.6.21-distro-default-smithi/6549263/
6553774 -- https://tracker.ceph.com/issues/50280
6553780, 6553993 -- https://tracker.ceph.com/issues/53499
6553781, 6553994 -- https://tracker.ceph.com/issues/53501
6554077 -- https://tracker.ceph.com/issues/51904
6553724 -- https://tracker.ceph.com/issues/52657
6553853 -- infrastructure failure
Details:
Bug_#53487: qa: mount error 22 = Invalid argument - Ceph - CephFS
Bug_#50280: cephadm: RuntimeError: uid/gid not found - Ceph
Bug_#53499: test_dashboard_e2e.sh Failure: orchestrator/02-hosts-inventory.e2e failed. - Ceph - Mgr - Dashboard
Bug_#53501: Exception when running 'rook' task. - Ceph - Orchestrator
Bug_#51904: AssertionError: wait_for_clean: failed before timeout expired due to down PGs - Ceph - RADOS
Bug_#52657: MOSDPGLog::encode_payload(uint64_t): Assertion `HAVE_FEATURE(features, SERVER_NAUTILUS)' - Ceph - RADOS
yuriw-2021-11-20_18:00:22-rados-wip-yuri6-testing-2021-11-20-0807-distro-basic-smithi¶
6516255, 6516370, 6516487, 6516611, 6516729, 6516851, 6516967 -- not this exact Tracker, but similar: https://tracker.ceph.com/issues/46398 -- Command failed on smithi117 with status 5: 'sudo systemctl stop ceph-5f34df08-4a33-11ec-8c2c-001a4aab830c@mon.a'
6516264, 6516643 -- https://tracker.ceph.com/issues/50280 -- Command failed on smithi124 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:aaf7014b1112f4ac5ff8a7d19040937c76cc3c26 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9eae1762-4a33-11ec-8c2c-001a4aab830c -- ceph mon dump -f json'
6516453, 6516879 -- https://tracker.ceph.com/issues/53287 -- Test failure: test_standby (tasks.mgr.test_prometheus.TestPrometheus)
6516751 -- seen in the recent master baseline: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-25_15:23:56-rados-wip-yuriw-master-11.24.21-distro-basic-smithi/6526537/ -- Command failed (workunit test rados/test.sh) on smithi017 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=aaf7014b1112f4ac5ff8a7d19040937c76cc3c26 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
6516753 -- https://tracker.ceph.com/issues/51945 -- Command failed (workunit test mon/caps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=aaf7014b1112f4ac5ff8a7d19040937c76cc3c26 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/caps.sh'
6516755 -- https://tracker.ceph.com/issues/53345 -- Test failure: test_daemon_restart (tasks.cephadm_cases.test_cli.TestCephadmCLI)
6516787, 6516362 -- https://tracker.ceph.com/issues/53353 -- Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi123 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=aaf7014b1112f4ac5ff8a7d19040937c76cc3c26 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
6516903 -- Command failed (workunit test scrub/osd-scrub-repair.sh) on smithi179 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=aaf7014b1112f4ac5ff8a7d19040937c76cc3c26 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh'
"nofallback" failure: https://tracker.ceph.com/issues/53487
New "e2e" failure: https://tracker.ceph.com/issues/53499
sage-2021-11-29_14:24:46-rados-master-distro-basic-smithi¶
https://pulpito.ceph.com/sage-2021-11-29_14:24:46-rados-master-distro-basic-smithi/
Failures tracked by:
[6533605] -- https://tracker.ceph.com/issues/50280 -- Command failed on smithi019 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 18561d74-5125-11ec-8c2d-001a4aab830c -- ceph osd crush tunables default'
[6533603, 6533628] -- https://tracker.ceph.com/issues/53287 -- Test failure: test_standby (tasks.mgr.test_prometheus.TestPrometheus)
[6533608, 6533616, 6533622, 6533627, 6533637, 6533642] -- Command failed on smithi042 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:69b04de2932d00fc7fcaa14c718595ec42f18e67 pull'
[6533614] -- https://tracker.ceph.com/issues/53345 -- Test failure: test_daemon_restart (tasks.cephadm_cases.test_cli.TestCephadmCLI)
[6533623, 6533641] -- https://tracker.ceph.com/issues/53353 -- Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi067 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=76db81421d171ab44a5bc7e9572f870733e5c8e3 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
[6533606] -- https://tracker.ceph.com/issues/50106 -- Command failed (workunit test scrub/osd-scrub-repair.sh) on smithi082 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=76db81421d171ab44a5bc7e9572f870733e5c8e3 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh'
Details:
Bug_#50280: cephadm: RuntimeError: uid/gid not found - Ceph
Bug_#53287: test_standby (tasks.mgr.test_prometheus.TestPrometheus) fails - Ceph - Mgr
Bug_#53345: Test failure: test_daemon_restart (tasks.cephadm_cases.test_cli.TestCephadmCLI) - Ceph - Orchestrator
Bug_#53353: mgr/dashboard: orchestrator/03-inventory.e2e-spec.ts failure - Ceph - Mgr - Dashboard
Bug_#50106: scrub/osd-scrub-repair.sh: corrupt_scrub_erasure: return 1 - Ceph - RADOS
yuriw-2021-11-16_13:07:14-rados-wip-yuri4-testing-2021-11-15-1306-distro-basic-smithi¶
Failures unrelated:
6506499, 6506504 -- Command failed on smithi059 with status 5: 'sudo systemctl stop ceph-2a24c9ac-46f2-11ec-8c2c-001a4aab830c@mon.a' -- tracked by https://tracker.ceph.com/issues/46035
Details:
Bug_#44824: cephadm: adding osd device is not idempotent - Ceph - Orchestrator
Bug_#52890: lsblk: vg_nvme/lv_4: not a block device - Tools - Teuthology
Bug_#50280: cephadm: RuntimeError: uid/gid not found - Ceph
Bug_#53123: mgr/dashboard: ModuleNotFoundError: No module named 'tasks.mgr.dashboard.test_ganesha' - Ceph - Mgr - Dashboard
Bug_#38048: Teuthology error: mgr/prometheus fails with NewConnectionError - Ceph - Mgr
Bug_#46035: Report the correct error when quay fails - Tools - Teuthology
https://trello.com/c/acNvAaS3/1380-wip-yuri4-testing-2021-11-15-1306¶
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-16_00:15:25-rados-wip-yuri4-testing-2021-11-15-1306-distro-basic-smithi
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-16_13:07:14-rados-wip-yuri4-testing-2021-11-15-1306-distro-basic-smithi
Master baseline; Nov. 16th: http://pulpito.front.sepia.ceph.com/?branch=wip-yuriw-master-11.12.21
yuriw-2021-11-16_00:15:25-rados-wip-yuri4-testing-2021-11-15-1306-distro-basic-smithi
Failures to watch:
[6505076] -- Command failed on smithi073 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7153db66-4692-11ec-8c2c-001a4aab830c -- ceph orch daemon add osd smithi073:vg_nvme/lv_3' -- could be related to https://tracker.ceph.com/issues/44824 or https://tracker.ceph.com/issues/52890
[6505401, 6505416] -- HTTPSConnectionPool(host='shaman.ceph.com', port=443): Max retries exceeded with url: /api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F20.04%2Fx86_64&sha1=f188280b31ba4dafe6a9cbafd87bae7a4fc52a64 (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fac788801d0>: Failed to establish a new connection: [Errno 110] Connection timed out',)) -- seen in a fairly recent master run according to Sentry
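The shaman lookup above is a single HTTP query that died on a connection timeout, which is why it reads as infrastructure rather than a product bug. A generic retry-with-backoff wrapper for this kind of transient failure (the policy and names here are an illustration, not what teuthology actually does):

```python
import time

def retry(fn, attempts=5, base_delay=1.0, retry_on=(OSError,), sleep=time.sleep):
    """Call fn(), retrying transient errors with exponential backoff.
    ConnectionError (as in the shaman failure) is a subclass of OSError,
    so it is covered by the default retry_on."""
    for attempt in range(attempts):
        try:
            return fn()
        except retry_on:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the original error
            sleep(base_delay * (2 ** attempt))
```

fn would wrap the `https://shaman.ceph.com/api/search?...` GET in this case; after `attempts` consecutive timeouts the original exception propagates, matching the "Max retries exceeded" shape urllib3 reports.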
Failures unrelated:
[6505055, 6505202] -- Command failed on smithi183 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:f188280b31ba4dafe6a9cbafd87bae7a4fc52a64 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b5fd1072-4690-11ec-8c2c-001a4aab830c -- ceph mon dump -f json' -- tracked by https://tracker.ceph.com/issues/50280
[6505067, 6505272, 6506500, 6506510] -- Test failure: test_ganesha (unittest.loader._FailedTest) -- tracked by https://tracker.ceph.com/issues/53123
[6505172, 6505376, 6506503, 6506514] -- Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi063 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f188280b31ba4dafe6a9cbafd87bae7a4fc52a64 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh' -- seen in recent master baseline run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-13_15:31:06-rados-wip-yuriw-master-11.12.21-distro-basic-smithi/6501542/
[6505216, 6505420, 6506507, 6506518] -- Test failure: test_standby (tasks.mgr.test_prometheus.TestPrometheus) -- could be related to this https://tracker.ceph.com/issues/38048; seen in recent master baseline run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-13_15:31:06-rados-wip-yuriw-master-11.12.21-distro-basic-smithi/6501586/
wip-yuri7-testing-2021-11-01-1748¶
Failures related:
6481640 -- Command failed (workunit test rados/test_dedup_tool.sh) on smithi090 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c073f2a96ead6e06491abf2e0a39845606181f34 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_dedup_tool.sh' -- related to #43481; also seen in a previous run: http://pulpito.front.sepia.ceph.com/yuriw-2021-10-29_17:42:41-rados-wip-yuri7-testing-2021-10-28-1307-distro-basic-smithi/6467499/
Failures unrelated, tracked in:
[6481465, 6481610, 6481724, 6481690] -- Command failed on smithi188 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c073f2a96ead6e06491abf2e0a39845606181f34 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f3718dce-3da4-11ec-8c28-001a4aab830c -- ceph mon dump -f json' -- tracked in https://tracker.ceph.com/issues/50280; also seen in master: http://pulpito.front.sepia.ceph.com/yuriw-2021-10-29_23:44:51-rados-wip-yuri-master-10.29.21-distro-basic-smithi/6468420/
6481477 -- Test failure: test_ganesha (unittest.loader._FailedTest) -- seen in master: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488501/
[6481503, 6481528] -- Command failed (workunit test rados/test.sh) on smithi043 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c073f2a96ead6e06491abf2e0a39845606181f34 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' -- tracked in https://tracker.ceph.com/issues/40926; seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488551/
6481538 -- Command failed on smithi063 with status 1: "sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c073f2a96ead6e06491abf2e0a39845606181f34 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 29bac0d4-3f81-11ec-8c28-001a4aab830c -- ceph orch apply mon '2;smithi029:172.21.15.29=smithi029;smithi063:172.21.15.63=smithi063'" -- tracked by https://tracker.ceph.com/issues/50280
6481583 -- reached maximum tries (800) after waiting for 4800 seconds -- tracked by https://tracker.ceph.com/issues/51576
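The "reached maximum tries (800) after waiting for 4800 seconds" message in 6481583 is the signature of teuthology's bounded retry loop (800 tries at a 6-second interval). A minimal Python sketch of that pattern, with simplified semantics -- the real helper lives in teuthology.contextutil (safe_while) and raises teuthology.exceptions.MaxWhileTries:

```python
import itertools
import time


class MaxWhileTries(Exception):
    """Stand-in for teuthology.exceptions.MaxWhileTries (illustrative)."""


def safe_while(sleep=6, tries=800):
    """Yield attempt numbers until the caller breaks out or the budget runs out.

    Simplified sketch: the real teuthology helper is a context manager
    yielding a `proceed` callable, but the retry accounting is the same --
    800 tries with a 6-second sleep is how a job ends up reporting
    "reached maximum tries (800) after waiting for 4800 seconds".
    """
    for attempt in itertools.count(1):
        yield attempt
        if attempt >= tries:
            raise MaxWhileTries(
                f"reached maximum tries ({tries}) "
                f"after waiting for {tries * sleep} seconds"
            )
        time.sleep(sleep)
```

A caller polls inside the loop and breaks on success; only exhausting all tries raises:

```python
for attempt in safe_while(sleep=6, tries=800):
    if cluster_is_ready():  # hypothetical condition being waited on
        break
```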
6481608 -- Command failed on smithi174 with status 1: 'sudo fuser -v /var/lib/dpkg/lock-frontend' -- seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-07_14:27:05-upgrade-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6490149/
6481624 -- Found coredumps on ubuntu@smithi080.front.sepia.ceph.com -- tracked by https://tracker.ceph.com/issues/53206; also seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488712/
6481677 -- Test failure: test_access_permissions (tasks.mgr.dashboard.test_cephfs.CephfsTest) -- tracked by https://tracker.ceph.com/issues/41949
6481727 -- Test failure: test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest) -- tracked by https://tracker.ceph.com/issues/52652
6481755 -- Command failed on smithi096 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c073f2a96ead6e06491abf2e0a39845606181f34 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 704120ac-3f9f-11ec-8c28-001a4aab830c -- ceph osd stat -f json' -- seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488543/
[6481580, 6481777] -- Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi043 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c073f2a96ead6e06491abf2e0a39845606181f34 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh' -- seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488604/
[6481449, 6481541, 6481589, 6481639, 6481686, 6481741, 6481785] hit max job timeout -- seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488412/
[6481819] -- similar to https://tracker.ceph.com/issues/46063
Details:
Bug #50280: cephadm: RuntimeError: uid/gid not found - Ceph
Bug #40926: "Command failed (workunit test rados/test.sh)" in rados - Ceph
Bug #51576: qa/tasks/radosbench.py times out - Ceph - RADOS
Bug #53206: Found coredumps on ubuntu@smithi115.front.sepia.ceph.com | IndexError: list index out of range - Tools - Teuthology
Bug #41949: test_access_permissions fails in tasks.mgr.dashboard.test_cephfs.CephfsTest - Ceph - Mgr - Dashboard
Bug #52652: ERROR: test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest) - Ceph - Mgr
Bug #46063: Could not find the requested service nrpe - Tools - Teuthology
wip-yuri-testing-2021-11-04-0731¶
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_15:14:19-rados-wip-yuri-testing-2021-11-04-0731-distro-basic-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-04_20:25:46-rados-wip-yuri-testing-2021-11-04-0731-distro-basic-smithi/
Failures unrelated, tracked in:
[6485385, 6485585, 6491076, 6491095] Test failure: test_ganesha (unittest.loader._FailedTest)
-- seen in master: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488501/
[6485484] Could not reconnect to ubuntu@smithi072.front.sepia.ceph.com
-- potentially related to https://tracker.ceph.com/issues/21317, but also seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488467/
[6485488] Command failed on smithi072 with status 1: 'sudo yum install -y kernel'
-- https://tracker.ceph.com/issues/37657
[6485616] timeout expired in wait_until_healthy
-- potentially related to https://tracker.ceph.com/issues/45701; also seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488669/
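The "timeout expired in wait_until_healthy" failure in 6485616 comes from a poll loop that waits for the cluster to report HEALTH_OK. A hedged sketch of that kind of loop -- the function name, parameters, and injected `get_status` callable are illustrative simplifications; the real teuthology helper queries `ceph health` on a cluster node:

```python
import time


def wait_until_healthy(get_status, timeout=300, interval=5):
    """Poll get_status() until it returns 'HEALTH_OK' or the timeout expires.

    Illustrative sketch only: get_status is injected so the loop can run
    without a cluster; in teuthology the status comes from running
    `ceph health` remotely.  Raises on timeout, matching the
    "timeout expired in wait_until_healthy" failure text.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_status() == "HEALTH_OK":
            return
        time.sleep(interval)
    raise RuntimeError("timeout expired in wait_until_healthy")
```

In a run like 6485616 the cluster never reached HEALTH_OK (e.g. a CEPHADM_REFRESH_FAILED warning per #45701 persisted), so the deadline passed and the task failed with this message.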
[6485669] Command failed (workunit test rados/test.sh) on smithi072 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7dfe919ae09055d470f758696db643d04ca0f304 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
-- seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488551/
[6491087, 6491102, 6485685] Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi013 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7dfe919ae09055d470f758696db643d04ca0f304 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
-- seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488803/
[6485698] Command failed (workunit test osd/osd-rep-recov-eio.sh) on smithi090 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7dfe919ae09055d470f758696db643d04ca0f304 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-rep-recov-eio.sh'
-- seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/
[6485733] Found coredumps on ubuntu@smithi038.front.sepia.ceph.com
-- seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488712/
[6485487, 6485492] SSH connection to smithi072 was lost: 'rpm -q kernel --last | head -n 1'
-- seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488521/
[6485497, 6485547, 6485594, 6485649, 6485693] hit max job timeout
-- seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488412/
Failures untracked, but likely unrelated:
[6485465, 6485471] 'get_status smithi050.front.sepia.ceph.com' reached maximum tries (10) after waiting for 32.5 seconds
-- similar test passed in master: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488581/; http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488587/
[6485513] machine smithi072.front.sepia.ceph.com is locked by scheduled_teuthology@teuthology, not scheduled_yuriw@teuthology
-- similar test passed in master: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488630/
[6485589] Command failed (workunit test cls/test_cls_lock.sh) on smithi038 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7dfe919ae09055d470f758696db643d04ca0f304 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_lock.sh'
-- same test passed in recent master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488706/; however, failed in a past master run: http://pulpito.front.sepia.ceph.com/teuthology-2021-09-26_07:01:03-rados-master-distro-basic-gibba/6408301/
[6485485] Error reimaging machines: 500 Server Error: Internal Server Error for url: http://fog.front.sepia.ceph.com/fog/host/191/task
-- similar test passed in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488601/
[6485495] {'smithi072.front.sepia.ceph.com': {'changed': False, 'msg': 'Data could not be sent to remote host "smithi072.front.sepia.ceph.com". Make sure this host can be reached over ssh: Warning: Permanently added \'smithi072.front.sepia.ceph.com,172.21.15.72\' (ECDSA) to the list of known hosts.\r\nubuntu@smithi072.front.sepia.ceph.com: Permission denied (publickey,password,keyboard-interactive).\r\n', 'unreachable': True}}
-- similar test passed in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488611/
[6485498] {'Failure object was': {'smithi072.front.sepia.ceph.com': {'msg': 'non-zero return code', 'cmd': ['semodule', '-i', '/tmp/nrpe.pp'], 'stdout': '', 'stderr': 'libsemanage.semanage_get_lock: Could not get direct transaction lock at /var/lib/selinux/targeted/semanage.trans.LOCK. (Resource temporarily unavailable).\\nsemodule: Failed on /tmp/nrpe.pp!', 'rc': 1, 'start': '2021-11-05 14:21:58.104549', 'end': '2021-11-05 14:22:03.111651', 'delta': '0:00:05.007102', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'semodule -i /tmp/nrpe.pp', 'warn': True, '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': 'None', 'chdir': 'None', 'executable': 'None', 'creates': 'None', 'removes': 'None', 'stdin': 'None'}}, 'stdout_lines': [], 'stderr_lines': ['libsemanage.semanage_get_lock: Could not get direct transaction lock at /var/lib/selinux/targeted/semanage.trans.LOCK. (Resource temporarily unavailable).', 'semodule: Failed on /tmp/nrpe.pp!'], '_ansible_no_log': False}}, 'Traceback (most recent call last)': 'File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure log.error(yaml.safe_dump(failure)) File "/home/teuthworker/src/git.ceph.com_git_teuthology_27954452159076fda5642b6c0eb0c4998b99d2e4/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 306, in safe_dump return dump_all([data], stream, Dumper=SafeDumper, **kwds) File "/home/teuthworker/src/git.ceph.com_git_teuthology_27954452159076fda5642b6c0eb0c4998b99d2e4/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 278, in dump_all dumper.represent(data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_27954452159076fda5642b6c0eb0c4998b99d2e4/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 27, in represent node = self.represent_data(data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_27954452159076fda5642b6c0eb0c4998b99d2e4/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_27954452159076fda5642b6c0eb0c4998b99d2e4/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping(\'tag:yaml.org,2002:map\', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_27954452159076fda5642b6c0eb0c4998b99d2e4/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping node_value = self.represent_data(item_value) File "/home/teuthworker/src/git.ceph.com_git_teuthology_27954452159076fda5642b6c0eb0c4998b99d2e4/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_27954452159076fda5642b6c0eb0c4998b99d2e4/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping(\'tag:yaml.org,2002:map\', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_27954452159076fda5642b6c0eb0c4998b99d2e4/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 117, in represent_mapping node_key = self.represent_data(item_key) File "/home/teuthworker/src/git.ceph.com_git_teuthology_27954452159076fda5642b6c0eb0c4998b99d2e4/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 58, in represent_data node = self.yaml_representers[None](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_27954452159076fda5642b6c0eb0c4998b99d2e4/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 231, in represent_undefined raise RepresenterError("cannot represent an object", data)', 'yaml.representer.RepresenterError': "('cannot represent an object', 'changed')"}
-- similar test passed in master: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488614/
[6485665] Error reimaging machines: Failed to power on smithi038
-- similar test passed in master: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488783/
Details:
Bug #21317: Update VPS with latest distro: RuntimeError: Could not reconnect to ubuntu@vpm129.front.sepia.ceph.com - Infrastructure - Sepia
Bug #37657: Command failed on smithi075 with status 1: 'sudo yum install -y kernel' - Ceph
Bug #45701: rados/cephadm/smoke-roleless fails due to CEPHADM_REFRESH_FAILED health check - Ceph - Orchestrator
wip-pg-stats¶
http://pulpito.front.sepia.ceph.com/lflores-2021-11-08_21:48:32-rados-wip-pg-stats-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/lflores-2021-11-08_19:58:04-rados-wip-pg-stats-distro-default-smithi/
Failures unrelated, tracked in:
[6492777, 6492791, 6491519, 6491534] -- seen in master: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488501/
[6492779, 6492789, 6491522] -- tracked by https://tracker.ceph.com/issues/53206; seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488712/
[6492785, 6492798, 6491527, 6491540] -- seen in master run: http://pulpito.front.sepia.ceph.com/yuriw-2021-11-06_17:01:58-rados-wip-yuri-testing-master-11.5.21-distro-basic-smithi/6488803/
Details:
Bug #53206: Found coredumps on ubuntu@smithi115.front.sepia.ceph.com | IndexError: list index out of range - Tools - Teuthology