SQUID

Summaries are ordered from latest to oldest.

https://tracker.ceph.com/issues/65859

Failures, unrelated:
1. https://tracker.ceph.com/issues/59380 - rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RADOS
2. https://tracker.ceph.com/issues/59196 - ceph_test_lazy_omap_stats segfault while waiting for active+clean - RADOS
3. https://tracker.ceph.com/issues/63531 - Error authenticating with smithiXXX.front.sepia.ceph.com: SSHException('No existing session') (No SSH private key found!) - Infrastructure
4. https://tracker.ceph.com/issues/61774 - centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - RADOS
5. https://tracker.ceph.com/issues/65017 - cephadm: log_channel(cephadm) log [ERR] : Failed to connect to smithi090 (10.0.0.9). Permission denied - Orchestrator
6. https://tracker.ceph.com/issues/61850 - LibRadosWatchNotify.AioNotify: malloc(): unaligned tcache chunk detected - RADOS
7. https://tracker.ceph.com/issues/65183 - Overriding an EC pool needs the "--yes-i-really-mean-it" flag in addition to "force" - RADOS (see the command sketch after this list)
8. https://tracker.ceph.com/issues/65860 - Upgrade test re-opts into new telemetry collections too late - Mgr
9. https://tracker.ceph.com/issues/58223 - failure on `sudo fuser -v /var/lib/dpkg/lock-frontend` - Infrastructure
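
Aside on item 7 above: the tracker title describes a CLI behavior change, so a short sketch may help. This is a minimal sketch assuming the affected command is ceph osd erasure-code-profile set; the profile name and k/m values below are hypothetical:

    # Hypothetical profile. Per the tracker title, redefining an existing
    # profile with different values now requires --yes-i-really-mean-it
    # on top of the pre-existing --force.
    ceph osd erasure-code-profile set myprofile k=4 m=2 --force --yes-i-really-mean-it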

https://tracker.ceph.com/issues/65594

Failures, unrelated:
1. https://tracker.ceph.com/issues/59380 - rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RADOS
2. https://tracker.ceph.com/issues/56770 - crash: void OSDShard::register_and_wake_split_child(PG*): assert(p != pg_slots.end()) - RADOS
3. https://tracker.ceph.com/issues/65732 - rados/cephadm/osds: job times out during nvme_loop interval - Orchestrator
4. https://tracker.ceph.com/issues/61774 - centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - RADOS
5. https://tracker.ceph.com/issues/65183 - Overriding an EC pool needs the "--yes-i-really-mean-it" flag in addition to "force" - RADOS
6. https://tracker.ceph.com/issues/65017 - cephadm: log_channel(cephadm) log [ERR] : Failed to connect to smithi090 (10.0.0.9). Permission denied - Orchestrator
7. https://tracker.ceph.com/issues/65852 - ceph_test_rados command hits ceph_abort when trying to delete op - RADOS
8. https://tracker.ceph.com/issues/64437 - qa/standalone/scrub/osd-scrub-repair.sh: TEST_repair_stats_ec: test 26 = 13 - RADOS
9. https://tracker.ceph.com/issues/65860 - Upgrade test re-opts into new telemetry collections too late - Mgr
10. https://tracker.ceph.com/issues/63789 - LibRadosIoEC test failure - RADOS
11. https://tracker.ceph.com/issues/50371 - Segmentation fault (core dumped) ceph_test_rados_api_watch_notify_pp - RADOS
12. https://tracker.ceph.com/issues/59196 - ceph_test_lazy_omap_stats segfault while waiting for active+clean - RADOS

https://tracker.ceph.com/issues/65688

Failures, unrelated:
  1. https://tracker.ceph.com/issues/65391
  2. https://tracker.ceph.com/issues/61774
  3. https://tracker.ceph.com/issues/64437
  4. https://tracker.ceph.com/issues/65765 -- New
  5. https://tracker.ceph.com/issues/65183
  6. https://tracker.ceph.com/issues/64725
  7. https://tracker.ceph.com/issues/59196
  8. https://tracker.ceph.com/issues/65768 -- New
Details:
  1. squid: osd/scrub: "reservation requested while still reserved" error in cluster log
  2. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons
  3. qa/standalone/scrub/osd-scrub-repair.sh: TEST_repair_stats_ec: test 26 = 13
  4. squid: rados/test.sh: LibRadosWatchNotifyECPP.WatchNotify test of api_watch_notify_pp suite didn't complete. -- NEW
  5. Overriding an EC pool needs the "--yes-i-really-mean-it" flag in addition to "force"
  6. rados/singleton: application not enabled on pool 'rbd' (see the command sketch after this list)
  7. ceph_test_lazy_omap_stats segfault while waiting for active+clean
  8. rados/verify: "Health check failed: 1 osds down (OSD_DOWN)" in cluster log -- NEW
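
Aside on item 6 above: the POOL_APP_NOT_ENABLED warning clears once the pool is tagged with the application that uses it. A minimal sketch, assuming the pool named 'rbd' from the failure message is meant for RBD:

    # Tag the 'rbd' pool with the 'rbd' application to clear the warning.
    ceph osd pool application enable rbd rbd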

https://tracker.ceph.com/issues/65655

Failures, unrelated:
7674283, 7674350, 7674412, 7674538, 7674476: Valgrind (known issue: centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons). https://tracker.ceph.com/issues/61774
7674286 - a scrub status reporting issue ("not all pgs scrubbed"). A scrub error, but unrelated. A new tracker will be opened.
7674288 - TEST_repair_stats_ec (test 26 = 13). https://tracker.ceph.com/issues/64437
7674289, 7674276, 7674524 - infra (ceph-radosgw installation failed). https://tracker.ceph.com/issues/65448
7674292 - wait_for_recovery: failed before timeout expired (rados_api_tests). https://tracker.ceph.com/issues/63198
7674307, 7674378, 7674514, 7674448 - Overriding an EC pool needs the "--yes-i-really-mean-it" flag. https://tracker.ceph.com/issues/65183
7674319, 7674463, 7674498 - telemetry requires opt-in (see the re-opt-in sketch after this run's list). https://tracker.ceph.com/issues/64458
7674320 - a timed-out valgrind run. Unrelated.
7674401, 7674540 - ceph_test_lazy_omap_stats. https://tracker.ceph.com/issues/59196
7674486 - unrelated. No OSD logs.
7674530 - dashboard-e2e. Failure in '01-hosts.e2e-spec.ts'. https://tracker.ceph.com/issues/61786
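
Aside on the telemetry jobs above: after an upgrade introduces new telemetry collections, the cluster raises TELEMETRY_CHANGED until it re-opts in. A minimal sketch of the re-opt-in, using the standard telemetry module command:

    # Re-accept the telemetry license to opt back in after new collections appear.
    ceph telemetry on --license sharing-1-0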

https://tracker.ceph.com/issues/65510

Failures, unrelated:
7659275, 7659345, 7659406, 7659407, 7659470 - https://tracker.ceph.com/issues/61774
7659280 - https://tracker.ceph.com/issues/64437
7659281 - Failed to connect to smithi080 (172.21.15.80). Permission denied (cf. https://tracker.ceph.com/issues/65017)
7659285 - https://tracker.ceph.com/issues/62839
7659292 - https://tracker.ceph.com/issues/52109
7659300, 7659372, 7659443, 7659482, 7659512 - https://tracker.ceph.com/issues/65183
7659304 - https://tracker.ceph.com/issues/65448
7659305 - https://tracker.ceph.com/issues/65521
7659312, 7659457 - https://tracker.ceph.com/issues/65186
7659395, 7659539 - https://tracker.ceph.com/issues/59196
7659537 - https://tracker.ceph.com/issues/65449
7659542 - https://tracker.ceph.com/issues/44510
New failures, unrelated:
7659455, 7659310 - https://tracker.ceph.com/issues/65567

https://tracker.ceph.com/issues/65385

Summary of the rados + upgrade suites; hence the larger list.

Failures, unrelated:
1. https://tracker.ceph.com/issues/65234
2. https://tracker.ceph.com/issues/65235
3. https://tracker.ceph.com/issues/64502
4. https://tracker.ceph.com/issues/64868
5. https://tracker.ceph.com/issues/64460
6. https://tracker.ceph.com/issues/64707
7. https://tracker.ceph.com/issues/65231
8. https://tracker.ceph.com/issues/65189
9. https://tracker.ceph.com/issues/64458
10. https://tracker.ceph.com/issues/65185
11. https://tracker.ceph.com/issues/65421
12. https://tracker.ceph.com/issues/65422
13. https://tracker.ceph.com/issues/65233
14. https://tracker.ceph.com/issues/65183
15. https://tracker.ceph.com/issues/61774
16. https://tracker.ceph.com/issues/58130
17. https://tracker.ceph.com/issues/56620
18. https://tracker.ceph.com/issues/52109
19. https://tracker.ceph.com/issues/64437
20. https://tracker.ceph.com/issues/59196
21. https://tracker.ceph.com/issues/64377
22. https://tracker.ceph.com/issues/65448
23. https://tracker.ceph.com/issues/65449
24. https://tracker.ceph.com/issues/65450

Details:
1. upgrade/quincy-x/stress-split: cephadm failed to parse grafana.ini file due to inadequate permission - Ceph - Orchestrator
2. upgrade/reef-x/stress-split: "OSDMAP_FLAGS: noscrub flag(s) set" warning in cluster log - Ceph - RADOS
3. pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main - Ceph - CephFS
4. cephadm/osds, cephadm/workunits: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) in cluster log - Ceph - RADOS
5. rados/upgrade/parallel: "[WRN] MON_DOWN: 1/3 mons down, quorum a,b" in cluster log - Ceph - RADOS
6. suites/fsstress.sh hangs on one client - test times out - Ceph - CephFS
7. upgrade/quincy-x/parallel: "Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log - Ceph - RADOS
8. Telemetry pacific-x upgrade test pauses when upgrading to squid - Ceph - Mgr
9. rados/upgrade/parallel: [WRN] TELEMETRY_CHANGED: Telemetry requires re-opt-in - Ceph - Mgr
10. OSD_SCRUB_ERROR, inconsistent pg in upgrade tests - Ceph - RADOS
11. upgrade/reef-x/stress-split: TestMigration.StressLive failure - Ceph - RBD
12. upgrade/quincy-x/parallel: "1 pg degraded (PG_DEGRADED)" in cluster log - Ceph - RADOS
13. upgrade/cephfs/mds_upgrade_sequence: 'ceph orch ps' command times out - Ceph - Orchestrator
14. Overriding an EC pool needs the "--yes-i-really-mean-it" flag in addition to "force" - Ceph - RADOS
15. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
16. LibRadosAio.SimpleWrite hang and pkill - Ceph - RADOS
17. Deploying a ceph cluster with cephadm: an OSD created with the 'ceph-volume lvm create' command cannot be managed by cephadm - Ceph - Ceph-Volume
18. test_cephadm.sh: Timeout('Port 8443 not free on 127.0.0.1.',) - Ceph - Orchestrator
19. qa/standalone/scrub/osd-scrub-repair.sh: TEST_repair_stats_ec: test 26 = 13 - Ceph - RADOS
20. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
21. tasks/e2e: Modular dependency problems - Ceph - Mgr - Dashboard
22. Teuthology unable to find the "ceph-radosgw" package - Ceph - RGW
23. NeoRadosWatchNotify.WatchNotifyTimeout failed due to nonexistent pool - Ceph - RADOS
24. rados/thrash-old-clients: "PG_BACKFILL: Low space hindering backfill" warning in cluster log - Ceph - RADOS

https://tracker.ceph.com/issues/65250

Failures, unrelated:
1. https://tracker.ceph.com/issues/61774
2. https://tracker.ceph.com/issues/64434
3. https://tracker.ceph.com/issues/59196
4. https://tracker.ceph.com/issues/65183
5. https://tracker.ceph.com/issues/59380
6. https://tracker.ceph.com/issues/64347
7. https://tracker.ceph.com/issues/64726
8. https://tracker.ceph.com/issues/64437
9. https://tracker.ceph.com/issues/64374
10. https://tracker.ceph.com/issues/65017

Details:
1. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
2. rados/cephadm/osds: [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s) - Ceph - Orchestrator
3. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
4. Overriding an EC pool needs the "--yes-i-really-mean-it" flag in addition to "force" - Ceph - RADOS
5. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RADOS
6. src/osd/PG.cc: FAILED ceph_assert(!bad || !cct->_conf->osd_debug_verify_cached_snaps) - Ceph - RADOS
7. LibRadosAioEC.MultiWritePP hang and pkill - Ceph - RADOS
8. qa/standalone/scrub/osd-scrub-repair.sh: TEST_repair_stats_ec: test 26 = 13 - Ceph - RADOS
9. Error ENOENT: module 'cephadm' reports that it cannot run on the active manager daemon: No module named 'mgr_module' (pass --force to force enablement) - Ceph - Orchestrator
10. cephadm: log_channel(cephadm) log [ERR] : Failed to connect to smithi090 (10.0.0.9). Permission denied - Ceph - Orchestrator

https://tracker.ceph.com/issues/64973

Failures, unrelated:
1. https://tracker.ceph.com/issues/59380
2. https://tracker.ceph.com/issues/61774
3. https://tracker.ceph.com/issues/64434
4. https://tracker.ceph.com/issues/64437
5. https://tracker.ceph.com/issues/64057
6. https://tracker.ceph.com/issues/62535
7. https://tracker.ceph.com/issues/64118
8. https://tracker.ceph.com/issues/59196
9. https://tracker.ceph.com/issues/54439

Details:
1. rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FULL)" and "Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)" - Ceph - RADOS
2. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons - Ceph - RADOS
3. rados/cephadm/osds: [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s) - Ceph - Orchestrator
4. qa/standalone/scrub/osd-scrub-repair.sh: TEST_repair_stats_ec: test 26 = 13 - Ceph - RADOS
5. task/test_cephadm_timeout - failed with timeout - Ceph - Orchestrator
6. cephadm: wait for healthy state times out because cephadm agent is down - Ceph - Orchestrator
7. cephadm: RuntimeError: Failed command: apt-get update: E: The repository 'https://download.ceph.com/debian-quincy jammy Release' does not have a Release file. - Ceph - Orchestrator
8. ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS
9. LibRadosWatchNotify.WatchNotify2Multi fails - Ceph - RADOS
