Ceph - v12.2.14 (63% complete): 27 issues (17 closed, 10 open)

Related issues:
- Bug #45698: PrioritizedQueue: messages in normal queue
- Bug #47204: ceph osd getting shutdown after joining to cluster
- Bug #48505: osdmaptool crush
- Bug #48855: OSD_SUPERBLOCK checksum failed after node restart
- Bug #49409: osd run into dead loop and tell slow request when rollback snap with using cache tier
- Bug #49448: If OSD types are changed, pools rules can become unresolvable without providing health warnings
Ceph - v14.2.23 (51% complete): 39 issues (19 closed, 20 open)

Related issues:
- Bug #54548: mon hang when run ceph -s command after execute "ceph osd in osd.<x>" command
- Bug #54556: Pools are wrongly reported to have non-power-of-two pg_num after update
- Bug #55424: ceph-mon process exit in dead status, which backtrace displayed has blocked by compact_queue_thread
Ceph - v16.2.15 (89% complete): 229 issues (204 closed, 25 open)

Related issues:
- Bug #64311: pacific: reinforce spawn_worker of msg/async
Ceph - v17.2.4 (50% complete): 4 issues (2 closed, 2 open)

Related issues:
- Bug #58410: Set single compression algorithm as a default value in ms_osd_compression_algorithm instead of list of algorithms
Ceph - v17.2.6 (75% complete): 4 issues (3 closed, 1 open)

Related issues:
- Bug #62872: ceph osd_max_backfills default value is 1000
Ceph - v17.2.8 (54% complete): 61 issues (33 closed, 28 open)

Related issues:
- Bug #64562: Occasional segmentation faults in ScrubQueue::collect_ripe_jobs
Ceph - v20.0.0 (T release) (13% complete): 44 issues (4 closed, 40 open)

Related issues:
- Bug #64968: mon: MON_DOWN warnings when mons are first booting
- Bug #64972: qa: "ceph tell 4.3a deep-scrub" command not found