v12.2.14: 63% complete, 27 issues (17 closed, 10 open)

Related issues:
  Bug #45670: luminous: osd: too many store transactions when an osd receives an incremental osdmap but repeatedly fails to encode the full map with a correct crc
  CephFS - Bug #49503: standby-replay mds assert failed when replay
  mgr - Bug #49408: osd runs into a dead loop and reports slow requests when rolling back a snap on a cache tier
  RADOS - Bug #45698: PrioritizedQueue: messages in normal queue
  RADOS - Bug #47204: ceph osd shuts down after joining the cluster
  RADOS - Bug #48505: osdmaptool crush
  RADOS - Bug #48855: OSD_SUPERBLOCK checksum failed after node restart
  RADOS - Bug #49409: osd runs into a dead loop and reports slow requests when rolling back a snap on a cache tier (sketch below)
  RADOS - Bug #49448: If OSD types are changed, pool rules can become unresolvable without health warnings
  rgw - Bug #45154: the command "radosgw-admin orphans list-jobs" failed
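Bugs #49408/#49409 describe snapshot rollback against a cache-tiered pool. A minimal shell sketch of that setup, assuming a disposable test cluster and hypothetical pool names "base" and "hot" (the tiering commands are the standard ones from the Ceph docs; the reported dead loop occurs around the rollback step):

    # Attach a writeback cache tier "hot" in front of pool "base"
    ceph osd pool create base 64
    ceph osd pool create hot 64
    ceph osd tier add base hot
    ceph osd tier cache-mode hot writeback
    ceph osd tier set-overlay base hot
    ceph osd pool set hot hit_set_type bloom

    # Write an object, take a pool snapshot, overwrite, then roll back
    rados -p base put obj1 /etc/hosts
    rados -p base mksnap snap1
    rados -p base put obj1 /etc/services
    rados -p base rollback obj1 snap1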
v14.2.23: 51% complete, 39 issues (19 closed, 20 open)

Related issues:
  Bug #54189: multisite: metadata sync will skip the first child of pos_to_prev
  Bug #55461: ceph osd crush swap-bucket {old_host} {new_host} where {old_host}={new_host} crashes monitors (sketch below)
  Bug #56554: rgw::IAM::s3GetObjectTorrent never takes effect
  Bug #57221: ceph warn (important)
  Bug #63337: monmap's features are sometimes 0
  Bug #63429: librbd: mirror snapshot remove same snap_id twice
  Feature #55166: disable delete bucket from rgw
  bluestore - Bug #56467: nautilus: osd crashes with _do_alloc_write failed with (28) No space left on device
  ceph-volume - Bug #52340: ceph-volume: lvm activate: "tags" not defined
  ceph-volume - Bug #53136: The capacity used by the ceph cache tier pool exceeds target_max_bytes
  CephFS - Bug #54421: mds: assert failure in Server::_dir_is_nonempty() because xlocker of filelock is -1
  mgr - Bug #51637: mgr/insights: mgr consumes excessive amounts of memory
  mgr - Bug #63804: mgr/restful module /request with body '{"prefix": "pg dump", "format": "json"}' fails with "access denied"
  RADOS - Bug #54548: mon hangs when running ceph -s after executing "ceph osd in osd.<x>"
  RADOS - Bug #54556: Pools are wrongly reported to have non-power-of-two pg_num after update
  RADOS - Bug #55424: ceph-mon process exits in dead status; the displayed backtrace shows it blocked on compact_queue_thread
  rbd - Bug #54027: The file system takes a long time to build on an iSCSI disk backed by rbd
  rgw - Bug #53431: When using radosgw-admin to create a user with an empty uid, the error message is unreasonable
  rgw - Bug #53668: Why not add an xxx.retry obj to multisite metadata synchronization for exception retries
  rgw - Bug #53708: ceph multisite sync of deleted unversioned objects failed
  rgw - Bug #53745: crash on null coroutine under RGWDataSyncShardCR::stop_spawned_services
  rgw - Bug #54254: when the remove-all parameter of the rgw admin trim usage operation is set to false, usage is still trimmed
  rgw - Bug #55131: radosgw crashes at RGWIndexCompletionManager::create_completion
  rgw - Bug #58105: `DeleteBucketPolicy` cannot delete a policy in the slave zonegroup
  rgw - Bug #58721: rgw_rename leads to a librgw.so segmentation fault
  rgw - Bug #61817: Ceph swift error: create container returns 404
  rgw - Feature #53455: [RFE] Ill-formatted JSON response from RGW
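Bug #55461 involves `ceph osd crush swap-bucket` called with identical arguments. A reproduction sketch, assuming a throwaway test cluster and a hypothetical CRUSH host bucket named "host1"; do not try this on a cluster you care about:

    # Swapping a CRUSH bucket with itself is what reportedly crashes the mons
    # (some releases may also require --yes-i-really-mean-it)
    ceph osd crush swap-bucket host1 host1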
v16.2.15: 89% complete, 229 issues (204 closed, 25 open)

Related issues:
  Bug #63327: Cython compile error
  Bug #63345: install_dep.sh error
  Bug #63493: Problem with PG deep-scrubbing in ceph
  Bug #64256: "Cannot download repodata/repomd.xml: All mirrors were tried" rc: 1 results: [] in pacific-x-quincy
  Bug #64279: "Error ENOTSUP: Warning: due to ceph-mgr restart" in octopus-x/pacific suite
  bluestore - Bug #63606: pacific: ObjectStore/StoreTestSpecificAUSize.BluestoreBrokenNoSharedBlobRepairTest/2 triggers FAILED ceph_assert(_kv_only || mounted)
  CephFS - Bug #61732: pacific: test_cluster_info fails with "No daemons reported"
  RADOS - Bug #64311: pacific: reinforce spawn_worker of msg/async
  rbd - Bug #62586: TestClsRbd.mirror_snapshot failure in pacific p2p
  rgw - Bug #63177: RGW user quota is not honored when the bucket owner differs from the uploader (sketch below)
  rgw - Bug #64203: RGW S3: list bucket returns a 500 error when object-lock is enabled
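Bug #63177 concerns user-quota enforcement in RGW. For reference, the standard radosgw-admin quota commands, with a hypothetical uid "alice":

    # Set and enable a 1 GiB user-scope quota, then refresh usage stats
    radosgw-admin quota set --quota-scope=user --uid=alice --max-size=1073741824
    radosgw-admin quota enable --quota-scope=user --uid=alice
    radosgw-admin user stats --uid=alice --sync-stats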
RADOS - v17.2.4: 50% complete, 4 issues (2 closed, 2 open)

Related issues:
  RADOS - Bug #58410: Set a single compression algorithm as the default value of ms_osd_compression_algorithm instead of a list of algorithms (sketch below)
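Bug #58410 targets the default of ms_osd_compression_algorithm. Inspecting and pinning it to one algorithm uses the generic config commands; a sketch (which algorithm values are accepted depends on the build):

    # Show the current, list-valued default, then pin a single algorithm
    ceph config get osd ms_osd_compression_algorithm
    ceph config set osd ms_osd_compression_algorithm snappy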
RADOS - v17.2.6: 75% complete, 4 issues (3 closed, 1 open)

Related issues:
  RADOS - Bug #62872: ceph osd_max_backfills default value is 1000 (sketch below)
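Bug #62872 reports osd_max_backfills reading 1000; on Quincy this is typically the mClock scheduler overriding the recovery knobs rather than a changed packaged default. A sketch for checking and, if needed, overriding it (option names as in Quincy/Reef; verify against your release):

    # Compare the stored default with what a running OSD actually uses
    ceph config get osd osd_max_backfills
    ceph tell osd.0 config get osd_max_backfills

    # With mClock active, manual overrides must be explicitly allowed first
    ceph config set osd osd_mclock_override_recovery_settings true
    ceph config set osd osd_max_backfills 1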
v17.2.8: 54% complete, 61 issues (33 closed, 28 open)

Related issues:
  Bug #63918: 17.2.7 ceph-volume errors out if no valid s
  bluestore - Bug #64444: No valid allocation info on disk (empty file)
  ceph-volume - Bug #64560: ceph-volume: when creating an OSD, vgcreate fails with "failed to find PV" on stderr (sketch below)
  Orchestrator - Bug #64424: Ceph orch unsuitable for stateless / RAM-booted hosts
  RADOS - Bug #64562: Occasional segmentation faults in ScrubQueue::collect_ripe_jobs
  rgw - Bug #61710: quincy/pacific: PUT requests during reshard of a versioned bucket fail with 404 and leave behind dark data
  rgw - Bug #64527: Radosgw 504 timeouts & garbage collection is frozen
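Bug #64560 surfaces during OSD creation, where ceph-volume invokes vgcreate internally. The usual entry points, with a placeholder device path:

    # Single-step create (prepare + activate) on a raw device
    ceph-volume lvm create --data /dev/sdb

    # Or the two-step flow; vgcreate runs during prepare
    ceph-volume lvm prepare --data /dev/sdb
    ceph-volume lvm activate --all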
v18.2.3: 80% complete, 35 issues (28 closed, 7 open)

Related issues:
  Bug #64295: Ceph exporter does not produce usable RGW metrics in k8s envs
  CephFS - Bug #64441: reef: qa: add upgrade testing from a minor release of reef (v18.2.[01]) to reef HEAD
v19.1.0: 78% complete, 23 issues (18 closed, 5 open)

Related issues:
  CephFS - Bug #55725: MDS allows a (kernel) client to exceed the xattrs key/value limits
  CephFS - Bug #64490: mds: some request errors come from errno.h rather than fs_types.h
  CephFS - Bug #64503: client: log message when unmount call is received
  CephFS - Bug #64748: reef: snaptest-git-ceph.sh failure
  CephFS - Tasks #63669: qa: add teuthology tests for quiescing a group of subvolumes
  Dashboard - Feature #64890: mgr/dashboard: update NVMe-oF API
v20.0.0 (T release): 13% complete, 44 issues (4 closed, 40 open)

Related issues:
  CephFS - Bug #48562: qa: scrub - object missing on disk; some files may be lost
  CephFS - Bug #51197: qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
  CephFS - Bug #63866: mount command returning misleading error message
  CephFS - Bug #64008: mds: CInode::item_caps used in two different lists
  CephFS - Bug #64477: pacific: rados/cephadm/mgr-nfs-upgrade: [WRN] client session with duplicated session uuid 'ganesha-nfs.foo.XXX' denied
  CephFS - Bug #64486: qa: enhance labeled perf counters test for cephfs-mirror
  CephFS - Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
  CephFS - Bug #64537: mds: lower the log level when rejecting a session reclaim request
  CephFS - Bug #64542: Difference in error code returned while removing system xattrs using removexattr()
  CephFS - Bug #64563: mds: enhance laggy client detection due to laggy OSDs
  CephFS - Bug #64572: workunits/fsx.sh failure
  CephFS - Bug #64602: tools/cephfs: cephfs-journal-tool does not recover dentries with alternate_name
  CephFS - Bug #64616: selinux denials with centos9.stream
  CephFS - Bug #64641: qa: add multifs root_squash testcase
  CephFS - Bug #64685: mds: disable defer_client_eviction_on_laggy_osds by default
  CephFS - Bug #64700: Test failure: test_mount_all_caps_absent (tasks.cephfs.test_multifs_auth.TestClientsWithoutAuth)
  CephFS - Bug #64707: suites/fsstress.sh hangs on one client - test times out
  CephFS - Bug #64711: Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
  CephFS - Bug #64717: MDS stuck in replay/resolve use
  CephFS - Bug #64729: "mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report slow metadata IOs" in cluster log
  CephFS - Bug #64730: fs/misc/multiple_rsync.sh workunit times out
  CephFS - Bug #64746: qa/cephfs: add MON_DOWN and `deprecated feature inline_data' to health ignorelist
  CephFS - Bug #64747: postgresql pkg install failure
  CephFS - Bug #64751: cephfs-mirror coredumped when acquiring a pthread mutex
  CephFS - Bug #64752: cephfs-mirror: valgrind reports leaks
  CephFS - Bug #64761: cephfs-mirror: add throttling to mirror daemon ops
  CephFS - Bug #64912: make check: QuiesceDbTest.MultiRankRecovery failed
  CephFS - Bug #64947: qa: fix continued use of log-whitelist
  CephFS - Feature #63663: mds,client: add crash-consistent snapshot support
  CephFS - Feature #63664: mds: add quiesce protocol for halting I/O on subvolumes
  CephFS - Feature #63665: mds: QuiesceDb to manage subvolume quiesce state
  CephFS - Feature #63666: mds: QuiesceAgent to execute quiesce operations on an MDS rank
  CephFS - Feature #63668: pybind/mgr/volumes: add quiesce protocol API
  CephFS - Feature #64506: qa: update fs:upgrade to test from reef/squid to main
  CephFS - Feature #64507: pybind/mgr/snap_schedule: support crash-consistent snapshots
  CephFS - Feature #64531: mds,mgr: identify metadata-heavy workloads
  CephFS - Tasks #63707: mds: AdminSocket command to control the QuiesceDbManager
  CephFS - Tasks #63708: mds: MDS message transport for inter-rank QuiesceDbManager communications
  mgr - Bug #64799: mgr: update cluster state for new maps from the mons before notifying modules
  RADOS - Bug #64968: mon: MON_DOWN warnings when mons are first booting
  RADOS - Bug #64972: qa: "ceph tell 4.3a deep-scrub" command not found (sketch below)
  nvme-of - Fix #64821: cephadm - make changes to ceph-nvmeof.conf template
  nvme-of - Feature #64777: mon: add NVMe-oF gateway monitor and HA
  rgw - Bug #64875: rgw: rgw-restore-bucket-index -- sort uses specified temp dir
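Bug #64972 quotes the failing qa invocation. For comparison, the tell-based form from the title next to the long-documented pg command, using the pgid from the bug title:

    # Form used by the qa suite, reported as "command not found"
    ceph tell 4.3a deep-scrub

    # Documented way to trigger a deep scrub of a placement group
    ceph pg deep-scrub 4.3a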