v12.2.14 (61% complete): 28 issues (17 closed, 11 open)

Related issues:
Bug #45670: luminous: osd: too many store transactions when osd got an incremental osdmap but failed encode full with correct crc again and again
CephFS - Bug #49503: standby-replay mds assert failed when replay
mgr - Bug #49408: osd run into dead loop and tell slow request when rollback snap with using cache tier
RADOS - Bug #45698: PrioritizedQueue: messages in normal queue
RADOS - Bug #47204: ceph osd getting shutdown after joining to cluster
RADOS - Bug #48505: osdmaptool crush
RADOS - Bug #48855: OSD_SUPERBLOCK Checksum failed after node restart
RADOS - Bug #49409: osd run into dead loop and tell slow request when rollback snap with using cache tier
RADOS - Bug #49448: If OSD types are changed, pools rules can become unresolvable without providing health warnings
rgw - Bug #45154: the command "radosgw-admin orphans list-jobs" failed
rgw - Bug #46563: Metadata synchronization failed, "metadata is behind on 1 shards" appear
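
Bugs #49408 and #49409 above both describe OSDs entering a dead loop and reporting slow requests after a snapshot rollback on a cache-tiered pool. The following is only an orientation sketch of that scenario; the pool names (base, cache), object name (obj1), snapshot name (snap1), and input files are placeholders, not details taken from the reports:

    # create a base pool plus a cache pool and attach the cache tier
    ceph osd pool create base 64
    ceph osd pool create cache 64
    ceph osd tier add base cache
    ceph osd tier cache-mode cache writeback
    ceph osd tier set-overlay base cache

    # write an object, snapshot the pool, overwrite the object, then roll back;
    # the rollback is the step the reports associate with the slow requests
    rados -p base put obj1 /etc/hosts
    rados -p base mksnap snap1
    rados -p base put obj1 /etc/services
    rados -p base rollback obj1 snap1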

v13.2.11 (67% complete): 6 issues (4 closed, 2 open)

Related issues:
RADOS - Bug #47626: process will crash by invalidate pointer
rbd - Bug #48999: Data corruption with rbd_balance_parent_reads and rbd_balance_snap_reads set to true.
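
Bug #48999 above reports data corruption when rbd_balance_parent_reads and rbd_balance_snap_reads are both set to true. Both are ordinary librbd client options, so one way to inspect (or switch off) the settings cluster-wide is through the monitor config database; this is only a sketch, and clients configured purely via ceph.conf will not pick it up:

    # check what the config database currently stores for clients
    ceph config get client rbd_balance_parent_reads
    ceph config get client rbd_balance_snap_reads

    # explicitly disable both options for all clients
    ceph config set client rbd_balance_parent_reads false
    ceph config set client rbd_balance_snap_reads false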

v14.2.23 (55% complete): 31 issues (16 closed, 15 open)

Related issues:
Bug #54189: multisite: metadata sync will skip first child of pos_to_prev
Bug #55461: ceph osd crush swap-bucket {old_host} {new_host} where {old_host}={new_host} crashes monitors
Bug #56554: rgw::IAM::s3GetObjectTorrent never take effect
Feature #55166: disable delte bucket from rgw
bluestore - Bug #56467: nautilus: osd crashs with _do_alloc_write failed with (28) No space left on device
ceph-volume - Bug #52340: ceph-volume: lvm activate: "tags" not defined
ceph-volume - Bug #53136: The capacity used by the ceph cache layer pool exceeds target_max_bytes
CephFS - Bug #54421: mds: assert fail in Server::_dir_is_nonempty() because xlocker of filelock is -1
mgr - Bug #51637: mgr/insights: mgr consumes excessive amounts of memory
RADOS - Bug #54548: mon hang when run ceph -s command after execute "ceph osd in osd.<x>" command
RADOS - Bug #54556: Pools are wrongly reported to have non-power-of-two pg_num after update
RADOS - Bug #55424: ceph-mon process exit in dead status, which backtrace displayed has blocked by compact_queue_thread
rbd - Bug #54027: The file system takes a long time to build with iscsi disk of rbd
rgw - Bug #53431: When using radosgw-admin to create a user, when the uid is empty, the error message is unreasonable
rgw - Bug #53668: Why not add a xxx.retry obJ to metadata synchronization at multisite for exception retries
rgw - Bug #53708: ceph multisite sync deleted unversioned object failed
rgw - Bug #53745: crash on null coroutine under RGWDataSyncShardCR::stop_spawned_services
rgw - Bug #54254: when the remove-all parameter of rgw admin operation trim usage interface is set false, the usage is trimmed.
rgw - Bug #55131: radosgw crashes at RGWIndexCompletionManager::create_completion
rgw - Feature #53455: [RFE] Ill-formatted JSON response from RGW
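
Bug #55461 above reports a monitor crash when ceph osd crush swap-bucket is called with the same host as both source and destination. The command itself is standard; the host names below are placeholders, and depending on the release the command may also require --yes-i-really-mean-it:

    # inspect the CRUSH hierarchy to find the host buckets
    ceph osd crush tree

    # intended use: swap two distinct host buckets
    ceph osd crush swap-bucket host1 host2

    # the trigger described in the report: source and destination are the same bucket
    ceph osd crush swap-bucket host1 host1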

v16.2.11 (82% complete): 45 issues (36 closed, 9 open)

Related issues:
Bug #56466: pacific: boost 1.73.0 is incompatible with python 3.10
Bug #57055: The osd_memory_target parameter does not take effect
Bug #57056: The performance of the three mon is very different
ceph-volume - Bug #56538: the function get_first_lv in lvm api is not defined
ceph-volume - Bug #56614: ceph-volume simple scan can not used deivce part
ceph-volume - Bug #56620: Deploy a ceph cluster with cephadm, using ceph-volume lvm create command to create osd can not managed by cephadm
CephFS - Bug #56506: pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_scan.TestDataScan)
CephFS - Bug #56507: pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
CephFS - Bug #57083: ceph-fuse: monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2]
Dashboard - Bug #56062: mgr/dashboard: Update i18n messages from Transifex
Orchestrator - Bug #56508: haproxy check fails for ceph-grafana service
Orchestrator - Bug #56886: ceph orch deamon osd add can not apply to partion of device, but ceph-volume lvm create command cloud.
rgw - Bug #55766: S3 Object Lock not Working
rgw - Bug #56510: bucket index objects still exist after bucket deletion
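
Bug #57055 above reports that osd_memory_target does not take effect. For reference, the usual way to set it through the config database and then confirm what a running OSD actually sees (osd.0 and the 4 GiB value are only examples):

    # set a 4 GiB memory target for all OSDs
    ceph config set osd osd_memory_target 4294967296

    # confirm the value the cluster hands to osd.0 and what the daemon itself reports
    ceph config show osd.0 osd_memory_target
    ceph tell osd.0 config get osd_memory_target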

v17.2.4 (87% complete): 39 issues (34 closed, 5 open)

Related issues:
Bug #57106: ceph 17 fails to build with arrow 9
ceph-volume - Bug #57085: inventory a device get_partitions_facts called many times
Dashboard - Bug #57114: mgr/dashboard: Squash is not mandatory field in "Create NFS export" page
Orchestrator - Bug #57191: [cephadm] os tuned profile setting names are not validated from cephadm side
Orchestrator - Bug #57192: [cephadm] os tuned profile placement hosts are not validated

v18.0.0 Reef (28% complete): 210 issues (58 closed, 152 open)

Related issues:
Bug #55107: Getting "Could NOT find utf8proc (missing: utf8proc_LIB)" error while building from master branch
Bug #55351: ceph-mon crash in handle_forward when add new message type
Bug #56480: std::shared_mutex deadlocks on Windows
Bug #56945: python: upgrade to 3.8 and/or 3.9
Bug #57138: mgr(snap-schedule): may TypeError in rm_schedule
Feature #51537: use git `Prepare Commit Message` hook to add component in commit title
Documentation #55530: teuthology-suite -k option doesn't always override kernel
CephFS - Bug #23724: qa: broad snapshot functionality testing across clients
CephFS - Bug #24894: client: allow overwrites to files with size greater than the max_file_size config
CephFS - Bug #46438: mds: add vxattr for querying inherited layout
CephFS - Bug #48673: High memory usage on standby replay MDS
CephFS - Bug #51267: CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
CephFS - Bug #51278: mds: "FAILED ceph_assert(!segments.empty())"
CephFS - Bug #52982: client: Inode::hold_caps_until should be a time from a monotonic clock
CephFS - Bug #53504: client: infinite loop "got ESTALE" after mds recovery
CephFS - Bug #53573: qa: test new clients against older Ceph clusters
CephFS - Bug #53597: mds: FAILED ceph_assert(dir->get_projected_version() == dir->get_version())
CephFS - Bug #53979: mds: defer prefetching the dirfrags to speed up MDS rejoin
CephFS - Bug #53996: qa: update fs:upgrade tasks to upgrade from pacific instead of octopus, or quincy instead of pacific
CephFS - Bug #54017: Problem with ceph fs snapshot mirror and read-only folders
CephFS - Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
CephFS - Bug #54049: ceph-fuse: If nonroot user runs ceph-fuse mount on then path is not expected to add in /proc/self/mounts and command should return failure
CephFS - Bug #54052: mgr/snap-schedule: scheduled snapshots are not created after ceph-mgr restart
CephFS - Bug #54066: mgr/volumes: uid/gid of the clone is incorrect
CephFS - Bug #54081: mon/MDSMonitor: sanity assert when inline data turned on in MDSMap from v16.2.4 -> v16.2.[567]
CephFS - Bug #54106: kclient: hang during workunit cleanup
CephFS - Bug #54107: kclient: hang during umount
CephFS - Bug #54108: qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
CephFS - Bug #54111: data pool attached to a file system can be attached to another file system
CephFS - Bug #54271: mds/OpenFileTable.cc: 777: FAILED ceph_assert(omap_num_objs == num_objs)
CephFS - Bug #54345: mds: try to reset heartbeat when fetching or committing.
CephFS - Bug #54384: mds: crash due to seemingly unrecoverable metadata error
CephFS - Bug #54459: fs:upgrade fails with "hit max job timeout"
CephFS - Bug #54460: snaptest-multiple-capsnaps.sh test failure
CephFS - Bug #54461: ffsb.sh test failure
CephFS - Bug #54463: mds: flush mdlog if locked and still has wanted caps not satisfied
CephFS - Bug #54501: libcephfs: client needs to update the mtime and change attr when snaps are created and deleted
CephFS - Bug #54557: scrub repair does not clear earlier damage health status
CephFS - Bug #54560: snap_schedule: avoid throwing traceback for bad or missing arguments
CephFS - Bug #54606: check-counter task runs till max job timeout
CephFS - Bug #54625: Issue removing subvolume with retained snapshots - Possible quincy regression?
CephFS - Bug #54701: crash: void Server::set_trace_dist(ceph::ref_t<MClientReply>&, CInode*, CDentry*, MDRequestRef&): assert(dnl->get_inode() == in)
CephFS - Bug #54760: crash: void CDir::try_remove_dentries_for_stray(): assert(dn->get_linkage()->is_null())
CephFS - Bug #54971: Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
CephFS - Bug #54976: mds: Test failure: test_filelock_eviction (tasks.cephfs.test_client_recovery.TestClientRecovery)
CephFS - Bug #55110: mount.ceph: mount helper incorrectly passes `ms_mode' mount option to older kernel
CephFS - Bug #55112: cephfs-shell: saving files doesn't work as expected
CephFS - Bug #55134: ceph pacific fails to perform fs/mirror test
CephFS - Bug #55148: snap_schedule: remove subvolume(-group) interfaces
CephFS - Bug #55165: client: validate pool against pool ids as well as pool names
CephFS - Bug #55170: mds: crash during rejoin (CDir::fetch_keys)
CephFS - Bug #55173: qa: missing dbench binary?
CephFS - Bug #55196: mgr/stats: perf stats command doesn't have filter option for fs names.
CephFS - Bug #55216: cephfs-shell: creates directories in local file system even if file not found
CephFS - Bug #55217: pybind/mgr/volumes: Clone operation hangs
CephFS - Bug #55234: snap_schedule: replace .snap with the client configured snap dir name
CephFS - Bug #55236: qa: fs/snaps tests fails with "hit max job timeout"
CephFS - Bug #55240: mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
CephFS - Bug #55242: cephfs-shell: put command should accept both path mandatorily and validate local_path
CephFS - Bug #55313: Unexpected file access behavior using ceph-fuse
CephFS - Bug #55331: pjd failure (caused by xattr's value not consistent between auth MDS and replicate MDSes)
CephFS - Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
CephFS - Bug #55464: cephfs: mds/client error when client stale reconnect
CephFS - Bug #55516: qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
CephFS - Bug #55537: mds: crash during fs:upgrade test
CephFS - Bug #55538: Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
CephFS - Bug #55583: Intermittent ParsingError failure in mgr/volumes module during "clone cancel"
CephFS - Bug #55620: ceph pacific fails to perform fs/multifs test
CephFS - Bug #55710: cephfs-shell: exit code unset when command has missing argument
CephFS - Bug #55725: MDS allows a (kernel) client to exceed the xattrs key/value limits
CephFS - Bug #55759: mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
CephFS - Bug #55762: mgr/volumes: Handle internal metadata directories under '/volumes' properly.
CephFS - Bug #55778: client: choose auth MDS for getxattr with the Xs caps
CephFS - Bug #55779: fuse client losing connection to mds
CephFS - Bug #55807: qa failure: workload iogen failed
CephFS - Bug #55822: mgr/volumes: Remove incorrect 'size' in the output of 'snapshot info' command
CephFS - Bug #55824: ceph-fuse[88614]: ceph mount failed with (65536) Unknown error 65536
CephFS - Bug #55842: Upgrading to 16.2.9 with 9M strays files causes MDS OOM
CephFS - Bug #55858: Pacific 16.2.7 MDS constantly crashing
CephFS - Bug #55861: Test failure: test_client_metrics_and_metadata (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
CephFS - Bug #55897: test_nfs: update of export's access type should not trigger NFS service restart
CephFS - Bug #55971: LibRadosMiscConnectFailure.ConnectFailure test failure
CephFS - Bug #55980: mds,qa: some balancer debug messages (<=5) not printed when debug_mds is >=5
CephFS - Bug #56003: client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
CephFS - Bug #56010: xfstests-dev generic/444 test failed
CephFS - Bug #56011: fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
CephFS - Bug #56012: mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
CephFS - Bug #56063: Snapshot retention config lost after mgr restart
CephFS - Bug #56067: Cephfs data loss with root_squash enabled
CephFS - Bug #56116: mds: handle deferred client request core when mds reboot
CephFS - Bug #56249: crash: int Client::_do_remount(bool): abort
CephFS - Bug #56269: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient(self)
CephFS - Bug #56270: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient(self)
CephFS - Bug #56282: crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state() == LOCK_PRE_SCAN)
CephFS - Bug #56384: ceph/test.sh: check_response erasure-code didn't find erasure-code in output
CephFS - Bug #56446: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
CephFS - Bug #56529: ceph-fs crashes on getfattr
CephFS - Bug #56537: cephfs-top: wrong/infinitely changing wsp values
CephFS - Bug #56577: mds: client request may complete without queueing next replay request
CephFS - Bug #56626: "ceph fs volume create" fails with error ERANGE
CephFS - Bug #56632: qa: test_subvolume_snapshot_clone_quota_exceeded fails CommandFailedError
CephFS - Bug #56633: mds: crash during construction of internal request
CephFS - Bug #56644: qa: test_rapid_creation fails with "No space left on device"
CephFS - Bug #56666: mds: standby-replay daemon always removed in MDSMonitor::prepare_beacon
CephFS - Bug #56694: qa: avoid blocking forever on hung umount
CephFS - Bug #56697: qa: fs/snaps fails for fuse
CephFS - Bug #56698: client: FAILED ceph_assert(_size == 0)
CephFS - Bug #56808: crash: LogSegment* MDLog::get_current_segment(): assert(!segments.empty())
CephFS - Bug #56830: crash: cephfs::mirror::PeerReplayer::pick_directory()
CephFS - Bug #57014: cephfs-top: add an option to dump the computed values to stdout
CephFS - Bug #57044: mds: add some debug logs for "crash during construction of internal request"
CephFS - Bug #57048: osdc/Journaler: better handle ENOENT during replay as up:standby-replay
CephFS - Bug #57071: mds: consider mds_cap_revoke_eviction_timeout for get_late_revoking_clients()
CephFS - Bug #57126: client: abort the client daemons when we couldn't invalidate the dentry caches from kernel
CephFS - Bug #57204: MDLog.h: 99: FAILED ceph_assert(!segments.empty())
CephFS - Bug #57205: Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
CephFS - Bug #57206: ceph_test_libcephfs_reclaim crashes during test
CephFS - Fix #51177: pybind/mgr/volumes: investigate moving calls which may block on libcephfs into another thread
CephFS - Fix #54317: qa: add testing in fs:workload for different kinds of subvolumes
CephFS - Feature #41824: mds: aggregate subtree authorities for display in `fs top`
CephFS - Feature #50150: qa: begin grepping kernel logs for kclient warnings/failures to fail a test
CephFS - Feature #54237: pybind/cephfs: Add mapping for Ernno 13:Permission Denied and adding path (in msg) while raising exception from opendir() in cephfs.pyx
CephFS - Feature #54374: mgr/snap_schedule: include timezone information in scheduled snapshots
CephFS - Feature #54472: mgr/volumes: allow users to add metadata (key-value pairs) to subvolumes
CephFS - Feature #55041: mgr/volumes: display in-progress clones for a snapshot
CephFS - Feature #55121: cephfs-top: new options to limit and order-by
CephFS - Feature #55214: mds: add asok/tell command to clear stale omap entries
CephFS - Feature #55215: mds: fragment directory snapshots
CephFS - Feature #55401: mgr/volumes: allow users to add metadata (key-value pairs) for subvolume snapshot
CephFS - Feature #55414: mds: asok interface to cleanup permanently damaged inodes
CephFS - Feature #55463: cephfs-top: allow users to chose sorting order
CephFS - Feature #55470: qa: postgresql test suite workunit
CephFS - Feature #55715: pybind/mgr/cephadm/upgrade: allow upgrades without reducing max_mds
CephFS - Feature #55821: pybind/mgr/volumes: interface to check the presence of subvolumegroups/subvolumes.
CephFS - Feature #55940: quota: accept values in human readable format as well
CephFS - Feature #56058: mds/MDBalancer: add an arg to limit depth when dump loads for dirfrags
CephFS - Feature #56140: cephfs: tooling to identify inode (metadata) corruption
CephFS - Feature #56442: mds: build asok command to dump stray files and associated caps
CephFS - Feature #56489: qa: test mgr plugins with standby mgr failover
CephFS - Feature #57090: MDSMonitor,mds: add MDSMap flag to prevent clients from connecting
CephFS - Feature #57091: mds: modify scrub to catch dentry corruption
CephFS - Cleanup #54362: client: do not release the global snaprealm until unmounting
CephFS - Documentation #54551: docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-mds cannot work
CephFS - Documentation #56730: doc: update snap-schedule notes regarding 'start' time
cephsqlite - Bug #55304: libcephsqlite: crash when compiled with gcc12 cause of regex treating '-' as a range operator
cephsqlite - Bug #56274: crash: pthread_mutex_lock()
cephsqlite - Documentation #57127: doc: add debugging documentation
Linux kernel client - Bug #54067: fs/maxentries.sh test fails with "2022-01-21T12:47:05.490 DEBUG:teuthology.orchestra.run:got remote process result: 124"
Linux kernel client - Bug #55258: lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
Linux kernel client - Bug #55377: kclient: mds revoke Fwb caps stuck after the kclient tries writebcak once
mgr - Bug #53951: cluster [ERR] Health check failed: Module 'feedback' has failed: Not found or unloadable (MGR_MODULE_ERROR)" in cluster log
mgr - Bug #53986: mgr/prometheus: The size of the export is not tracked as a metric returned to Prometheus
mgr - Bug #55029: mgr/prometheus: ceph_mon_metadata is not consistently populating the ceph_version
mgr - Bug #56671: zabbix module does not process some config options correctly
mgr - Bug #56672: 'ceph zabbix send' can block (mon) ceph commands and messages
Dashboard - Bug #53950: mgr/dashboard: Health check failed: Module 'feedback' has failed: Not found or unloadable (MGR_MODULE_ERROR)" in cluster log
Dashboard - Bug #55133: mgr/dashboard: Error message of /api/grafana/validation is not helpful
Dashboard - Bug #55578: mgr/dashboard: Creating and editing Prometheus AlertManager silences is buggy
Dashboard - Bug #55604: mgr/dashboard: form field validation icons overlap with other icons
Dashboard - Bug #55837: mgr/dashboard: After several days of not being used, Dashboard HTTPS website hangs during loading, with no errors
Dashboard - Bug #57005: mgr/dashboard: Cross site scripting in Angular <11.0.5 (CVE-2021-4231)
Dashboard - Feature #55520: mgr/dashboard: Add `location` field to [ POST /api/host ]
Dashboard - Cleanup #54991: mgr/dashboard: don't log HTTP 3xx as errors
Orchestrator - Bug #54026: the sort sequence used by 'orch ps' is not in a natural sequence
Orchestrator - Bug #54028: alertmanager clustering is not configured consistently
Orchestrator - Bug #54311: cephadm/monitoring: monitoring stack versions are too old
Orchestrator - Bug #55595: cephadm: prometheus: The generatorURL in alerts is only using hostname
Orchestrator - Bug #55638: alertmanager webhook urls may lead to 404
Orchestrator - Bug #55673: mgr/cephadm: Deploying a cluster with the Vagrantfile fails
Orchestrator - Bug #56000: task/test_nfs: ERROR: Daemon not found: mds.a.smithi060.ujwxef. See `cephadm ls`
Orchestrator - Bug #56024: cephadm: removes ceph.conf during qa run causing command failure
Orchestrator - Bug #56667: cephadm install fails: apt:stderr E: Unable to locate package cephadm
Orchestrator - Bug #56696: admin keyring disappears during qa run
Orchestrator - Feature #54308: monitoring/prometheus: mgr/cephadm should support a data retention spec for prometheus data
Orchestrator - Feature #54309: cephadm/monitoring: Update cephadm web endpoint to provide scrape configuration information to Prometheus
Orchestrator - Feature #54310: cephadm: allow services to have dependencies on rbd
Orchestrator - Feature #54391: orch/cephadm: upgrade status output could be improved to make progress more transparent
Orchestrator - Feature #54392: orch/cephadm: Add a 'history' subcommand to the orch upgrade command
Orchestrator - Feature #55489: cephadm: Improve gather facts to tolerate mpath device configurations
Orchestrator - Feature #55551: device ls-lights should include the host where the devices are
Orchestrator - Feature #55576: [RFE] Add a rescan subcommand to the orch device command
Orchestrator - Feature #55777: Add server serial number information to cephadm gather-facts subcommand
Orchestrator - Feature #56178: [RFE] add a --force or --yes-i-really-mean-it to ceph orch upgrade
Orchestrator - Feature #56179: [RFE] Our prometheus instance should scrape itself
RADOS - Bug #44092: mon: config commands do not accept whitespace style config name
RADOS - Bug #52513: BlueStore.cc: 12391: ceph_abort_msg("unexpected error") on operation 15
RADOS - Bug #53729: ceph-osd takes all memory before oom on boot
RADOS - Bug #55905: Failed to build rados.cpython-310-x86_64-linux-gnu.so
RADOS - Bug #57152: segfault in librados via libcephsqlite
RADOS - Feature #54580: common/options: add FLAG_SECURE to Ceph options
rbd - Bug #57066: rbd snap list not change the last read when more than 64 group snaps
rgw - Bug #50974: rgw: storage class: GLACIER lifecycle don't worked when STANDARD pool and GLACIER pool are equal
rgw - Bug #55476: rgw: remove entries from bucket index shards directly in limited cases
rgw - Bug #55477: Gloal Ratelilmit is overriding the per user ratelimit
rgw - Bug #55546: rgw: trigger dynamic reshard on index entry count rather than object count
rgw - Bug #55547: rgw: figure out what to do with "--check-objects" option to radosgw-admin
rgw - Bug #55618: RGWRados::check_disk_state no checking object's storage_class
rgw - Bug #55619: rgw: input args poolid and epoch of fun RGWRados::Bucket::UpdateIndex::complete_del shold belong to index_pool
rgw - Bug #55655: rgw: clean up linking targets to radosgw-admin
rgw - Bug #55904: RGWRados::check_disk_state no checking object's appendable attr
rgw - Bug #56536: cls_rgw: nonexists object shoud not be accounted when check_index
rgw - Bug #56673: rgw: 'bucket check' deletes index of multipart meta when its pending_map is noempty
rgw - Fix #54174: rgw dbstore test env init wrong
rgw - Feature #51017: rgw: beast: lack of 302 http -> https redirects
rgw - Feature #54476: rgw: allow S3 delete-marker behavior to be restored via config
rgw - Feature #55769: rgw: allow `radosgw-admin bucket stats` report more accurately
cleanup - Tasks #57172: Yield Context Threading
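
Feature #55940 above asks for CephFS quotas to accept human-readable sizes. Today the quota xattrs only take a plain byte count, so the workflow looks like the sketch below (the mount point and directory are placeholders):

    # current behaviour: the value must be an integer number of bytes (100 GiB here)
    setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/projects
    getfattr -n ceph.quota.max_bytes /mnt/cephfs/projects

    # what the feature requests (not valid until it is implemented):
    # setfattr -n ceph.quota.max_bytes -v 100G /mnt/cephfs/projects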