v19.0.0 (Squid dev)
Due in about 3 months (03/01/2024). 29% done: 406 issues (107 closed — 299 open).
Related issues:
Bug #58303: active mgr crashes with segfault when running 'ceph osd purge'
Bug #58975: mon: do not erroneously propose on error in ::prepare_update
Bug #59624: pybind/ceph_argparse: Error message is not descriptive for ceph tell command
Bug #61400: valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
Bug #62047: delete image fails on dashboard while it has some protected snapshots
Bug #62203: gcc-12: FTBFS: random_shuffle is deprecated: use 'std::shuffle' instead (see the C++ sketch after this list)
Bug #62428: cmake: rebuild picks up newer python when originally built with older one
Bug #62627: global: core fatal signal handler may use signal-unsafe functions
Bug #62778: cmake: BuildFIO.cmake should not introduce -std=gnu++17
Bug #63218: cmake: dependency ordering error for liburing and librocksdb
Bug #63288: MClientRequest: properly handle ceph_mds_request_head_legacy for ext_num_retry, ext_num_fwd, owner_uid, owner_gid
Bug #63494: all: daemonizing may release CephContext::_fork_watchers_lock when it's already unlocked
Bug #63557: NVMe-oF gateway prometheus endpoints
Feature #63344: Set and manage nvmeof gw controller id ranges
Feature #63574: support setting quota in the format {K|M}iB along with K|M and {K|M}i (see the size-parsing sketch after this list)
Documentation #63354: Add all mon_cluster_log_* configs to docs
bluestore - Bug #61466: Add bluefs write op count metrics
bluestore - Bug #63353: resharding RocksDB after upgrade to Pacific breaks OSDs
bluestore - Bug #63436: Typo in reshard example
ceph-volume - Bug #58569: Add the ability to configure options for ceph-volume to pass to cryptsetup
ceph-volume - Bug #58591: reports "Insufficient space (<5GB)" even when disk size is sufficient
ceph-volume - Bug #58812: ceph-volume prepare doesn't use partitions as-is anymore
ceph-volume - Bug #58943: ceph-volume's deactivate doesn't close encrypted volumes
ceph-volume - Bug #62002: raw list shouldn't list lvm OSDs
ceph-volume - Bug #62320: lvm list should also filter on vg name
CephFS - Bug #23723: qa: incorporate smallfile workload
CephFS - Bug #24403: mon failed to return metadata for mds
CephFS - Bug #43393: qa: add support/qa for cephfs-shell on CentOS 9 / RHEL 9
CephFS - Bug #44565: src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
CephFS - Bug #44916: client: syncfs flush is only fast with a single MDS
CephFS - Bug #48673: High memory usage on standby replay MDS
CephFS - Bug #48678: client: spins on tick interval
CephFS - Bug #51824: pacific scrub ~mds_dir causes stray related ceph_assert, abort and OOM
CephFS - Bug #52280: MDS crashes and fails with assert on prepare_new_inode
CephFS - Bug #52581: Dangling fs snapshots on data pool after change of directory layout
CephFS - Bug #53724: mds: stray directories are not purged when all past parents are clear
CephFS - Bug #54557: scrub repair does not clear earlier damage health status
CephFS - Bug #54741: crash: MDSTableClient::got_journaled_ack(unsigned long)
CephFS - Bug #54833: crash: void Locker::handle_file_lock(ScatterLock*, ceph::cref_t<MLock>&): assert(lock->get_state() == LOCK_LOCK || lock->get_state() == LOCK_MIX || lock->get_state() == LOCK_MIX_SYNC2)
CephFS - Bug #55165: client: validate pool against pool ids as well as pool names
CephFS - Bug #55446: mgr-nfs-upgrade and mds_upgrade_sequence tests fail on 'ceph versions | jq -e' command
CephFS - Bug #55464: cephfs: mds/client error on client stale reconnect
CephFS - Bug #56011: fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
CephFS - Bug #56067: CephFS data loss with root_squash enabled
CephFS - Bug #56397: client: `df` will show incorrect disk size if the quota size is not aligned to 4MB
CephFS - Bug #56577: mds: client request may complete without queueing next replay request
CephFS - Bug #56695: [RHEL stock] pjd test failures (a bug that needs to wait for the unlink to finish)
CephFS - Bug #57071: mds: consider mds_cap_revoke_eviction_timeout for get_late_revoking_clients()
CephFS - Bug #57087: qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
CephFS - Bug #57154: kernel/fuse client using ceph ID with uid-restricted MDS caps cannot update caps
CephFS - Bug #57206: ceph_test_libcephfs_reclaim crashes during test
CephFS - Bug #57244: [WRN]: client.408214273 isn't responding to mclientcaps(revoke), ino 0x10000000003 pending pAsLsXsFs issued pAsLsXsFs, sent 62.303702 seconds ago
CephFS - Bug #57641: CephFS fscrypt clones missing fscrypt metadata
CephFS - Bug #57655: qa: fs:mixed-clients kernel_untar_build failure
CephFS - Bug #57682: client: ERROR: test_reconnect_after_blocklisted
CephFS - Bug #57985: mds: warning `clients failing to advance oldest client/flush tid` seen with some workloads
CephFS - Bug #58195: mgr/snap_schedule: catch all exceptions to avoid crashing module
CephFS - Bug #58228: mgr/nfs: disallow non-existent paths when creating export
CephFS - Bug #58244: Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
CephFS - Bug #58340: mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegrate straydn(rename))
CephFS - Bug #58394: nofail option in fstab not supported
CephFS - Bug #58411: mds: a few simple operations crash mds
CephFS - Bug #58482: mds: catch damage to CDentry's first member before persisting
CephFS - Bug #58619: mds: client evict [-h|--help] evicts ALL clients
CephFS - Bug #58645: Unclear error when creating new subvolume when subvolumegroup has ceph.dir.subvolume attribute set to 1
CephFS - Bug #58677: cephfs-top: test that the current python version is supported
CephFS - Bug #58878: mds: FAILED ceph_assert(trim_to > trimming_pos)
CephFS - Bug #58938: qa: xfstests-dev's generic test suite has 7 failures with kclient
CephFS - Bug #58945: qa: xfstests-dev's generic test suite has 20 failures with fuse client
CephFS - Bug #58962: ftruncate fails with EACCES on a read-only file created with write permissions
CephFS - Bug #58971: mon/MDSMonitor: do not trigger propose on error from prepare_update
CephFS - Bug #59067: mds: add cap acquisition throttled event to MDR
CephFS - Bug #59107: MDS imported_inodes metric is not updated
CephFS - Bug #59119: mds: segmentation fault during replay of snaptable updates
CephFS - Bug #59134: mds: deadlock during unlink with multimds (postgres)
CephFS - Bug #59163: mds: stuck in up:rejoin when it cannot "open" missing directory inode
CephFS - Bug #59169: Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
CephFS - Bug #59183: cephfs-data-scan: does not scan_links for lost+found
CephFS - Bug #59185: MDSMonitor: should batch propose osdmap/mdsmap changes via some fs commands
CephFS - Bug #59188: cephfs-top: cephfs-top -d <seconds> not working as expected
CephFS - Bug #59230: Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
CephFS - Bug #59297: qa: test_join_fs_unset failure
CephFS - Bug #59301: pacific (?): test_full_fsync: RuntimeError: Timed out waiting for MDS daemons to become healthy
CephFS - Bug #59314: mon/MDSMonitor: plug PAXOS when evicting an MDS
CephFS - Bug #59318: mon/MDSMonitor: daemon booting may fail if mon handles up:boot beacon twice
CephFS - Bug #59332: qa: test_rebuild_simple checks status on wrong file system
CephFS - Bug #59343: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
CephFS - Bug #59344: qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
CephFS - Bug #59350: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
CephFS - Bug #59394: ACLs not fully supported
CephFS - Bug #59413: cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
CephFS - Bug #59425: qa: RuntimeError: more than one file system available
CephFS - Bug #59463: mgr/nfs: Setting NFS export config using -i option is not working
CephFS - Bug #59514: client: read wild pointer when reconnecting to mds
CephFS - Bug #59527: qa: run scrub post disaster recovery procedure
CephFS - Bug #59530: mgr-nfs-upgrade: mds.foofs has 0/2
CephFS - Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output error)"
CephFS - Bug #59551: mgr/stats: exception ValueError: invalid literal for int() with base 16: '0x'
CephFS - Bug #59552: mon: block osd pool mksnap for fs pools
CephFS - Bug #59553: cephfs-top: fix help text for delay
CephFS - Bug #59569: mds: allow entries to be removed from lost+found directory
CephFS - Bug #59582: snap-schedule: allow retention spec to specify max number of snaps to retain
CephFS - Bug #59657: qa: test with postgres failed (deadlock between link and migrate straydn(rename))
CephFS - Bug #59682: CephFS: Debian cephfs-mirror package in the Ceph repo doesn't install the unit file or man page
CephFS - Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
CephFS - Bug #59688: mds: idempotence issue in client request
CephFS - Bug #59691: mon/MDSMonitor: may lookup non-existent fs in current MDSMap
CephFS - Bug #59705: client: only wait for write MDS OPs when unmounting
CephFS - Bug #59716: tools/cephfs/first-damage: unicode decode errors break iteration
CephFS - Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inodeno_t; C = std::map]: assert(p->first <= start)
CephFS - Bug #61148: dbench test results in call trace in dmesg
CephFS - Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finishes times out
CephFS - Bug #61186: mgr/nfs: hitting incomplete command returns same suggestion twice
CephFS - Bug #61201: qa: test_rebuild_moved_file (tasks/data-scan) fails because mds crashes in pacific
CephFS - Bug #61279: qa: test_dirfrag_limit (tasks.cephfs.test_strays.TestStrays) failed
CephFS - Bug #61357: cephfs-data-scan: parallelize cleanup step
CephFS - Bug #61399: qa: build failure for ior
CephFS - Bug #61407: mds: abort on CInode::verify_dirfrags
CephFS - Bug #61409: qa: _test_stale_caps does not wait for file flush before stat
CephFS - Bug #61444: mds: session ls command appears twice in command listing
CephFS - Bug #61459: mds: session in the importing state cannot be cleared if an export subtree task is interrupted while the state of the importer is acking
CephFS - Bug #61523: client: do not send metrics until the MDS rank is ready
CephFS - Bug #61574: qa: build failure for mdtest project
CephFS - Bug #61627: MDS crashes and fails with assert on prepare_new_inode
CephFS - Bug #61666: cephfs: print better error message when MDS caps perms are not right
CephFS - Bug #61749: mds/MDSRank: op_tracker of mds always has slow ops
CephFS - Bug #61753: Better help message for cephfs-journal-tool --help command for --rank option
CephFS - Bug #61764: qa: test_join_fs_vanilla is racy
CephFS - Bug #61775: cephfs-mirror: mirror daemon does not shut down (in mirror ha tests)
CephFS - Bug #61782: mds: cap revoke and cap update's seqs mismatched
CephFS - Bug #61790: cephfs client to mds comms remain silent after reconnect
CephFS - Bug #61791: snaptest-git-ceph.sh test timed out (job dead)
CephFS - Bug #61831: qa: test_mirroring_init_failure_with_recovery failure
CephFS - Bug #61864: mds: replay thread does not update some essential perf counters
CephFS - Bug #61867: mgr/volumes: async threads should periodically check for work
CephFS - Bug #61869: pybind/cephfs: holds GIL during rmdir
CephFS - Bug #61879: mds: linkmerge assert check is incorrect in rename codepath
CephFS - Bug #61897: qa: rados:mgr fails with MDS_CLIENTS_LAGGY
CephFS - Bug #61945: LibCephFS.DelegTimeout failure
CephFS - Bug #61947: mds: enforce a limit on the size of a session in the sessionmap
CephFS - Bug #61950: mds/OpenFileTable: match MAX_ITEMS_PER_OBJ does not honor osd_deep_scrub_large_omap_object_key_threshold
CephFS - Bug #61957: test_client_limits.TestClientLimits.test_client_release_bug fails
CephFS - Bug #61958: mds: add debug logs for handling setxattr for ceph.dir.subvolume
CephFS - Bug #61967: mds: "SimpleLock.h: 417: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())"
CephFS - Bug #61972: cephfs/tools: cephfs-data-scan "cleanup" operation is not parallelised
CephFS - Bug #61978: cephfs-mirror: support fan-out setups
CephFS - Bug #61982: Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
CephFS - Bug #62021: mds: unnecessary second lock on snaplock
CephFS - Bug #62036: src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
CephFS - Bug #62052: mds: deadlock when getattr changes inode lockset
CephFS - Bug #62057: mds: add TrackedOp event for batching getattr/lookup
CephFS - Bug #62058: mds: inode snaplock only acquired for open in create codepath
CephFS - Bug #62067: ffsb.sh failure "Resource temporarily unavailable"
CephFS - Bug #62077: mgr/nfs: validate path when modifying cephfs export
CephFS - Bug #62084: task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
CephFS - Bug #62096: mds: infinite rename recursion on itself
CephFS - Bug #62114: mds: adjust cap acquisition throttle defaults
CephFS - Bug #62123: mds: detect out-of-order locking
CephFS - Bug #62126: test failure: suites/blogbench.sh stops running
CephFS - Bug #62146: qa: adjust fs:upgrade to use centos_8 yaml
CephFS - Bug #62158: mds: quick suspend or abort metadata migration
CephFS - Bug #62160: mds: MDS abort because newly corrupt dentry to be committed
CephFS - Bug #62164: qa: "cluster [ERR] MDS abort because newly corrupt dentry to be committed: [dentry #0x1/a [fffffffffffffff6,head] auth (dversion lock) v=13..."
CephFS - Bug #62187: iozone: command not found
CephFS - Bug #62208: mds: use MDSRank::abort to ceph_abort so necessary sync is done
CephFS - Bug #62217: ceph_fs.h: add separate owner_{u,g}id fields
CephFS - Bug #62221: Test failure: test_add_ancestor_and_child_directory (tasks.cephfs.test_mirroring.TestMirroring)
CephFS - Bug #62236: qa: run nfs related tests with fs suite
CephFS - Bug #62257: mds: blocklist clients that are not advancing `oldest_client_tid`
CephFS - Bug #62265: cephfs-mirror: use monotonic clocks in cephfs mirror daemon
CephFS - Bug #62277: Error: Unable to find a match: python2 with fscrypt tests
CephFS - Bug #62326: pybind/mgr/cephadm: stop disabling fsmap sanity checks during upgrade
CephFS - Bug #62344: tools/cephfs_mirror: mirror daemon logs report initialisation failure for an fs already deleted post test case execution
CephFS - Bug #62355: cephfs-mirror: do not run concurrent C_RestartMirroring context
CephFS - Bug #62357: tools/cephfs_mirror: only perform actions if init succeeds
CephFS - Bug #62381: mds: bug still exists: FAILED ceph_assert(dir->get_projected_version() == dir->get_version())
CephFS - Bug #62482: qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)"
CephFS - Bug #62484: qa: ffsb.sh test failure
CephFS - Bug #62485: quincy (?): pybind/mgr/volumes: subvolume rm timeout
CephFS - Bug #62494: Lack of consistency in time format
CephFS - Bug #62501: pacific (?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snapshot_rm.sh)
CephFS - Bug #62508: qa: "Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
CephFS - Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
CephFS - Bug #62511: src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)
CephFS - Bug #62537: cephfs scrub command will crash the standby-replay MDSs
CephFS - Bug #62567: postgres workunit times out - MDS_SLOW_REQUEST in logs
CephFS - Bug #62577: mds: log a message when exiting due to asok "exit" command
CephFS - Bug #62579: client: evicted warning because client completes unmount before thrashed MDS comes back
CephFS - Bug #62626: mgr/nfs: include pseudo in JSON output when nfs export apply -i fails
CephFS - Bug #62648: pybind/mgr/volumes: volume rm freezes waiting for async job on fs to complete
CephFS - Bug #62653: qa: unimplemented fcntl command: 1036 with fsstress
CephFS - Bug #62658: error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
CephFS - Bug #62663: MDS: inode nlink value is -1 causing MDS to continuously crash
CephFS - Bug #62673: cephfs subvolume resize does not accept 'unit' (see the size-parsing sketch after this list)
CephFS - Bug #62682: mon: no mdsmap broadcast after "fs set joinable" is set to true
CephFS - Bug #62700: postgres workunit failed with "PQputline failed"
CephFS - Bug #62702: MDS slow requests for the internal 'rename' requests
CephFS - Bug #62720: mds: identify selinux relabelling and generate health warning
CephFS - Bug #62739: cephfs-shell: remove distutils Version classes because they're deprecated
CephFS - Bug #62763: qa: use stdin-killer for ceph-fuse mounts
CephFS - Bug #62764: qa: use stdin-killer for kclient mounts
CephFS - Bug #62793: client: setfattr -x ceph.dir.pin: No such attribute
CephFS - Bug #62847: mds: blogbench requests stuck (5mds+scrub+snaps-flush)
CephFS - Bug #62848: qa: fail_fs upgrade scenario hanging
CephFS - Bug #62861: mds: _submit_entry ELid(0) crashed the MDS
CephFS - Bug #62863: Slowness or deadlock in ceph-fuse causes teuthology job to hang and fail
CephFS - Bug #62870: test_nfs task fails due to no orch backend set
CephFS - Bug #62873: qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
CephFS - Bug #62936: Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
CephFS - Bug #62953: qa: fs:upgrade needs updating to upgrade only from N-2, N-1 releases (i.e. reef/quincy)
CephFS - Bug #62962: mds: standby-replay daemon crashes on replay
CephFS - Bug #62968: mgr/volumes: fix `subvolume group rm` command error message
CephFS - Bug #62979: client: queue a delayed cap flush if there are dirty caps/snapcaps
CephFS - Bug #63089: qa: tasks/mirror times out
CephFS - Bug #63093: mds: `dump dir` command should indicate that a dir is not cached
CephFS - Bug #63103: mds: disable delegating inode ranges to clients
CephFS - Bug #63104: qa: add libcephfs tests for async calls
CephFS - Bug #63120: mgr/nfs: support providing export ID while creating exports using 'nfs export create cephfs'
CephFS - Bug #63132: qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FULL warning
CephFS - Bug #63141: qa/cephfs: test_idem_unaffected_root_squash fails
CephFS - Bug #63154: fs rename must require FS to be offline and refuse_client_session to be set
CephFS - Bug #63166: mon/MDSMonitor: metadata not loaded from PAXOS on update
CephFS - Bug #63176: qa: make rank_asok() capable of handling errors from asok commands
CephFS - Bug #63188: client: crash during upgrade from octopus to quincy (or from pacific to reef)
CephFS - Bug #63212: qa: failed to download ior.tbz2
CephFS - Bug #63233: mon|client|mds: valgrind reports possible leaks in the MDS
CephFS - Bug #63259: mds: failed to store backtrace and force file system read-only
CephFS - Bug #63281: src/mds/MDLog.h: 100: FAILED ceph_assert(!segments.empty())
CephFS - Bug #63301: cephfs-data-scan: may remove newest primary link of inode
CephFS - Bug #63364: MDS_CLIENT_OLDEST_TID: 15 clients failing to advance oldest client/flush tid
CephFS - Bug #63411: qa: flush journal may cause timeouts of `scrub status`
CephFS - Bug #63471: client: error code inconsistency when accessing a mount of a deleted dir
CephFS - Bug #63473: fsstress.sh fails with errno 124
CephFS - Bug #63482: qa: fs/nfs suite needs debug mds/client
CephFS - Bug #63488: smoke test fails from "NameError: name 'DEBUGFS_META_DIR' is not defined"
CephFS - Bug #63514: mds: avoid sending inode/stray counters as part of health warning for standby-replay
CephFS - Bug #63516: mds may try new batch head that is killed
CephFS - Bug #63519: ceph-fuse: reef ceph-fuse crashes with main branch ceph-mds
CephFS - Bug #63538: mds: src/mds/Locker.cc: 2357: FAILED ceph_assert(!cap->is_new())
CephFS - Bug #63614: cephfs-mirror: the peer list/snapshot mirror status always displays only one mon host instead of all
CephFS - Bug #63619: client: check for negative value of iovcnt before passing it to internal functions during async I/O
CephFS - Bug #63629: client: handle context completion during async I/O call when the client is not mounting
CephFS - Bug #63632: client: fh obtained using O_PATH can stall the caller during async I/O
CephFS - Bug #63633: client: handle nullptr context in async I/O API
CephFS - Bug #63646: mds: incorrectly issued the Fc caps in LOCK_EXCL_XSYN state for filelock
CephFS - Bug #63648: client: ensure callback is finished if write fails during async I/O
CephFS - Bug #63679: client: handle zero byte sync/async write cases
CephFS - Bug #63680: qa/cephfs: improvements for name generators in test_volumes.py
CephFS - Bug #63685: mds: FAILED ceph_assert(_head.empty())
CephFS - Bug #63697: client: zero byte sync write fails
CephFS - Bug #63699: qa: failed cephfs-shell test_reading_conf
CephFS - Bug #63700: qa: test_cd_with_args failure
CephFS - Bug #63710: client.5394 isn't responding to mclientcaps(revoke), ino 0x10000000001 pending pAsLsXs issued pAsLsXsFs, sent 30723.964282 seconds ago
CephFS - Bug #63713: mds: encode `bal_rank_mask` with a higher version
CephFS - Bug #63722: cephfs/fuse: renameat2 with flags has wrong semantics
CephFS - Bug #63726: cephfs-shell: support bootstrapping via monitor address
CephFS - Bug #63734: client: handle callback when async I/O fails
CephFS - Fix #51177: pybind/mgr/volumes: investigate moving calls which may block on libcephfs into another thread
CephFS - Fix #58023: mds: do not evict clients if OSDs are laggy
CephFS - Fix #59667: qa: ignore cluster warning encountered in test_refuse_client_session_on_reconnect
CephFS - Fix #61378: mds: turn off MDS balancer by default
CephFS - Fix #62712: pybind/mgr/volumes: implement EAGAIN logic for clearing request queue when under load
CephFS - Fix #63432: qa: run TestSnapshots.test_kill_mdstable for all mount types
CephFS - Feature #7320: qa: thrash directory fragmentation
CephFS - Feature #10679: Add support for the chattr +i command (immutable file)
CephFS - Feature #18154: qa: enable mds thrash exports tests
CephFS - Feature #41824: mds: aggregate subtree authorities for display in `fs top`
CephFS - Feature #44279: client: provide asok commands to getattr an inode with desired caps
CephFS - Feature #45021: client: new asok commands for diagnosing cap handling issues
CephFS - Feature #48509: mds: dmClock based subvolume QoS scheduler
CephFS - Feature #48704: mds: recall caps proportional to the number issued
CephFS - Feature #55214: mds: add asok/tell command to clear stale omap entries
CephFS - Feature #55414: mds: asok interface to clean up permanently damaged inodes
CephFS - Feature #55940: quota: accept values in human readable format as well (see the size-parsing sketch after this list)
CephFS - Feature #56428: add command "fs deauthorize"
CephFS - Feature #56442: mds: build asok command to dump stray files and associated caps
CephFS - Feature #56489: qa: test mgr plugins with standby mgr failover
CephFS - Feature #57481: mds: enhance scrub to fragment/merge dirfrags
CephFS - Feature #58057: cephfs-top: enhance fstop tests to cover testing displayed data
CephFS - Feature #58129: mon/FSCommands: support swapping file systems by name
CephFS - Feature #58154: mds: add minor segment boundaries
CephFS - Feature #58193: mds: remove stray directory indexes since stray directory can fragment
CephFS - Feature #58488: mds: avoid encoding srnode for each ancestor in an EMetaBlob log event
CephFS - Feature #58550: mds: add perf counter to track (relatively) larger log events
CephFS - Feature #58680: libcephfs: clear the suid/sgid for fallocate
CephFS - Feature #58877: mgr/volumes: regenerate subvolume metadata for possibly broken subvolumes
CephFS - Feature #59388: mds/MDSAuthCaps: "fsname", path, root_squash can't be in same cap with uid and/or gids
CephFS - Feature #59714: mgr/volumes: support rejecting CephFS clones if cloner threads are not available
CephFS - Feature #61334: cephfs-mirror: use snapdiff api for efficient tree traversal
CephFS - Feature #61595: Consider setting "bulk" autoscale pool flag when automatically creating a data pool for CephFS
CephFS - Feature #61599: mon/MDSMonitor: optionally forbid using a standby for another fs as a last resort
CephFS - Feature #61777: mds: add ceph.dir.bal.mask vxattr
CephFS - Feature #61863: mds: issue a health warning with estimated time to complete replay
CephFS - Feature #61866: MDSMonitor: require --yes-i-really-mean-it when failing an MDS with MDS_HEALTH_TRIM or MDS_HEALTH_CACHE_OVERSIZED health warnings
CephFS - Feature #61903: pybind/mgr/volumes: add config to turn off subvolume deletion
CephFS - Feature #61904: pybind/mgr/volumes: add more introspection for clones
CephFS - Feature #61905: pybind/mgr/volumes: add more introspection for recursive unlink threads
CephFS - Feature #61908: mds: provide configuration for trim rate of the journal
CephFS - Feature #62086: mds: print locks when dumping ops
CephFS - Feature #62157: mds: working set size tracker
CephFS - Feature #62207: Report cephfs-nfs service on ceph -s
CephFS - Feature #62668: qa: use teuthology scripts to test dozens of clients
CephFS - Feature #62715: mgr/volumes: switch to storing subvolume metadata in libcephsqlite
CephFS - Feature #62849: mds/FSMap: add field indicating the birth time of the epoch
CephFS - Feature #62856: cephfs: persist an audit log in CephFS
CephFS - Feature #62882: mds: create an admin socket command for raising a signal
CephFS - Feature #62892: mgr/snap_schedule: restore scheduling for subvols and groups
CephFS - Feature #62925: cephfs-journal-tool: Add preventive measures in the tool to avoid corrupting a ceph file system
CephFS - Feature #63191: tools/cephfs: provide an estimated completion time for offline tools
CephFS - Feature #63374: mds: add asok command to kill/respond to request
CephFS - Feature #63468: mds/purgequeue: add l_pq_executed_ops counter
CephFS - Feature #63544: mgr/volumes: bulk delete canceled clones
CephFS - Feature #63663: mds,client: add crash-consistent snapshot support
CephFS - Feature #63664: mds: add quiesce protocol for halting I/O on subvolumes
CephFS - Feature #63665: mds: QuiesceDb to manage subvolume quiesce state
CephFS - Feature #63666: mds: QuiesceAgent to execute quiesce operations on an MDS rank
CephFS - Feature #63667: client,libcephfs,cephfs.pyx: add quiesce protocol
CephFS - Feature #63668: pybind/mgr/volumes: add quiesce protocol API
CephFS - Feature #63670: mds,client: add light-weight quiesce protocol
CephFS - Cleanup #4744: mds: pass around LogSegments via std::shared_ptr
CephFS - Cleanup #61482: mgr/nfs: remove deprecated `nfs export delete` and `nfs cluster delete` interfaces
CephFS - Tasks #62159: qa: evaluate mds_partitioner
CephFS - Tasks #63669: qa: add teuthology tests for quiescing a group of subvolumes
CephFS - Documentation #61375: doc: cephfs-data-scan should discuss multiple data pool support
CephFS - Documentation #61377: doc: add suggested use-cases for random ephemeral pinning
CephFS - Documentation #61865: add doc on how to expedite MDS recovery with a lot of log segments
CephFS - Documentation #61902: Recommend pinning _deleting directory to another rank for certain use-cases
CephFS - Documentation #62605: cephfs-journal-tool: update parts of code that need mandatory --rank
cephsqlite - Bug #55606: [ERR] Unhandled exception from module 'devicehealth' while running on mgr.y: unknown
cephsqlite - Bug #56239: crash: File "mgr/devicehealth/module.py", in get_recent_device_metrics: return self._get_device_metrics(devid, min_sample=min_sample)
cephsqlite - Bug #62492: libcephsqlite: short reads fill 0s at beginning of buffer
crimson - Feature #61417: Zoned Block Devices (ZNS) support
devops - Bug #63196: compilation fails from git main on ppc64le with missing symbols
Linux kernel client - Bug #59684: Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
Linux kernel client - Bug #59735: fs/ceph: cross check passed-in fsid during mount with cluster fsid
Linux kernel client - Bug #61332: dbench test results in call trace in dmesg
Linux kernel client - Bug #62081: tasks/fscrypt-common does not finish, times out
mgr - Bug #58832: ceph-mgr package installation fails on centos 9
mgr - Bug #58924: mgr: block register_client on new MgrMap
mgr - Bug #59580: memory leak (RESTful module, maybe others?)
mgr - Bug #61572: mgr: remove invalid zero performance counter
mgr - Bug #61837: qa: test_progress fails with osds full (?)
mgr - Bug #61874: mgr: DaemonServer::ms_handle_authentication acquires daemon locks
mgr - Bug #61942: The throttle parameter of osd does not take effect for mgr
mgr - Bug #62385: mgr/status: wrong kb_used and kb_avail units in command `ceph osd status` results
mgr - Bug #62641: mgr/(object_format && nfs/export): enhance nfs export update failure response
mgr - Bug #62659: mgr/nfs: report actual errno instead of EIO for single export update failure
mgr - Bug #63195: mgr: remove out&down osd from mgr daemons to avoid warnings
mgr - Bug #63433: devicehealth: sqlite3.IntegrityError: UNIQUE constraint failed: DeviceHealthMetrics.time, DeviceHealthMetrics.devid
mgr - Bug #63615: mgr: consider raising priority of MMgrBeacon
mgr - Feature #62884: audit: create audit module which persists important cluster operations in RADOS
mgr - Cleanup #63294: mgr: enable per-subinterpreter GIL (Python >= 3.12)
mgr - Documentation #52656: mgr/prometheus: wrong unit in RBD latency metric description
Dashboard - Bug #61690: mgr/dashboard: install_deps.sh fails on vanilla CentOS 8 Stream
Dashboard - Bug #61844: mgr/dashboard: dashboard thread abort
Dashboard - Bug #63287: mgr/dashboard: Unable to set max objects under user quota for a user
Dashboard - Bug #63469: mgr/dashboard: fix rgw multi-site import form helper
Dashboard - Bug #63608: mgr/dashboard: cephfs rename only works when fs is offline
Dashboard - Feature #59328: mgr/dashboard: add support for editing RGW zone
Dashboard - Cleanup #58961: mgr/dashboard: remove old dashboard (dashboard v3)
Dashboard - Cleanup #58973: mgr/dashboard: RGW 404 shouldn't trigger log exceptions
Orchestrator - Bug #59529: cluster upgrade stuck with OSDs and MDSs not upgraded
Orchestrator - Bug #63388: mgr: discovery service (port 8765) fails if ms_bind is ipv6-only
Orchestrator - Bug #63561: cephadm: build time install of dependencies fails on build systems that disable network
Orchestrator - Feature #63224: [RFE] Add an alert for swap space usage
RADOS - Bug #48750: ceph config set using osd/host mask not working
RADOS - Bug #58972: mon/OSDMonitor: do not propose on error in prepare_update
RADOS - Bug #58974: mon/MonmapMonitor: do not propose on error in prepare_update
RADOS - Bug #59042: mon/AuthMonitor: do not erroneously propose on error in ::prepare_update
RADOS - Bug #59043: mon/ConfigMonitor: do not erroneously propose on error in ::prepare_update
RADOS - Bug #59044: mon/HealthMonitor: do not erroneously propose on error in ::prepare_update
RADOS - Bug #59045: mon/KVMonitor: do not erroneously propose on error in ::prepare_update
RADOS - Bug #59046: mon/LogMonitor: do not erroneously propose on error in ::prepare_update
RADOS - Bug #59047: mon/MgrStatMonitor: do not erroneously propose on error in ::prepare_update
RADOS - Bug #59813: crash: void PaxosService::propose_pending(): assert(have_pending)
RADOS - Bug #61912: mgr hang when purging OSDs
RADOS - Bug #62076: reef: Test failure: test_grow_shrink (tasks.cephfs.test_failover.TestMultiFilesystems)
RADOS - Bug #62382: mon/MonClient: ms_handle_fast_authentication return value ignored
RADOS - Bug #62578: mon: osd pg-upmap-items command causes PG_DEGRADED warnings
RADOS - Bug #62832: common: config_proxy deadlock during shutdown (and possibly other times)
RADOS - Bug #63520: the usage of osd_pg_stat_report_interval_max is not uniform
RADOS - Bug #63609: osd acquires map_cache_lock with high latency
RADOS - Bug #63658: OSD trim_maps possibly too slow, leading to using too much storage space
RADOS - Bug #63727: LogClient: do not output meaningless logs by default
RADOS - Feature #59727: The libradosstriper interface provides an optional parameter to avoid shared lock when reading data
rgw - Bug #43221: rgw: GET Bucket fails on renamed bucket on archive zone
rgw - Bug #44660: Multipart re-uploads cause orphan data
rgw - Bug #45736: rgw: lack of headers in 304 response
rgw - Bug #61369: [reef] RGW crashes when replication rules are set using PutBucketReplication S3 API
rgw - Bug #61629: rgw: add support for http_date if http_x_amz_date is missing for sigv4
rgw - Bug #62013: Object with null version when using versioning and transition
rgw - Bug #62075: New radosgw-admin commands to clean up leftover OLH index entries and unlinked instance objects
rgw - Bug #62250: retry metadata cache notifications with INVALIDATE_OBJ
rgw - Bug #62681: high virtual memory consumption when dealing with Chunked Upload
rgw - Bug #62737: RadosGW API: incorrect bucket quota in response to HEAD /{bucket}/?usage
rgw - Bug #62875: SignatureDoesNotMatch when extra headers start with 'x-amzn'
rgw - Bug #62938: RGW s3website API prefetches data for range requests
rgw - Bug #63004: CVE-2023-43040 - Improperly verified POST keys
rgw - Bug #63130: cmake: __FORTIFY_SOURCE requires compiling with optimization (-O)
rgw - Bug #63170: rgw: indexless buckets can crash radosgw during sync threads
rgw - Bug #63245: rgw/s3select: crashes in test_progress_expressions in run_s3select_on_csv
rgw - Bug #63360: rgw: rgw-restore-bucket-index does not restore some versioned instance entries
rgw - Bug #63672: qa: nothing provides lua-devel needed by ceph-2:18.0.0-7530.g67eb1cb4.el8.x86_64
rgw - Bug #63740: rgwlc: lock_lambda overwrites ret val
rgw - Feature #61701: Support localized reads for RGW
rgw - Feature #62474: rgw: add versioning status during `radosgw-admin bucket stats`
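Two of the items above lend themselves to short illustrations. For Bug #62203, the fix class is mechanical: C++17 removed std::random_shuffle, so call sites must switch to std::shuffle with an explicit random engine. A minimal sketch, with illustrative names rather than Ceph's actual call sites:

    #include <algorithm>
    #include <random>
    #include <vector>

    void shuffle_candidates(std::vector<int>& v) {
        // std::random_shuffle(v.begin(), v.end());  // deprecated in C++14, removed in C++17
        std::random_device rd;
        std::mt19937 gen(rd());                      // explicit URBG, required by std::shuffle
        std::shuffle(v.begin(), v.end(), gen);
    }

For Feature #63574, Feature #55940, and Bug #62673, the common thread is accepting human-readable size suffixes ("10M", "10Mi", "10MiB") wherever a quota or resize value is taken. A sketch of the suffix handling under the assumption of binary (power-of-two) multipliers; parse_size and its error handling are hypothetical, not Ceph's actual parser:

    #include <cstdint>
    #include <optional>
    #include <string>

    std::optional<uint64_t> parse_size(const std::string& s) {
        size_t pos = 0;
        uint64_t value = std::stoull(s, &pos);  // numeric prefix; throws on empty/non-numeric input
        std::string suffix = s.substr(pos);
        if (suffix.empty())
            return value;                        // bare byte count
        uint64_t mult = 0;
        switch (suffix[0]) {                     // accept K, Ki, KiB (and M/G/T analogues)
            case 'K': mult = 1ULL << 10; break;
            case 'M': mult = 1ULL << 20; break;
            case 'G': mult = 1ULL << 30; break;
            case 'T': mult = 1ULL << 40; break;
            default:  return std::nullopt;
        }
        std::string rest = suffix.substr(1);
        if (!rest.empty() && rest != "i" && rest != "iB")
            return std::nullopt;
        return value * mult;
    }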
v12.2.14
63% done: 27 issues (17 closed — 10 open).
Related issues:
Bug #45670: luminous: osd: too many store transactions when osd got an incremental osdmap but failed to encode the full map with correct crc again and again
CephFS - Bug #49503: standby-replay mds assert fails during replay
mgr - Bug #49408: osd runs into dead loop and reports slow requests when rolling back snap while using cache tier
RADOS - Bug #45698: PrioritizedQueue: messages in normal queue
RADOS - Bug #47204: ceph osd gets shut down after joining the cluster
RADOS - Bug #48505: osdmaptool crush
RADOS - Bug #48855: OSD_SUPERBLOCK checksum failed after node restart
RADOS - Bug #49409: osd runs into dead loop and reports slow requests when rolling back snap while using cache tier
RADOS - Bug #49448: If OSD types are changed, pool rules can become unresolvable without providing health warnings
rgw - Bug #45154: the command "radosgw-admin orphans list-jobs" failed
v13.2.11
83% done: 6 issues (5 closed — 1 open).
Related issues:
RADOS - Bug #47626: process crashes due to an invalidated pointer
rbd - Bug #48999: Data corruption with rbd_balance_parent_reads and rbd_balance_snap_reads set to true
v14.2.23
50% done: 38 issues (18 closed — 20 open).
Related issues:
Bug #54189: multisite: metadata sync will skip first child of pos_to_prev
Bug #55461: ceph osd crush swap-bucket {old_host} {new_host} where {old_host}={new_host} crashes monitors
Bug #56554: rgw::IAM::s3GetObjectTorrent never takes effect
Bug #57221: ceph warn (important)
Bug #63337: monmap's features are sometimes 0
Bug #63429: librbd: mirror snapshot removes same snap_id twice
Feature #55166: disable delete bucket from rgw
bluestore - Bug #56467: nautilus: osd crashes with _do_alloc_write failed with (28) No space left on device
ceph-volume - Bug #52340: ceph-volume: lvm activate: "tags" not defined
ceph-volume - Bug #53136: The capacity used by the ceph cache tier pool exceeds target_max_bytes
CephFS - Bug #54421: mds: assert fails in Server::_dir_is_nonempty() because xlocker of filelock is -1
mgr - Bug #51637: mgr/insights: mgr consumes excessive amounts of memory
RADOS - Bug #54548: mon hangs when running ceph -s command after executing "ceph osd in osd.<x>" command
RADOS - Bug #54556: Pools are wrongly reported to have non-power-of-two pg_num after update
RADOS - Bug #55424: ceph-mon process exits in dead status; the backtrace shows it blocked by compact_queue_thread
rbd - Bug #54027: The file system takes a long time to build with an iscsi disk on rbd
rgw - Bug #53431: When using radosgw-admin to create a user with an empty uid, the error message is unreasonable
rgw - Bug #53668: Why not add an xxx.retry obj to metadata synchronization at multisite for exception retries
rgw - Bug #53708: ceph multisite sync of deleted unversioned object failed
rgw - Bug #53745: crash on null coroutine under RGWDataSyncShardCR::stop_spawned_services
rgw - Bug #54254: when the remove-all parameter of the rgw admin usage-trim operation is set false, the usage is trimmed anyway
rgw - Bug #55131: radosgw crashes at RGWIndexCompletionManager::create_completion
rgw - Bug #58105: `DeleteBucketPolicy` cannot delete policy in slave zonegroup
rgw - Bug #58721: rgw_rename leads to librgw.so segfault
rgw - Bug #61817: Ceph swift error: create container returns 404
rgw - Feature #53455: [RFE] Ill-formatted JSON response from RGW
v16.2.15
57% done: 177 issues (101 closed — 76 open).
Related issues:
Bug #63327: Cython compiler error
Bug #63345: install_dep.sh error
Bug #63493: Problem with PGs deep-scrubbing in Ceph
CephFS - Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
rbd - Bug #62586: TestClsRbd.mirror_snapshot failure in pacific p2p
rgw - Bug #63177: RGW user quotas are not honored when the bucket owner is different from the uploader
rgw - Bug #63642: rgw: rados objects wrongly deleted
RADOS - v17.2.4
50% done: 4 issues (2 closed — 2 open).
Related issues:
RADOS - Bug #58410: Set single compression algorithm as a default value in ms_osd_compression_algorithm instead of list of algorithms
RADOS - v17.2.6
75% done: 4 issues (3 closed — 1 open).
Related issues:
RADOS - Bug #62872: ceph osd_max_backfills default value is 1000
v17.2.7
88% done: 100 issues (88 closed — 12 open).
Related issues:
Bug #62281: OSD didn't set correct NUMA affinity because the interface was not found
Bug #62493: get_iface_numa_node does not work for vlan
Bug #62528: Runtime Error: cephadm version
Bug #62615: GCC 11 with -Warray-bounds has multiple warnings
Bug #63503: data corruption after rbd migration
Bug #63543: rgw going down upon executing an s3select request
bluestore - Bug #63239: Corrupted rocksdb WAL log after power cut
ceph-volume - Bug #63391: OSDs fail to be created on PVs or LVs in v17.2.7 due to failure in ceph-volume raw list
mgr - Bug #63735: Module 'pg_autoscaler' has failed: float division by zero
Dashboard - Bug #63357: quincy: mgr/dashboard: disable dashboard v3 in quincy
Orchestrator - Bug #61876: Error bootstrapping first cluster node
Orchestrator - Bug #63101: cephadm seems to be trying to pull images named after daemons
Orchestrator - Bug #63171: Customize haproxy config when using ingress
rbd - Bug #62402: TestClsRbd.mirror_snapshot failure in octopus-x-quincy
rbd - Bug #62773: TestClsRbd.mirror_snapshot failure in quincy p2p
rgw - Bug #61559: RGW crash upon parsing a TPCDS query (Trino push-down)
rgw - Bug #61561: Trino does not stop the statement processing upon a failure of the s3select-engine
rgw - Bug #61636: missing SQL syntax construct in s3select
rgw - Bug #61637: s3select: a backtick (`) on a constant date is converted into timestamp at parsing time
rgw - Bug #61638: RGW crashed on getObject::range-request for some of the TPCDS queries
rgw - Bug #61710: quincy/pacific: PUT requests during reshard of versioned bucket fail with 404 and leave behind dark data
rgw - Bug #61769: Bulk upload feature not working
rgw - Bug #61882: rgw: nothing provides libthrift-0.14.0.so
rgw - Bug #61955: S3 metadata with a dot (.) in the key gives AccessDenied
rgw - Bug #62000: rgw crashed on latest ceph version 17.2.6 quincy
v18.1.0 (Reef rc0)
64% done: 11 issues (7 closed — 4 open).
Related issues:
Bug #62293: osd mclock QoS: osd_mclock_scheduler_client_lim is not limited
CephFS - Bug #23724: qa: broad snapshot functionality testing across clients
CephFS - Bug #57248: qa: mirror tests should clean up fs during unwind
mgr - Bug #61669: ceph-exporter scrapes failing on multi-homed server
Dashboard - Bug #63122: dashboard crash when opening rbd view
Orchestrator - Bug #62896: Need cephadm support/command to remove nvmeof service
RADOS - Bug #61718: linking failure on alpine linux only for x86_64
RADOS - Bug #62512: osd msgr-worker high cpu 300% due to throttle-osd_client_messages get_or_fail_fail (osd_client_message_cap=256)
nvme-of - Bug #62895: Need cephadm support/command to remove nvmeof service
rgw - Feature #61808: Multisite: Server Side Copy for replication pulls the object from primary zone instead of copying from secondary zone bucket
v18.2.0 (Reef)
36% done: 14 issues (5 closed — 9 open).
Related issues:
Fix #62488: build ceph on a system with several gcc versions
CephFS - Bug #62664: ceph-fuse: failed to remount for kernel dentry trimming; quitting!
Linux kernel client - Bug #62604: write() hangs forever in ceph_get_caps
mgr - Bug #63216: Regression of #44948 ("ModuleNotFoundError: No module named 'sklearn'")
Dashboard - Bug #62735: determining SSL port for RGW dashboard by splitting frontend config
Orchestrator - Bug #63540: Cephadm did not automatically modify firewall rules to allow access to port 9926 of Ceph exporter
RADOS - Bug #62812: osd: Is it necessary to unconditionally increase osd_bandwidth_cost_per_io in mClockScheduler::calc_scaled_cost?
RADOS - Bug #62836: Ceph zero IOPS after upgrade to Reef and manual read balancer
rgw - Bug #62771: policy array empty on rgw swift /info in reef
rgw - Bug #62808: Buckets' mtime equal to creation time
v18.2.1
90% done: 61 issues (55 closed — 6 open).
Related issues:
Bug #62545: cephfs-shell: getxattr fails when the xattr's length > 256 (see the sketch below)
Bug #63402: build: more cmake race conditions related to global_legacy_options.h
Bug #63517: Process (rbd) crashed in handle_oneshot_fatal_signal
Feature #62966: Add metric for providing OSD fragmentation level
Feature #63343: Add fields to ceph-nvmeof conf and fix cpumask default
Documentation #62354: docs: lack of Reef in Platforms ABC tests
ceph-ansible - Bug #62980: RGW/Swift RBAC not supported in Reef+
RADOS - Bug #62833: [Reads Balancer] osdmaptool with --read option creates suggestions for primary OSD change even when it's already primary for that PG
rgw - Bug #62746: rgw: java_s3tests fails on ObjectTest.testObjectCreateBadMd5InvalidShort
rgw - Bug #62747: rgw: crash during test_encryption_sse_c_method_head
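Bug #62545 above is an instance of a classic xattr pitfall: a fixed-size buffer truncates or errors once the value outgrows it. The portable pattern is to query the value's size first, then fetch it, retrying on ERANGE if it grew between the two calls. A generic Linux-side sketch, not the actual cephfs-shell fix (which lives on the Python side):

    #include <cerrno>
    #include <string>
    #include <sys/xattr.h>

    // Read an xattr of arbitrary length into 'out'; returns 0 or -errno.
    int read_xattr(const char* path, const char* name, std::string& out) {
        for (;;) {
            ssize_t len = getxattr(path, name, nullptr, 0);  // size query only
            if (len < 0) return -errno;
            out.resize(len);
            ssize_t got = getxattr(path, name, out.data(), out.size());
            if (got >= 0) { out.resize(got); return 0; }
            if (errno != ERANGE) return -errno;              // ERANGE: value grew, retry
        }
    }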
v18.2.2
0% done: 3 issues (0 closed — 3 open).
Related issues:
Bug #63617: ceph-common: CommonSafeTimer<std::mutex>::timer_thread(): python3.12 killed by SIGSEGV
rgw - Bug #63613: [rgw][lc] using custom lc schedule (work time) may cause lc processing to stall