Ceph - v16.0.0 Pacific
75% complete — 805 issues (601 closed, 204 open)

Related issues

Bug #37725: mds: stopping MDS with subtrees pinned cannot finish stopping
Bug #41034: cephfs-journal-tool: NetHandler create_socket couldn't create socket
Bug #41133: qa/tasks: update thrasher design
Bug #41228: mon: deleting a CephFS and its pools causes MONs to crash
Bug #41541: mgr/volumes: ephemerally pin volumes
Bug #41565: mds: detect MDS<->MDS messages that are not versioned
Bug #42271: client: ceph-fuse which had been blacklisted couldn't auto-reconnect after the cluster unblacklisted it
Bug #42365: client: FAILED assert(dir->readdir_cache[dirp->cache_index] == dn)
Bug #42724: pybind/mgr/volumes: confirm backwards-compatibility of ceph_volume_client.py
Bug #43039: client: shutdown race fails with status 141
Bug #43061: ceph fs add_data_pool doesn't set pool metadata properly
Bug #43191: test_cephfs_shell: set `colors` to Never for cephfs-shell
Bug #43248: cephfs-shell: do not drop into shell after running command-line command
Bug #43493: osdc: fix null pointer dereference causing program crash
Bug #43517: qa: random subvolumegroup collision
Bug #43543: mds: scrub on directory with recently created files may fail to load backtraces and report damage
Bug #43598: mds: PurgeQueue does not handle objecter errors
Bug #43761: mon/MDSMonitor: "ceph fs authorize cephfs client.test /test rw" does not give the necessary rights anymore
Bug #43817: mds: update cephfs octopus feature bit
Bug #43943: qa: "[WRN] evicting unresponsive client smithi131:z (6314), after 304.461 seconds"
Bug #44113: cephfs-shell: set proper return value for the tool
Bug #44127: cephfs-shell: read config options from ceph.conf and from ceph config command
Bug #44172: cephfs-journal-tool: cannot set --dry_run arg
Bug #44276: pybind/mgr/volumes: cleanup stale connection hang
Bug #44288: MDSMap encoder "ev" (extended version) is not checked for validity when decoding
Bug #44386: qa: blogbench cleanup hang/stall
Bug #44389: client: fuse mount will print call trace with incorrect options
Bug #44408: qa: after the cephfs qa test case quits, the mountpoints still exist
Bug #44415: cephfs.pyx: passing an empty string for arg conffile in LibCephFS.__cinit__ is fine, but passing None is not
Bug #44437: qa: test_config_session_timeout failed with incorrect options
Bug #44438: qa: ERROR: test_subvolume_snapshot_clone_different_groups (tasks.cephfs.test_volumes.TestVolumes)
Bug #44448: mds: 'if there is lock cache on dir' check is buggy
Bug #44579: qa: commit 9f6c764f10f breaks qa code in several places
Bug #44638: test_scrub_pause_and_resume (tasks.cephfs.test_scrub_checks.TestScrubControls) fails intermittently
Bug #44645: cephfs-shell: fix flake8 errors (E302, E502, E128, F821, W605, E128 and E122)
Bug #44657: cephfs-shell: fix flake8 errors (F841, E302, E502, E128, E305 and E222)
Bug #44677: stale scrub status entry from a failed mds shows up in `ceph status`
Bug #44771: ceph-fuse: ceph::__ceph_abort(): ceph-fuse killed by SIGABRT in Client::_do_remount
Bug #44785: non-head batch requests may hold authpins and locks
Bug #44801: client: write stuck at waiting for larger max_size
Bug #44904: CephFSMount::run_shell does not run command with sudo
Bug #44963: fix MClientCaps::FLAG_SYNC in check_caps
Bug #45024: mds: wrong link count under certain circumstances
Bug #45071: cephfs-shell: CI testing does not detect flake8 errors
Bug #45090: mds: inode's xattr_map may reference a large amount of memory
Bug #45100: qa: Test failure: test_damaged_dentry (tasks.cephfs.test_damage.TestDamage)
Bug #45104: NFS deployed using orchestrator: watch_url not working and mkdirs permission denied in dashboard
Bug #45114: client: make cache shrinking callbacks available via libcephfs
Bug #45141: some obsolete "ceph mds" sub-commands are suggested by bash completion
Bug #45261: mds: FAILED assert(locking == lock) in MutationImpl::finish_locking
Bug #45300: qa/tasks/vstart_runner.py: TypeError: mount() got an unexpected keyword argument 'mountpoint'
Bug #45304: qa/fuse_mount.py: tests crash when /sys/fs/fuse/connections is absent
Bug #45332: qa: TestExports fails under new Python3 runtime
Bug #45339: qa/cephfs: run nsenter commands with superuser privileges
Bug #45342: qa/tasks/vstart_runner.py: RuntimeError: Fuse mount failed to populate /sys/ after 31 seconds
Bug #45349: mds: send scrub status to ceph-mgr only when scrub is running (or paused, etc.)
Bug #45373: cephfs-shell: OSError-type exceptions throw "object has no attribute 'get_error_code'"
Bug #45387: qa: install task runs twice with double unwind causing fatal errors
Bug #45396: ceph-fuse: building the source code failed with libfuse 3.5 or higher
Bug #45398: mgr/volumes: not able to resize cephfs subvolume with ceph fs subvolume create command
Bug #45425: qa/cephfs: mount.py must use StringIO instead of BytesIO
Bug #45430: qa/cephfs: cleanup() and cleanup_netns() need to run even if the FS was not mounted
Bug #45446: vstart_runner.py: using python3 leads to TypeError: unhashable type: 'Raw'
Bug #45459: qa/task/cephfs/mount.py: Error: Connection activation failed: Activation failed because the device is unmanaged
Bug #45521: mds: layout parser does not handle [-.] in pool names
Bug #45524: ceph-fuse: the -d option couldn't enable debug mode in libfuse
Bug #45530: qa/tasks/cephfs/test_snapshots.py: Command failed with status 1: ['cd', '|/usr/libexec', ...]
Bug #45552: qa/task/vstart_runner.py: admin_socket: exception getting command descriptions: [Errno 111] Connection refused
Bug #45553: mds: rstats on snapshot are updated by changes to HEAD
Bug #45575: cephfs-journal-tool: incorrect read_offset after finding missing objects
Bug #45590: qa: TypeError: unsupported operand type(s) for +: 'range' and 'range'
Bug #45593: qa: removing network bridge appears to cause dropped packets
Bug #45662: pybind/mgr/volumes: volume deletion should check mon_allow_pool_delete
Bug #45665: client: fails to reconnect to MDS
Bug #45666: qa: AssertionError: '1' != b'1'
Bug #45699: mds may start to fragment dirfrag before rollback finishes
Bug #45723: vstart_runner: LocalFuseMount.mount should set set.mounted to True
Bug #45740: mgr/nfs: check cluster exists before creating exports, and make exports persistent
Bug #45744: mgr/nfs: allow only [A-Za-z0-9-_.] in cluster ID
Bug #45745: mgr/nfs: move enable pool to cephadm
Bug #45749: client: num_caps shows number of caps received
Bug #45806: qa/task/vstart_runner.py: setting the network namespace "ceph-ns--tmp-tmpq1pg2pz7-mnt.0" failed: Invalid argument
Bug #45815: vstart_runner.py: set stdout and stderr to None by default
Bug #45817: qa: Command failed with status 2: ['sudo', 'bash', '-c', 'ip addr add 192.168.255.254/16 brd 192.168.255.255 dev ceph-brx']
Bug #45829: fs: ceph_test_libcephfs abort in TestUtime
Bug #45835: mds: OpenFileTable::prefetch_inodes during rejoin can cause out-of-memory
Bug #45866: ceph-fuse build failure against libfuse v3.9.1
Bug #45910: pybind/mgr/volumes: volume deletion does not always remove the associated osd pools
Bug #45935: mds: cap-revoking requests don't succeed while the client is reconnecting
Bug #45971: vstart: set $CEPH_CONF when calling ganesha-rados-grace commands
Bug #46023: mds: MetricAggregator.cc: 178: FAILED ceph_assert(rm)
Bug #46025: client: release the client_lock before copying data in read
Bug #46042: mds: EMetaBlob replay taking too long will cause mds restart
Bug #46046: Test failure: test_create_multiple_exports (tasks.cephfs.test_nfs.TestNFS)
Bug #46057: qa/cephfs: run_as_user must use args list instead of str
Bug #46068: qa/tasks/cephfs/nfs: AssertionError in test_export_create_and_delete
Bug #46079: handle multiple ganesha.nfsd's appropriately in vstart.sh
Bug #46084: client: supplying ceph_fsetxattr with no value unsets xattr
Bug #46100: vstart_runner.py: check for Raw instance before treating as iterable
Bug #46101: qa: set omit_sudo to False for cmds executed with sudo
Bug #46104: Test failure: test_export_create_and_delete (tasks.cephfs.test_nfs.TestNFS)
Bug #46129: mds: fix hang when accessing a file under a lost parent directory
Bug #46158: pybind/mgr/volumes: persist snapshot size on snapshot creation
Bug #46163: mgr/volumes: clone operation uses source subvolume root directory mode and uid/gid values for the clone, instead of sourcing them from the snapshot
Bug #46167: pybind/mgr/volumes: xlist.h: 144: FAILED ceph_assert((bool)_front == (bool)_size)
Bug #46213: qa: pjd test reports odd EIO errors
Bug #46269: ceph-fuse: ceph-fuse process is terminated by the logrotate task and, more seriously, an uninterruptible-sleep process is produced
Bug #46273: mds: deleting a large number of files in a directory causes the file system to go read-only
Bug #46277: pybind/mgr/volumes: get_pool_names may indicate volume does not exist if multiple volumes exist
Bug #46278: mds: subvolume snapshot directory does not save attribute "ceph.quota.max_bytes" of the snapshot source directory tree
Bug #46282: qa: multiclient connection interruptions by stopping one client
Bug #46302: mds: optimize ephemeral rand pin
Bug #46355: client: directory inode cannot call release_callback
Bug #46360: mgr/volumes: fs subvolume clones stuck in progress when libcephfs hits certain errors
Bug #46420: cephfs-shell: return proper error code instead of 1
Bug #46426: mds: MMDSPing is not an MMDSOp type
Bug #46434: osdc: FAILED ceph_assert(bh->waitfor_read.empty())
Bug #46496: pybind/mgr/volumes: subvolume operations throw exception if volume doesn't exist
Bug #46533: mds: null pointer dereference in MDCache::finish_rollback
Bug #46543: mds forwarding request 'no_available_op_found'
Bug #46565: mgr/nfs: ensure pseudoroot path is absolute and is not just /
Bug #46572: mgr/nfs: help for "nfs export create" and "nfs export delete" says "<attach>" where the documentation says "<clusterid>"
Bug #46579: mgr/nfs: remove NParts and Cache_Size from MDCACHE block
Bug #46583: mds slave request 'no_available_op_found'
Bug #46597: qa: fs cleanup fails with a traceback
Bug #46608: qa: thrashosds: log [ERR] : 4.0 has 3 objects unfound and apparently lost
Bug #46616: client: avoid adding inode already in the caps delayed list
Bug #46664: client: in _open() the open ref may be decreased twice but increased only once
Bug #46733: Error EEXIST returned while unprotecting a snap which is not protected
Bug #46765: mds: segv in MDCache::wait_for_uncommitted_fragments
Bug #46766: mds: memory leak during cache drop
Bug #46830: mds: do not raise "client failing to respond to cap release" when client working set is reasonable
Bug #46832: client: static dirent for readdir is not thread-safe
Bug #46868: client: switch to use ceph_mutex_is_locked_by_me always
Bug #46882: client: mount abort hangs: [volumes INFO mgr_util] aborting connection from cephfs 'cephfs'
Bug #46883: kclient: ghost kernel mount
Bug #46891: mds: kcephfs parse dirfrag's ndist is always 0
Bug #46905: client: cluster [WRN] evicting unresponsive client smithi122:0 (34373), after 304.762 seconds
Bug #46906: mds: fix file recovery crash after replaying delayed requests
Bug #46926: mds: fix the decode version
Bug #46976: after restarting an mds, its standby-replay mds remained in the "resolve" state
Bug #46984: mds: recover files after normal session close
Bug #46985: common: validate type CephBool causes 'invalid command json'
Bug #46988: mds: 'forward loop' when forward_all_requests_to_auth is set
Bug #47006: mon: required client features adding/removing
Bug #47009: TestNFS.test_cluster_set_reset_user_config: command failed with status 32: 'sudo mount -t nfs -o port=2049 172.21.15.36:/ceph /mnt'
Bug #47011: client: Client::open() passes wrong cap mask to path_walk
Bug #47015: mds: decoding of enum types on big-endian systems broken
Bug #47033: client: inode ref leak
Bug #47039: client: mutex lock FAILED ceph_assert(nlock > 0)
Bug #47125: mds: fix possible crash when the MDS is stopping
Bug #47140: mgr/volumes: unresponsive Client::abort_conn() when cleaning stale libcephfs handle
Bug #47154: mgr/volumes: mark subvolumes with ceph.dir.subvolume vxattr, to improve snapshot scalability of subvolumes
Bug #47182: mon: deleting a CephFS and its pools causes MONs to crash
Bug #47201: mds: CDir::_omap_commit(int): Assertion `committed_version == 0' failed
Bug #47202: qa: "Replacing daemon mds.a as rank 0 with standby daemon mds.b" in cluster log
Bug #47224: various quota failures
Bug #47268: pybind/snap_schedule: scheduled snapshots get pruned just after creation
Bug #47293: client: osdmap wait not protected by mounted mutex
Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
Bug #47307: mds: throttle workloads which acquire caps faster than the client can release
Bug #47353: mds: purge_queue's _calculate_ops is inaccurate
Bug #47423: volume rm throws Permission denied error
Bug #47444: crash in FSMap::parse_role
Bug #47512: mgr/nfs: cluster creation throws 'NoneType' object has no attribute 'replace' error in rook
Bug #47518: qa: spawn MDS daemons before creating file system
Bug #47526: qa: RuntimeError: FSCID 2 not in map
Bug #47563: qa: kernel client closes session improperly causing eviction due to timeout
Bug #47565: qa: "client.4606 isn't responding to mclientcaps(revoke), ino 0x200000007d5 pending pAsLsXsFscr issued pAsLsXsFsxcrwb, sent 60.889494 seconds ago"
Bug #47591: TestNFS: test_exports_on_mgr_restart: command failed with status 32: 'sudo mount -t nfs -o port=2049 172.21.15.77:/cephfs /mnt'
Bug #47662: mds: try to replicate hot dir to restarted MDS
Bug #47689: rados/upgrade/nautilus-x-singleton fails due to cluster [WRN] evicting unresponsive client
Bug #47734: client: hang after statfs
Bug #47783: mgr/nfs: pseudo path prints wrong error message
Bug #47786: mds: log [ERR] : failed to commit dir 0x100000005f1.1010* object, errno -2
Bug #47798: pybind/mgr/volumes: TypeError: bad operand type for unary -: 'str' for errno ETIMEDOUT
Bug #47806: mon/MDSMonitor: divide mds identifier and mds real name with dot
Bug #47833: mds: FAILED ceph_assert(sessions != 0) in function 'void SessionMap::hit_session(Session*)'
Bug #47842: qa: "fsstress.sh: line 16: 28870 Bus error (core dumped) "$BIN" -d "$T" -l 1 -n 1000 -p 10 -v"
Bug #47844: mds: only update the requesting metrics
Bug #47854: some clients may return failure when multiple clients create directories at the same time
Bug #47881: mon/MDSMonitor: when all MDS processes in the cluster are stopped at the same time, some MDS cannot enter the "failed" state
Bug #47918: cephfs client and nfs-ganesha have inconsistent reference counts after releasing cache
Bug #47973: Clang does not see names as variables in lambda lists
Bug #47981: mds: count error of modified dentries
Bug #48076: client: ::_read fails to advance pos at EOF check
Bug #48147: qa: vstart_runner crashes when run with kernel client
Bug #48202: libcephfs allows calling ftruncate on a file open read-only
Bug #48203: qa: quota failure
Bug #48206: client: fix crash when doing remount in non-fuse case
Bug #48207: qa: switch to 'osdop_read' instead of 'op_r' for test_readahead
Bug #48242: qa: add debug information for client address for kclient
Bug #48249: mds: dir->mark_new should go together with dir->mark_dirty
Bug #48313: client: ceph.dir.entries does not acquire necessary caps
Bug #48318: client: the directory's capacity is not updated after writing data into the directory
Bug #48365: qa: ffsb build failure on CentOS 8.2
Bug #48403: mds: fix recall defaults based on feedback from production clusters
Bug #48447: vstart_runner: fails to print final result line
Bug #48491: tasks.cephfs.test_nfs.TestNFS.test_cluster_info: IP mismatch
Bug #48501: pybind/mgr/volumes: inherited snapshots should be filtered out of snapshot listing
Bug #48514: mgr/nfs: don't prefix 'ganesha-' to cluster id
Bug #48517: mds: "CDir.cc: 1530: FAILED ceph_assert(!is_complete())"
Bug #48555: pybind/ceph_volume_client: allows authorize on auth_ids not created through ceph_volume_client
Bug #48633: qa: tox failures
Bug #48661: mds: reserved can be set on feature set
Bug #48701: pybind/cephfs: MCommand message is constructed with command separated into chars
Bug #48702: qa: fwd_scrub should only scrub rank 0
Bug #48707: client: unmount() doesn't dump the cache
Bug #48753: mds: spurious wakeups in cache upkeep
Bug #48756: qa: kclient does not synchronously write with O_DIRECT
Bug #48757: qa: "[WRN] Replacing daemon mds.d as rank 0 with standby daemon mds.f"
Bug #48765: have mount helper pick appropriate mon sockets for ms_mode value
Bug #48770: qa: "Test failure: test_hole (tasks.cephfs.test_failover.TestClusterResize)"
Bug #48808: mon/MDSMonitor: `fs rm` is not idempotent
Bug #48811: qa: fs/snaps/snaptest-realm-split.sh hang
Bug #48834: qa: MDS_SLOW_METADATA_IO with osd thrasher
Bug #48839: qa: Error: Unable to find a match: cephfs-top
Bug #48923: pacific: pybind: revert removal of ceph_volume_client library
Fix #15134: multifs: test case exercising mds_thrash for multiple filesystems
Fix #41782: mds: allow stray directories to fragment and switch from 10 stray directories to 1
Fix #46070: client: fix snap directory atime
Fix #46645: librados|libcephfs: use latest MonMap when creating from CephContext
Fix #46696: mds: pre-fragment distributed ephemeral pin directories to distribute the subtree bounds
Fix #46727: mds/CInode: optimize only-pinned-by-subtrees check
Fix #46851: qa: add debugging for volumes plugin use of libcephfs
Fix #47149: pybind/mgr/volumes: add debugging for global lock
Fix #47983: mds: use proper gather for inode commit ops
Fix #48053: qa: update test_readahead to work with the kernel
Fix #48121: qa: merge fs/multimds suites
Feature #20: client: recover from a killed session (w/ blacklist)
Feature #12274: mds: start forward scrubs from all subtree roots, skip non-auth metadata
Feature #12334: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
Feature #15070: mon: client: multifs: auth caps on client->mon connections to limit their access to MDSMaps by FSCID
Feature #17856: qa: background cephfs forward scrub teuthology task
Feature #22477: multifs: remove multifs experimental warnings
Feature #24285: mgr: add module which displays current usage of file system (`fs top`)
Feature #24461: cephfs: improve file create performance by buffering file unlink/create operations
Feature #26996: cephfs: get capability cache hits by clients to provide introspection on effectiveness of client caching
Feature #36253: cephfs: clients should send usage metadata to MDSs for administration/monitoring
Feature #38951: client: implement asynchronous unlink/create
Feature #40401: mgr/volumes: allow/deny r/rw access of auth IDs to subvolume and subvolume groups
Feature #40681: mds: show total number of opened files beneath a directory
Feature #40929: pybind/mgr/mds_autoscaler: create mgr plugin to deploy and configure MDSs in response to degraded file system
Feature #41072: scheduled cephfs snapshots (via ceph manager)
Feature #41073: cephfs-sync: tool for synchronizing cephfs snapshots to remote target
Feature #41074: pybind/mgr/volumes: mirror (scheduled) snapshots to remote target
Feature #41302: mds: add ephemeral random and distributed export pins
Feature #42451: mds: add root_squash
Feature #42831: mds: add config to deny all client reconnects
Feature #43423: mds: collect and show the dentry lease metric
Feature #44044: qa: add network namespaces to kernel/ceph-fuse mounts for partition testing
Feature #44191: cephfs: geo-replication
Feature #44192: mds: stable multimds scrub
Feature #44193: pybind/mgr/volumes: add API to manage NFS-Ganesha gateway clusters in exporting subvolumes
Feature #44211: mount.ceph: stop printing warning message about mds_namespace
Feature #44277: pybind/mgr/volumes: add command to return metadata regarding a subvolume
Feature #44279: client: provide asok commands to getattr an inode with desired caps
Feature #44928: mgr/volumes: evict clients based on auth ID and subvolume mounted
Feature #44931: mgr/volumes: get the list of auth IDs that have been granted access to a subvolume using mgr/volumes CLI
Feature #45237: pybind/mgr/volumes: add command to return metadata regarding a subvolume snapshot
Feature #45267: ceph-fuse: reduce memory copies in ceph-fuse during data IO
Feature #45289: mgr/volumes: create fs subvolumes with isolated RADOS namespaces
Feature #45371: mgr/volumes: `protect` and `clone` operation in a single transaction
Feature #45729: pybind/mgr/volumes: add the ability to keep snapshots of subvolumes independent of the source subvolume
Feature #45741: mgr/volumes/nfs: add interface for get and list exports
Feature #45742: mgr/nfs: add interface for listing clusters
Feature #45743: mgr/nfs: add interface to show cluster information
Feature #45746: mgr/nfs: add interface to update export
Feature #45747: pybind/mgr/nfs: add interface for adding user-defined configuration
Feature #45830: vstart: support deployment of ganesha daemon by cephadm with NFS option
Feature #45906: mds: make threshold for MDS_TRIM warning configurable
Feature #46041: mds/metric: if client sends the metrics to an old ceph, the mds session connection will be closed by ceph
Feature #46059: vstart_runner.py: optionally rotate logs between tests
Feature #46074: mds: provide alternatives to increase the total cephfs subvolume snapshot count to greater than the current 400 across a CephFS volume
Feature #46432: cephfs-mirror: manager module interface to add/remove directory snapshots
Feature #46866: kceph: add metric for number of pinned capabilities
Feature #46892: pybind/mgr/volumes: make number of cloner threads configurable
Feature #46989: pybind/mgr/nfs: test mounting of exports created with nfs export command
Feature #47102: mds: add perf counter for cap messages
Feature #47148: mds: get rid of the mds_lock when storing the inode backtrace to meta pool
Feature #47161: mds: add dedicated field to inode for fscrypt context
Feature #47162: mds: handle encrypted filenames in the MDS for fscrypt
Feature #47168: client: support getting ceph.dir.rsnaps vxattr
Feature #47490: integration of dashboard with volume/nfs module
Feature #47587: pybind/mgr/nfs: add Rook support
Feature #48246: client: dump which fs is used by client for multiple-fs
Feature #48337: client: add ceph.cluster_fsid/ceph.client_id vxattr support in libcephfs
Feature #48602: `cephfs-top` frontend utility
Feature #48622: mgr/nfs: add tests for readonly exports
Feature #48704: mds: recall caps proportional to the number issued
Feature #48791: mds: support file block size
Cleanup #23718: qa: merge fs/kcephfs suites
Cleanup #45525: qa/task/cephfs/mount.py: skip saving/restoring the previous value for ip_forward
Cleanup #46618: client: clean up the fuse client code
Cleanup #46620: client: add command_lock support
Cleanup #47160: qa/tasks/cephfs: break up test_volumes.py
Cleanup #47325: client: remove unnecessary client_lock for objecter->write()
Cleanup #48235: client: do not unset the client_debug_inject_tick_delay in libcephfs
Tasks #46649: client: make the 'mounted', 'unmounting' and 'initialized' members a single 'state' member
Tasks #46682: client: add timer_lock support
Tasks #46768: client: clean up the unnecessary client_lock for _conf->client_trace
Tasks #46890: client: add request lock support
Tasks #47047: client: release the client_lock before copying data in all the reads
Documentation #43028: doc: cephfs-shell options
Documentation #44788: cephfs-shell: missing documentation for quota, df and du
Documentation #46449: mgr/nfs: update nfs-ganesha package requirements
Documentation #46571: mgr/nfs: update docs on nfs-ganesha cluster deployment using cephadm in vstart
Documentation #46884: pybind/mgr/mds_autoscaler: add documentation
Documentation #47784: nfs: remove doc on creating cephfs exports using rook
Documentation #48010: doc: document MDS recall configurations
Documentation #48531: doc/cephfs: "ceph fs new" command is, ironically, old. The new (correct as of Dec 2020) command is "ceph fs add_data_pool"
Documentation #48585: mds_cache_trim_decay_rate misnamed?
Documentation #48731: mgr/nfs: add info related to rook; clarify pseudo path and dashboard export warning
Documentation #48838: document ms_mode options in mount.ceph manpage
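The completion figure in each release header tracks the ratio of closed issues to total issues. A quick sanity check of the Pacific header, assuming the tracker simply rounds the plain closed/total ratio (Redmine may additionally weight partially-done open issues, so this is only approximate):

```python
# Sanity check of the "75% — 805 issues (601 closed, 204 open)" header.
# Assumption: the percentage is the rounded closed/total ratio.
closed, still_open = 601, 204
total = closed + still_open

assert total == 805                 # matches "805 issues"
pct = round(100 * closed / total)   # 601/805 ≈ 74.7 → 75
print(f"{pct}% complete")
```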
Ceph - v17.0.0 Q
2% complete — 122 issues (1 closed, 121 open)

Related issues

Bug #20597: mds: tree exports should be reported at a higher debug level
Bug #36273: qa: add background task for some units which drops MDS cache
Bug #36389: untar encounters unexpected EPERM on kclient/multimds cluster with thrashing
Bug #36593: qa: quota failure caused by clients stepping on each other
Bug #36673: /build/ceph-13.2.1/src/mds/CDir.cc: 1504: FAILED assert(is_auth())
Bug #39651: qa: test_kill_mdstable fails unexpectedly
Bug #40159: mds: openfiletable prefetching large amounts of inodes leads to mds start failure
Bug #40197: the command 'node ls' sometimes outputs incorrect information about mds
Bug #41327: mds: dirty rstat lost during scatter-gather process
Bug #42688: standard CephFS caps do not allow certain dot files to be written
Bug #43393: qa: add testing for cephfs-shell on CentOS 8
Bug #43748: client: improve wanted handling so we don't request unused caps (active-standby exclusive file lock case)
Bug #43902: qa: mon_thrash: timeout "ceph quorum_status"
Bug #43960: MDS: incorrectly issues Fc for new opens when there is an existing writer
Bug #44383: qa: MDS_CLIENT_LATE_RELEASE during MDS thrashing
Bug #44384: qa: FAIL: test_evicted_caps (tasks.cephfs.test_client_recovery.TestClientRecovery)
Bug #44988: client: track dirty inodes in a per-session list for effective cap flushing
Bug #45320: client: other UIDs don't have write permission when the file is marked with SUID or SGID
Bug #45434: qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
Bug #45538: qa: fix string/byte comparison mismatch in test_exports
Bug #45663: luminous to nautilus upgrade
Bug #45664: libcephfs: FAILED LibCephFS.LazyIOMultipleWritersOneReader
Bug #46022: qa: test_strays num_purge_ops violates threshold 34/16
Bug #46218: mds: add inter-MDS messages to the corpus and enforce versioning
Bug #46357: qa: error downloading packages
Bug #46403: mds: "elist.h: 91: FAILED ceph_assert(_head.empty())"
Bug #46438: mds: add vxattr for querying inherited layout
Bug #46504: pybind/mgr/volumes: self.assertTrue(check < timo) fails
Bug #46507: qa: test_data_scan: "show inode" returns ENOENT
Bug #46535: mds: importer MDS failing right after EImportStart event is journaled causes incorrect blacklisting of client session
Bug #46609: mds: CDir.cc: 956: FAILED ceph_assert(auth_pins == 0)
Bug #46648: mds: cannot handle hundreds+ of subtrees
Bug #46747: mds: make rstats in CInode::old_inodes stable
Bug #46809: mds: purge orphan objects created by lost async file creation
Bug #46887: kceph: testing branch: hang in workunit by 1/2 clients during tree export
Bug #46902: mds: CInode::maybe_export_pin is broken
Bug #47054: mgr/volumes: handle potential errors in readdir cephfs python binding
Bug #47236: getting "Cannot send after transport endpoint shutdown" after changing subvolume access mode
Bug #47276: MDSMonitor: add command to rename file systems
Bug #47292: cephfs-shell: test_df_for_valid_file failure
Bug #47389: ceph fs volume create fails to create pool
Bug #47678: mgr: include/interval_set.h: 466: ceph_abort_msg("abort() called")
Bug #47679: kceph: kernel does not open session with MDS importing subtree
Bug #47787: mgr/nfs: exercise host-level HA of NFS-Ganesha by killing the process
Bug #47979: qa: test_ephemeral_pin_distribution failure
Bug #48075: qa: AssertionError: 12582912 != 'infinite'
Bug #48125: qa: test_subvolume_snapshot_clone_cancel_in_progress failure
Bug #48148: mds: Server.cc:6764 FAILED assert(in->filelock.can_read(mdr->get_client()))
Bug #48231: qa: test_subvolume_clone_in_progress_snapshot_rm is racy
Bug #48411: tasks.cephfs.test_volumes.TestSubvolumeGroups: RuntimeError: rank all failed to reach desired subtree state
Bug #48422: mds: MDCache.cc:5319 FAILED ceph_assert(rejoin_ack_gather.count(mds->get_nodeid()))
Bug #48439: fsstress failure with mds thrashing: "mds.0.6 Evicting (and blocklisting) client session 4564 (v1:172.21.15.47:0/603539598)"
Bug #48502: ERROR: test_exports_on_mgr_restart (tasks.cephfs.test_nfs.TestNFS)
Bug #48559: qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
Bug #48562: qa: scrub - object missing on disk; some files may be lost
Bug #48640: qa: snapshot mismatch during mds thrashing
Bug #48678: client: spins on tick interval
Bug #48679: client: items pinned in cache preventing unmount
Bug #48680: mds: scrubbing stuck "scrub active (0 inodes in the stack)"
Bug #48700: client: Client::rmdir() may fail to remove a snapshot
Bug #48760: qa: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
Bug #48766: qa: Test failure: test_evict_client (tasks.cephfs.test_volume_client.TestVolumeClient)
Bug #48771: qa: iogen: workload fails to cause balancing
Bug #48772: qa: pjd: not ok 9, 44, 80
Bug #48773: qa: scrub does not complete
Bug #48805: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
Bug #48812: qa: test_scrub_pause_and_resume_with_abort failure
Bug #48830: qa: ERROR: test_idempotency
Bug #48831: qa: ERROR: test_snapclient_cache
Bug #48832: qa: fsstress w/ valgrind causes MDS to be blocklisted
Bug #48833: snap_rm hang during osd thrashing
Bug #48835: qa: add ms_mode random choice to kclient tests
Bug #48873: test_cluster_set_reset_user_config: AssertionError: NFS Ganesha cluster deployment failed
Bug #48877: qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
Bug #48886: mds: version MMDSCacheRejoin
Fix #44171: pybind/cephfs: audit for unimplemented bindings for libcephfs
Fix #46885: pybind/mgr/mds_autoscaler: add test for MDS scaling with cephadm
Fix #47931: directory quota optimization
Fix #48027: qa: add cephadm tests for CephFS in QA
Fix #48683: mds/MDSMap: print each flag value in MDSMap::dump
Fix #48802: mds: define CephFS errors that replace standard errno values
Feature #6373: kcephfs: qa: test fscache
Feature #7320: qa: thrash directory fragmentation
Feature #17434: qa: background rsync task for FS workunits
Feature #17835: mds: enable killpoint tests for MDS-MDS subtree export
Feature #18154: qa: enable mds thrash exports tests
Feature #24725: mds: propagate rstats from the leaf dirs up to the specified directory
Feature #36481: separate out the 'p' mds auth cap into separate caps for quotas vs. choosing pool layout
Feature #36483: extend the mds auth cap "path=" syntax to enable something like "path=/foo/bar/*"
Feature #36663: mds: adjust cache memory limit automatically via target that tracks RSS
Feature #40986: cephfs qos: implement cephfs qos based on a token bucket algorithm
Feature #41220: mgr/volumes: add test case for blacklisted clients
Feature #41566: mds: support rolling upgrades
Feature #42873: mgr/volumes: add GetCapacity API/command for `fs volume`
Feature #42874: mgr/volumes: add ValidateVolumeCapabilities API/command for `fs volume`
Feature #42875: mgr/volumes: user credentials for ListVolumes, GetCapacity and ValidateVolumeCapabilities
Feature #44190: qa: thrash file systems during workload tests
Feature #44455: cephfs: add recursive unlink RPC
Feature #46166: mds: store symlink target as xattr in data pool inode for disaster recovery
Feature #46680: pybind/mgr/mds_autoscaler: deploy larger or smaller (RAM) MDS in response to MDS load
Feature #46746: mgr/nfs: add interface to accept yaml file for creating clusters
Feature #46865: client: add metric for number of pinned capabilities
Feature #47172: mgr/nfs: add support for RGW export
Feature #47264: "fs authorize" subcommand should work for multiple FSs too
Feature #48394: mds: defer storing the OpenFileTable journal
Feature #48404: client: add a ceph.caps vxattr
Feature #48509: mds: dmClock-based subvolume QoS scheduler
Feature #48577: pybind/mgr/volumes: support snapshots on subvolumegroups
Feature #48619: client: track (and forward to MDS) average read/write/metadata latency
Feature #48682: MDSMonitor: add command to print fs flags
Feature #48943: cephfs-mirror: display cephfs mirror instances in `ceph status` command
Feature #48944: pybind/mirroring: add subvolume/subvolumegroup interfaces for snapshot mirroring
Feature #48953: cephfs-mirror: support snapshot mirroring of subdirectories and/or ancestors of a mirrored directory
Cleanup #46802: mds: do not use asserts for RADOS failures
Documentation #43034: doc: document large omap warning for directory fragmentation
Documentation #45573: doc: client: client_reconnect_stale=1
Documentation #47449: doc: complete ec pool configuration section with an example
Documentation #48017: snap-schedule doc
Documentation #48914: mgr/nfs: update docs about user config