# v13.0.0 Mimic

* Bug #925: mds: update replica snaprealm on rename
* Feature #1894: mon: implement internal heartbeating
* Bug #1938: mds: snaptest-2 doesn't pass with 3 MDS system
* Bug #3254: mds: Replica inode's parent snaprealms are not open
* Bug #4212: mds: open_snap_parents isn't called all the times it needs to be
* Fix #4708: MDS: journaler pre-zeroing is dangerous
* Fix #5268: mds: fix/clean up file size/mtime recovery code
* Feature #6325: mon: mon_status should make it clear when the mon has connection issues
* Feature #7567: mon: 'service network', 'cluster network', 'admin network'
* Cleanup #10506: mon: get rid of QuorumServices
* Bug #10915: client: hangs on umount if it had an MDS session evicted
* Feature #13688: mds: performance: journal inodes with capabilities to limit rejoin time on failover
* Feature #16775: MDS command for listing open files
* Bug #16842: mds: replacement MDS crashes on InoTable release
* Feature #18053: Minimize log-based recovery in the acting set
* Bug #18730: mds: backtrace issues getxattr for every file with cap on rejoin
* Bug #20549: cephfs-journal-tool: segfault during journal reset
* Bug #20593: mds: the number of inode showed by "mds perf dump" not correct after trimming
* Bug #20596: MDSMonitor: obsolete `mds dump` and other deprecated mds commands
* Feature #20606: mds: improve usability of cluster rank manipulation and setting cluster up/down
* Feature #20607: MDSMonitor: change "mds deactivate" to clearer "mds rejoin"
* Feature #20608: MDSMonitor: rename `ceph fs set cluster_down` to `ceph fs set joinable`
* Feature #20609: MDSMonitor: add new command `ceph fs set down` to bring the cluster down
* Feature #20610: MDSMonitor: add new command to shrink the cluster in an automated way
* Subtask #20864: kill allow_multimds
* Bug #20988: client: dual client segfault with racing ceph_shutdown
* Feature #21084: auth: add osd auth caps based on pool metadata
* Feature #21156: mds: speed up recovery with many open inodes
* Bug #21309: mon/OSDMonitor: deleting pool while pgs are being created leads to assert(p != pools.end) in update_creating_pgs()
* Bug #21412: cephfs: too many cephfs snapshots chokes the system
* Bug #21500: list bucket which enable versioning get wrong result when user marker
* Bug #21745: mds: MDBalancer using total (all time) request count in load statistics
* Bug #21765: auth|doc: fs authorize error for existing credentials confusing/unclear
* Bug #21896: Bucket policy evaluation is not carried out for DeleteBucketWebsite
* Bug #21962: Policy parser may or may not dereference uninitialized boost::optional sometimes
* Feature #21995: ceph-fuse: support nfs export
* Bug #22002: rgw: add cors header rule check in cors option request
* Bug #22038: ceph-volume-client: rados.Error: command not known
* Bug #22045: OSDMonitor: osd down by monitor is delayed
* Feature #22097: mds: change mds perf counters can statistics filesystem operations number and latency
* Bug #22296: Librgw shutdown uncorreclty
* Bug #22330: ec: src/common/interval_map.h: 161: FAILED assert(len > 0)
* Feature #22371: mds: implement QuotaRealm to obviate parent quota lookup
* Bug #22413: can't delete object from pool when Ceph out of space
* Bug #22416: s3cmd move object error
* Feature #22417: support purge queue with cephfs-journal-tool
* Bug #22418: RGW doesn't check time skew in auth v4 http header request
* Bug #22428: mds: don't report slow request for blocked filelock request
* Bug #22525: auth: ceph auth add does not sanity-check caps
* Bug #22526: AttributeError: 'LocalFilesystem' object has no attribute 'ec_profile'
* Bug #22530: pool create cmd's expected_num_objects is not correctly interpreted
* Bug #22536: client:_rmdir() uses a deleted memory structure(Dentry) leading a core
* Bug #22541: put bucket policy panics RGW process
* Bug #22608: Missing TrackedOp events "header_read", "throttled", "all_read" and "all_read"
* Bug #22683: client: coredump when nfs-ganesha use ceph_ll_get_inode()
* Bug #22717: mgr: prometheus: impossible to join labels from metadata metrics.
* Fix #22718: mgr: prometheus: missed osd commit\apply latency metrics.
* Bug #22802: libcephfs: allow setting default perms
* Bug #22820: rgw_file: avoid fragging thread_local log buffer
* Bug #22827: rgw_file: FLAG_EXACT_MATCH actually should be enforced
* Bug #22829: ceph-fuse: uses up all snap tags
* Bug #22839: MDSAuthCaps (unlike others) still require "allow" at start
* Bug #22933: client: add option descriptions and review levels (e.g. LEVEL_DEV)
* Bug #22948: client: wire up ceph_ll_readv and ceph_ll_writev
* Bug #22990: qa: mds-full: ignore "Health check failed: pauserd,pausewr flag(s) set (OSDMAP_FLAGS)" in cluster log
* Bug #22993: qa: kcephfs thrash sub-suite does not ignore MON_DOWN
* Bug #23028: client: allow client to use caps that are revoked but not yet returned
* Bug #23032: mds: underwater dentry check in CDir::_omap_fetched is racy
* Bug #23041: ceph-fuse: clarify -i is not a valid option
* Bug #23059: mds: FAILED assert (p != active_requests.end()) in MDRequestRef MDCache::request_get(metareqid_t)
* Documentation #23081: docs: radosgw: ldap-auth: wrong option name 'rgw_ldap_searchfilter'
* Bug #23084: doc: update ceph-fuse with FUSE options
* Bug #23094: mds: add uptime to status asok command
* Bug #23098: ceph-mds: prevent MDS names starting with a digit
* Bug #23116: cephfs-journal-tool: add time to event list
* Bug #23172: mds: fixed MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 definition breaks luminous upgrades
* Bug #23211: client: prevent fallback to remount when dentry_invalidate_cb is true but root->dir is NULL
* Bug #23214: doc: Fix -d option in ceph-fuse doc
* Bug #23247: doc: distinguish versions in ceph-fuse
* Bug #23248: ceph-fuse: trim ceph-fuse -V output
* Documentation #23271: doc: create install/setup guide for NFS-Ganesha w/ CephFS
* Bug #23288: ceph-fuse: Segmentation fault --localize-reads
* Bug #23291: client: add way to sync setattr operations to MDS
* Bug #23293: client: Client::_read returns buffer length on success instead of bytes read
* Bug #23299: rgw_file: post deadlock fix, direct deleted RGWFileHandle objects can remain in handle table
* Documentation #23334: doc: note client eviction results in a client instance blacklisted, not an address
* Bug #23358: vstart.sh gives obscure error of dashboard dependencies missing
* Bug #23393: ceph-ansible: update Ganesha config for nfs_file_gw to use optimal settings
* Bug #23394: nfs-ganesha: check cache configuration when exporting FSAL_CEPH
* Documentation #23427: doc: create doc outlining steps to bring down cluster
* Bug #23436: Client::_read() always return 0 when reading from inline data
* Bug #23446: ceph-fuse: getgroups failure causes exception
* Bug #23448: nfs-ganesha: fails to parse rados URLs with '.' in object name
* Bug #23452: mds: assertion in MDSRank::validate_sessions
* Bug #23491: fs: quota backward compatibility
* Bug #23509: ceph-fuse: broken directory permission checking
* Feature #23513: ceph_authtool: add mode option
* Bug #23518: mds: crash when failover
* Bug #23530: mds: kicked out by monitor during rejoin
* Bug #23532: doc: create PendingReleaseNotes and add dev doc for openfile table purpose and format
* Bug #23538: mds: fix occasional dir rstat inconsistency between multi-MDSes
* Bug #23541: client: fix request send_to_auth was never really used
* Bug #23560: mds: mds gets significantly behind on trimming while creating millions of files (cont.)
* Bug #23567: MDSMonitor: successive changes to max_mds can allow hole in ranks
* Documentation #23568: doc: outline the steps for upgrading an MDS cluster
* Bug #23569: mds: counter decay incorrect
* Bug #23571: mds: make sure that MDBalancer uses heartbeat info from the same epoch
* Bug #23582: MDSMonitor: mds health warnings printed in bad format
* Documentation #23583: doc: update snapshot doc to account for recent changes
* Bug #23600: assert(0 == "BUG!") attached in EventCenter::create_file_event
* Bug #23602: mds: handle client requests when mds is stopping
* Documentation #23613: doc: add description of new fs-client auth profile
* Bug #23615: qa: test for "snapid allocation/deletion mismatch with monitor"
* Bug #23619: v13.0.2 tries to build dashboard on arm64
* Feature #23623: mds: mark allow_snaps true by default
* Bug #23624: cephfs-foo-tool crashes immediately it starts
* Bug #23625: mds: sessions opened by journal replay do not get dirtied properly
* Bug #23643: qa: osd_mon_report_interval typo in test_full.py
* Bug #23652: client: fix gid_count check in UserPerm->deep_copy_from()
* Bug #23658: MDSMonitor: crash after assigning standby-replay daemon in multifs setup
* Bug #23662: osd: regression causes SLOW_OPS warnings in multimds suite
* Bug #23665: ceph-fuse: return proper exit code
* Bug #23697: mds: load balancer fixes
* Backport #23705: jewel: ceph-fuse: broken directory permission checking
* Bug #23714: slow ceph_ll_sync_inode calls after setattr
* Bug #23753: "Error ENXIO: problem getting command descriptions from osd.4" in upgrade:kraken-x-luminous-distro-basic-smithi
* Bug #23755: qa: FAIL: test_purge_queue_op_rate (tasks.cephfs.test_strays.TestStrays)
* Bug #23762: MDSMonitor: MDSMonitor::encode_pending modifies fsmap not pending_fsmap
* Bug #23764: MDSMonitor: new file systems are not initialized with the pending_fsmap epoch
* Bug #23766: mds: crash during shutdown_pass
* Bug #23768: MDSMonitor: uncommitted state exposed to clients/mdss
* Bug #23769: osd/EC: slow/hung ops in multimds suite test
* Bug #23774: common: build warning in Preforker
* Documentation #23777: doc: description of OSD_OUT_OF_ORDER_FULL problem
* Bug #23778: rados commands suddenly take a very long time to finish
* Bug #23781: ceph-detect-init still uses python's platform lib
* Bug #23785: "test_prometheus (tasks.mgr.test_module_selftest.TestModuleSelftest) ... ERROR" in rados
* Bug #23787: luminous: "osd-scrub-repair.sh" failures in rados
* Bug #23794: strtoll is broken for hex number string
* Bug #23799: MDSMonitor: creates invalid transition from up:creating to up:shutdown
* Bug #23800: MDSMonitor: setting fs down twice will wipe old_max_mds
* Bug #23812: mds: may send LOCK_SYNC_MIX message to starting MDS
* Bug #23813: client: "remove_session_caps still has dirty|flushing caps" when thrashing max_mds
* Bug #23814: mds: newly active mds may abort in handle_file_lock
* Bug #23815: client: avoid second lock on client_lock
* Bug #23827: osd sends op_reply out of order
* Bug #23829: qa: test_purge_queue_op_rate: self.assertTrue(phase2_ops < phase1_ops * 1.25)
* Bug #23848: mds: stuck shutdown procedure
* Bug #23873: cephfs does not count st_nlink for directories correctly?
* Bug #23880: mds: scrub code stuck at trimming log segments
* Bug #23919: mds: stuck during up:stopping
* Bug #23923: mds: stopping rank 0 cannot shutdown until log is trimmed
* Bug #23927: qa: test_full failure in test_barrier
* Bug #23928: qa: spurious cluster "[WRN] Manager daemon y is unresponsive. No standby daemons available." in cluster log
* Bug #23949: osd: "failed to encode map e19 with expected crc" in cluster log
* Bug #23960: mds: scrub on fresh file system fails
* Bug #23975: qa: TestVolumeClient.test_lifecycle needs updated for new eviction behavior