# v13.2.0

* Feature #22370: cephfs: add kernel client quota support
* Feature #22372: kclient: implement quota handling using new QuotaRealm
* Feature #23695: VolumeClient: allow ceph_volume_client to create 'volumes' without namespace isolation
* Documentation #23775: PendingReleaseNotes: add notes for major Mimic features
* Bug #23826: mds: assert after daemon restart
* Bug #23855: mds: MClientCaps should carry inode's dirstat
* Bug #23885: MDSMonitor: overzealous MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX health warnings during FS creation
* Bug #23894: ceph-fuse: missing dentries in readdir result
* Bug #24002: qa: check snap upgrade on multimds cluster
* Bug #24003: build-integration-branch script can fail with UnicodeEncodeError
* Backport #24018: cannot build on bionic: Java_JAVAH_EXECUTABLE-NOTFOUND: not found
* Backport #24026: mimic: pg-upmap cannot balance in some case
* Backport #24027: mimic: ceph_daemon.py format_dimless units list index out of range
* Bug #24033: rados: not all exceptions accept keyargs
* Bug #24037: osd: Assertion `!node_algorithms::inited(this->priv_value_traits().to_node_ptr(value))' failed.
* Bug #24039: MDSTableServer.cc: 62: FAILED assert(g_conf->mds_kill_mdstable_at != 1)
* Bug #24047: MDCache.cc: 5317: FAILED assert(mds->is_rejoin())
* Backport #24062: mimic: Misnamed S3 operation
* Bug #24072: mds: race with new session from connection and imported session
* Bug #24073: PurgeQueue::_consume() could return true when there were no purge queue item actually executed.
* Bug #24080: Dashboard: Prevent RGW API user deletion
* Bug #24081: Dashboard: Float numbers incorrectly formatted
* Bug #24087: client: assert during shutdown after blacklisted
* Bug #24089: mds: print slow requests to debug log when sending health WRN to monitors (if < ~5)
* Bug #24097: Dashboard navbar does not respond for mobile-like browser window widths
* Bug #24101: mds: deadlock during fsstress workunit with 9 actives
* Backport #24103: mimic: mon: snap delete on deleted pool returns 0 without proper payload
* Backport #24104: mimic: run cmd 'ceph daemon osd.0 smart' cause osd daemon Segmentation fault
* Backport #24113: mimic: selinux denials with ceph-deploy/ceph-volume lvm device
* Bug #24115: Dashboard: Filesystem page shows moment.js deprecation warning
* Bug #24118: mds: crash when using `config set` on tracked configs
* Backport #24135: mimic: Add support for obtaining a list of available compression options
* Backport #24149: mimic: Eviction still raced with scrub due to preemption
* Backport #24154: mimic: tcmalloc Attempt to free invalid pointer 0x55de11f2a540 in rocksdb::LRUCache::~LRUCache during mkfs->_open_db
* Backport #24155: mimic: [rbd-mirror] potential deadlock when running asok 'flush' command
* Backport #24157: mimic: mds: crash when using `config set` on tracked configs
* Backport #24186: mimic: client: segfault in trim_caps
* Backport #24187: mimic: mds didn't update file's max_size
* Backport #24191: mimic: fs: reduce number of helper debug messages at level 5 for client
* Bug #24194: rgw-multisite: Segmental fault when use different rgw_md_log_max_shards among zones
* Backport #24195: mimic: mon: slow op on log message
* Backport #24200: mimic: PrimaryLogPG::try_flush_mark_clean mixplaced ctx release
* Backport #24202: mimic: client: fails to respond cap revoke from non-auth mds
* Backport #24206: mimic: mds: broadcast quota to relevant clients when quota is explicitly set
* Backport #24209: mimic: client: deleted inode's Bufferhead which was in STATE::Tx would lead a assert fail
* Backport #24213: mimic: Module 'balancer' has failed: could not find bucket -14
* Bug #24237: mon: monitors run out of space during snapshot workflow
* Backport #24248: mimic: SharedBlob::put() racy
* Backport #24249: mimic: status module output going to stderr
* Backport #24250: mimic: Error message with 'undefined' string
* Backport #24254: mimic: kceph: umount on evicted client blocks forever
* Backport #24255: mimic: qa: kernel_mount.py umount must handle timeout arg
* Backport #24256: mimic: osd: Assertion `!node_algorithms::inited(this->priv_value_traits().to_node_ptr(value))' failed.
* Backport #24259: mimic: crush device class: Monitor Crash when moving Bucket into Default root
* Feature #24266: mgr/dashboard: support multiple user accounts
* Feature #24267: mgr/dashboard: support roles and privileges
* Feature #24273: mgr/dashboard: Add backend support for changing configuration settings via the REST API
* Bug #24281: mgr/dashboard: Reduce 'dimlessBinaryPipe' precision
* Bug #24288: mgr/dashboard: Documentation link opens up in same tab
* Backport #24294: mimic: control-c on ceph cli leads to segv
* Backport #24297: mimic: RocksDB compression is not supported at least on Debian.
* Backport #24312: mimic: mgr/dashboard: Documentation link opens up in same tab
* Backport #24340: mimic: mds memory leak
* Backport #24345: mimic: mds: root inode's snaprealm doesn't get journalled correctly
* Backport #24380: mimic: omap_digest handling still not correct
* Bug #24437: Mimic build fails with -DWITH_RADOSGW=0
* Bug #24450: OSD Caught signal (Aborted)
* Support #24457: large omap object
* Bug #24459: Debian stretch for Ceph luminous lacks packages
* Bug #24576: 404 on dashboard
* Bug #24606: error build ceph-13.2.0 for i586
* Bug #24639: [segfault] segfault in BlueFS::read
* Bug #24678: ceph-mon segmentation fault after setting pool size to 1 on degraded cluster
* Bug #24689: Warning when importing dashboard key or cert
* Feature #24773: RBD mirroring delay replication
* Support #24818: RBD mirroring delay replication