# v14.0.0 Nautilus

* Cleanup #4387: mds: EMetaBlob::client_reqs doesn't need to be a list
* Feature #5915: msgr: allow binding to multiple addresses, address types
* Feature #5916: msgr: allow connect to multi-addr peer
* Feature #8844: asserts to log message to ceph log
* Feature #11172: mds: inode filtering on 'dump cache' asok
* Feature #12282: mds: progress/abort/pause interface for ongoing scrubs
* Feature #13231: kclient: support SELinux
* Feature #14456: mon: prevent older/incompatible clients from mounting the file system
* Bug #15255: kclient: lost filesystem operations (on mds disconnect?)
* Bug #16640: libcephfs: Java bindings failing to load on CentOS
* Feature #17230: ceph_volume_client: py3 compatible
* Bug #19706: Laggy mon daemons causing MDS failover (symptom: failed to set counters on mds daemons: set(['mds.dir_split']))
* Feature #20598: mds: revisit LAZY_IO
* Feature #20611: MDSMonitor: do not show cluster health warnings for file system intentionally marked down
* Bug #21014: fs: reduce number of helper debug messages at level 5 for client
* Bug #21416: osd/PGLog.cc: 60: FAILED assert(s <= can_rollback_to) after upgrade to luminous
* Bug #21754: mds: src/osdc/Journaler.cc: 402: FAILED assert(!r)
* Bug #21848: client: re-expand admin_socket metavariables in child process
* Bug #22329: mon: Valgrind: mon (Leak_DefinitelyLost, Leak_IndirectlyLost)
* Bug #22624: filestore: 3180: FAILED assert(0 == "unexpected error"): error (2) No such file or directory not handled on operation 0x55e1ce80443c (21888.1.0, or op 0, counting from 0)
* Bug #22977: High CPU load caused by operations on onode_map
* Documentation #22989: doc: add documentation for MDS states
* Bug #23332: kclient: with fstab entry is not coming up reboot
* Feature #23362: mds: add drop_cache command
* Bug #23380: mds: ceph.dir.rctime follows dir ctime not inode ctime
* Bug #23519: mds: mds got laggy because of MDSBeacon stuck in mqueue
* Bug #23797: qa: cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
* Bug #23837: client: deleted inode's Bufferhead which was in STATE::Tx would lead a assert fail
* Bug #23958: mds: scrub doesn't always return JSON results
* Cleanup #24001: MDSMonitor: remove vestiges of `mds deactivate`
* Bug #24004: mds: curate priority of perf counters sent to mgr
* Bug #24028: CephFS flock() on a directory is broken
* Bug #24052: repeated eviction of idle client until some IO happens
* Bug #24053: qa: kernel_mount.py umount must handle timeout arg
* Bug #24054: kceph: umount on evicted client blocks forever
* Bug #24090: mds: fragmentation in QA is slowing down ops enough for WRNs
* Documentation #24093: doc: Update *remove a metadata server*
* Bug #24111: mds didn't update file's max_size
* Bug #24129: qa: test_version_splitting (tasks.cephfs.test_sessionmap.TestSessionMap) timeout not long enough
* Bug #24133: mds: broadcast quota to relevant clients when quota is explicitly set
* Bug #24137: client: segfault in trim_caps
* Bug #24138: qa: support picking a random distro using new teuthology $
* Bug #24172: client: fails to respond cap revoke from non-auth mds
* Bug #24173: ceph_volume_client: allow atomic update of RADOS objects
* Feature #24176: osd: add command to drop OSD cache
* Bug #24177: qa: fsstress workunit does not execute in parallel on same host without clobbering files
* Feature #24233: Add new command ceph mds status
* Bug #24236: cephfs-journal-tool: journal inspect reports DAMAGED for purge queue when it's empty
* Bug #24238: test gets ENOSPC from bluestore block device
* Bug #24239: cephfs-journal-tool: Importing a zero-length purge_queue journal breaks its integrity.
* Bug #24240: qa: 1 mutations had unexpected outcomes
* Feature #24262: remove ceph-disk
* Feature #24268: mgr/dashboard: support single-sign-on (SSO)
* Bug #24269: multimds pjd open test fails
* Feature #24270: mgr/dashboard: Add support for managing individual OSD settings/characteristics in the backend
* Feature #24271: mgr/dashboard: Add support for managing EC profiles
* Feature #24272: mgr/dashboard: Add support for moving RBD images into the trash
* Feature #24275: mgr/dashboard: Add support for managing RBD mirroring
* Bug #24284: cephfs: allow prohibiting user snapshots in CephFS
* Feature #24286: tools: create CephFS shell
* Bug #24306: mds: use intrusive_ptr to manage Message life-time
* Feature #24335: Get the user metadata of the user used to sign the request
* Bug #24342: Monitor's routed_requests leak
* Bug #24343: mds: root inode's snaprealm doesn't get journalled correctly
* Bug #24400: CephFS - All MDS went offline and required repair of filesystem
* Bug #24425: create iscsi gateway stop with "The first gateway defined must be the local machine"
* Feature #24429: fs: implement snapshot count limit by subtree
* Feature #24430: libcephfs: provide API to change umask
* Bug #24435: doc: incorrect snaprealm format upgrade process in mimic release note
* Feature #24436: mgr/dashboard: Replace RGW proxy controller
* Bug #24440: common/DecayCounter: set last_decay to current time when decoding decay counter
* Feature #24444: cephfs: make InodeStat, DirStat, LeaseStat versioned
* Feature #24447: mgr/dashboard: Add UI support for managing roles
* Feature #24455: mgr/dashboard: Add possibility to set config options in MON database
* Tasks #24460: Make notification and tasks look more human readable
* Feature #24465: client: allow client to leave state intact on MDS when tearing down objects
* Bug #24467: mds: low wrlock efficiency due to dirfrags traversal
* Bug #24491: client: _ll_drop_pins travel inode_map may access invalid 'next' iterator
* Bug #24517: "Loading libcephfs-jni: Failure!" in fs suite
* Bug #24518: "pjd.sh: line 7: cd: too many arguments" in fs suite
* Bug #24520: "[WRN] MDS health message (mds.0): 2 slow requests are blocked > 30 sec" in powercycle
* Bug #24522: blogbench.sh failed in upgrade:luminous-x-mimic-distro-basic-smithi
* Bug #24533: PurgeQueue sometimes ignores Journaler errors
* Bug #24548: mgr/dashboard: Link to documentation on RGW page
* Bug #24557: client: segmentation fault in handle_client_reply
* Feature #24571: mgr/dashboard: Move Cluster/Audit logs from front page to dedicated "Logs" page
* Bug #24579: client: returning garbage (?) for readdir
* Documentation #24580: doc: complete documentation for `ceph fs` administration commands
* Feature #24604: Implement "cephfs-journal-tool event splice" equivalent for purge queue
* Feature #24605: mgr/dashboard: Add flags information to configuration doc page
* Feature #24643: libcephfs: add ceph_futimens support
* Bug #24644: cephfs-journal-tool: wrong layout info used
* Feature #24651: mgr/dashboard: Improve SSL certificate import to no longer require a Mgr restart
* Bug #24664: osd: crash in OpTracker::unregister_inflight_op via OSD::get_health_metrics
* Bug #24665: qa: TestStrays.test_hardlink_reintegration fails self.assertTrue(self.get_backtrace_path(ino).startswith("stray"))
* Bug #24667: osd: SIGSEGV in MMgrReport::encode_payload
* Cleanup #24671: Move hardcoded colors in components to a separate file; move in-line styles in templates to .scss
* Bug #24677: mgr/dashboard: RGW proxy can't handle self-signed SSL certificates
* Bug #24680: qa: iogen.sh: line 7: cd: too many arguments
* Bug #24721: mds: accept an inode number in hex for dump_inode command
* Feature #24724: client: put instance/addr information in status asok command
* Feature #24727: mgr/dashboard: Cluster-wide OSD Flags modal should stay open when error occurs
* Bug #24731: Dashboard Error on mimic
* Bug #24753: doc: missing mimic in os-recommendations
* Documentation #24755: mgr/dashboard: Add Documentation about accessing the Dashboard's API documentation
* Feature #24763: mgr/dashboard: Automatic generation of REST API documentation based on Python docstrings
* Feature #24765: mgr/dashboard: Allow OSDs to be taken down / out in the OSD overview
* Feature #24776: mgr/dashboard: Add config option form to documentation page
* Subtask #24778: Create common Card component for info shown in landing page
* Bug #24780: Some cephfs tool commands silently operate on only rank 0, even if multiple ranks exist
* Bug #24819: "terminate called after throwing an instance of 'ceph::buffer::malformed_input'" in upgrade:luminous-x-master
* Cleanup #24820: overhead of g_conf->get_val("config name") is high
* Feature #24822: mgr/dashboard: Display logged in user
* Bug #24823: mds: deadlock when setting config value via admin socket
* Cleanup #24839: qa: move mds/client config to qa from teuthology ceph.conf.template
* Bug #24840: mds: explain delayed client_request due to subtree migration
* Bug #24849: client: statfs inode count odd
* Bug #24852: mds: dump MDSMap epoch to log at low debug
* Bug #24853: mds: dump recent (memory) log messages before respawning due to being removed from MDSMap
* Bug #24855: mds: reduce debugging for missing inodes during subtree migration
* Bug #24856: mds may get discontinuous mdsmap
* Bug #24858: qa: test_recovery_pool tries asok on wrong node
* Bug #24870: ceph-debug-docker: python3 libraries not installed in docker image
* Bug #24872: qa: client socket inaccessible without sudo
* Bug #24879: mds: create health warning if we detect metadata (journal) writes are slow
* Bug #24881: unhealthy heartbeat map during subtree migration
* Bug #24893: client: add ceph_ll_fallocate
* Bug #24897: client: writes rejected by quota may truncate/zero parts of file
* Bug #24899: qa: multifs requires 4 mds but gets only 2
* Bug #24902: Mimic Dashboard does not allow deletion of snapshots containing "+" in their name
* Bug #24918: Ubuntu: python3-cephfs missing dependency on python3-rados
* Bug #24919: rpm: missing dependency on python34-ceph-argparse from python34-cephfs (and others?)
* Bug #24920: teuthology is not installing python3-cephfs/python3-rados/etc. (Ubuntu) or python34-cephfs (CentOS) or (what else?)
* Documentation #24924: doc: typo in crush-map docs
* Bug #24925: qa: pjd test failed ctime change unexpected result
* Bug #24939: mgr/dashboard: Frontend unit tests fail
* Bug #24940: mgr/dashboard: How to interactively debug frontend unit tests
* Bug #24956: osd: parent process need to restart log service after fork, or ceph-osd will not work correctly when the option log_max_new in ceph.conf set to zero
* Feature #24962: Add Fault injection for watch/notify in RGW cache
* Bug #24963: Robustly Notify Cache
* Feature #24998: monitoring: Port and submit the ceph-metrics Grafana dashboards
* Feature #24999: mgr/dashboard: Embed Grafana Dashboards into the Mgr Dashboard UI
* Bug #25007: common: Cond.h:C_SaferCond does not check done before calling cond.WaitInterval, creating a race condition
* Bug #25008: "Health check failed: 1 MDSs report slow requests (MDS_SLOW_REQUEST)" in powercycle
* Feature #25013: mds: add average session age (uptime) perf counter
* Bug #25027: mon: src/msg/async/AsyncConnection.cc: 1710: FAILED assert(can_write == WriteStatus::NOWRITE)
* Bug #25068: mgr/dashboard: RGW is not working if an URL prefix is defined
* Cleanup #25075: mgr/dashboard: Use human readable units on the OSD I/O graphs
* Bug #25094: mgr/dashboard: Only list tasks that user is authorized to see
* Bug #25099: mds: don't use dispatch queue length to calculate mds load
* Bug #25107: common: (mon) command sanitization accepts floats when Int type is defined resulting in exception fault in ceph-mon
* Cleanup #25111: mds: use vector to manage Contexts rather than a list
* Bug #25113: mds: allows client to create ".." and "." dirents
* Feature #25131: mds: optimize the way how max export size is enforced
* Bug #25135: rgw: s3test failed to pass test_object_set_get_non_utf8_metadata
* Bug #25139: Task wrapper should not call notifyTask if a task fails
* Bug #25141: CephVolumeClient: delay required after adding data pool to MDSMap
* Bug #25153: output format is invalid of the crush tree json dumper
* Feature #25156: mgr/dashboard: Erasure code profile management
* Feature #25158: Add pool cache tiering details tab
* Feature #25188: mds: configurable timeout for client eviction
* Bug #25190: mgr/dashboard: RestClient can't handle ProtocolError exceptions
* Bug #25212: mgr/dashboard: Remove notifications for successful actions
* Bug #25213: handle ceph_ll_close on unmounted filesystem without crashing
* Bug #25215: mds: changing mds_cache_memory_limit causes boost::bad_get exception
* Bug #25228: mds: recovering mds receive export_cancel message
* Feature #25230: mgr/dashboard: Provide a full screen view for embedded grafana dashboard
* Feature #25231: mgr/dashboard: Provide ability to create a user by cloning another
* Bug #26834: mds: use self CPU usage to calculate load
* Bug #26852: cephfs-shell: add CMake directives to install the shell
* Feature #26853: cephfs-shell: add batch file processing
* Bug #26854: cephfs-shell: add support to set the ceph.conf file via command line argument
* Feature #26855: cephfs-shell: add support to execute commands from arguments
* Bug #26858: mds: reset heartbeat map at potential time-consuming places
* Bug #26860: client: requests that do name lookup may be sent to wrong mds
* Bug #26861: mgr/dashboard: Email addr is set to false when RGW user is modified
* Bug #26865: mds: src/mds/CInode.cc: 2330: FAILED assert(!"unmatched rstat" == g_conf()->mds_verify_scatter)
* Bug #26867: client: missing temporary file break rsync
* Feature #26872: mgr/dashboard: Add refresh interval to the dashboard landing page
* Bug #26874: cephfs-shell: unable to copy files to a sub-directory from local file system
* Bug #26894: mds: crash when dumping ops in flight
* Bug #26898: MDSMonitor: note ignored beacons/map changes at higher debug level
* Bug #26899: MDSMonitor: consider raising priority of MMDSBeacons from MDS so they are processed before other client messages
* Bug #26900: qa: reduce slow warnings arising due to limited testing hardware
* Feature #26925: cephfs-data-scan: print the max used ino
* Bug #26926: mds: migrate strays part by part when shutdown mds
* Bug #26959: mds: use monotonic clock for beacon message timekeeping
* Bug #26961: mds: fix instances of wrongly sending client messages outside of MDSRank::send*
* Bug #26962: mds: use monotonic clock for beacon sender thread waits
* Bug #26967: qa: kcephfs suite has kernel build failures
* Bug #26973: mds: MDBalancer::try_rebalance() may stop prematurely
* Feature #26974: mds: provide mechanism to allow new instance of an application to cancel old MDS session
* Feature #26975: Rados level IO priority for OSD operations
* Bug #26995: qa: specify distro/kernel matrix to test in kclient qa-suite
* Feature #27047: mgr/dashboard: Landing Page - Set visibility of cards depending on the user's role
* Feature #27049: mgr/dashboard: retrieve "Data Health" info from dashboard backend
* Feature #27050: mgr/dashboard: Landing Page Enhancements
* Bug #27051: client: cannot list out files created by another ceph-fuse client
* Documentation #27209: doc: document state of kernel client feature parity with ceph-fuse
* Bug #27657: mds: retry remounting in ceph-fuse on dcache invalidation
* Bug #34314: mgr/dashboard: Unable to add RBD image after selecting an existing one
* Feature #34315: mgr/dashboard: Display RGW user/bucket quota max size in human readable form
* Bug #34320: mgr/dashboard: Read/Write OPS in pool stats always show 0
* Bug #34528: mgr/dashboard: Disallow editing of read-only config options
* Feature #34530: mgr/dashboard: CdSubmitButton should be disabled if the form was not modified
* Cleanup #34533: mgr/dashboard: Enhance layout of the config options page
* Documentation #34539: doc: fixed hit set type link
* Bug #35250: mds: beacon spams is_laggy message
* Bug #35251: msg: "challenging authorizer" messages appear at debug_ms=0
* Feature #35448: mgr/dashboard: Add support for managing individual OSD settings/characteristics in the frontend
* Feature #35540: mgr/dashboard: Provide a simple way to throttle or increase the cluster's rebuild performance
* Bug #35546: RADOS: probably missing clone location for async_recovery_targets
* Feature #35684: mgr/dashboard: CRUSH map viewer/architectural overview
* Bug #35685: mgr/dashboard: Unable to edit user when making an accidental change to the password field
* Bug #35686: mgr/dashboard: Missing tooltip on the settings icon
* Cleanup #35688: mgr/dashboard: Community branding & styling recommendations
* Cleanup #35690: Proposed Masthead
* Cleanup #35691: mgr/dashboard: Proposed Landing Page
* Cleanup #35692: Proposed background color
* Bug #35720: evicting client session may block finisher thread
* Feature #35811: mgr/dashboard: Implement OSD purge
* Bug #35828: qa: RuntimeError: FSCID 10 has no rank 1
* Bug #35829: qa: workunits/fs/misc/acl.sh failure from unexpected system.posix_acl_default attribute
* Bug #35834: mgr/dashboard: Frontend timeouts when RGW takes too long to respond
* Bug #35848: MDSMonitor: lookup of gid in prepare_beacon that has been removed will cause exception
* Bug #35850: mds: runs out of file descriptors after several respawns
* Bug #35860: mon/OSDMonitor: cancel_report causes obsolete max_failed_since
* Feature #35903: mgr/dashboard: Add support for managing iSCSI targets
* Bug #35907: mgr/dashboard: Progress bar does not stop in TableKeyValueComponent
* Bug #35916: mds: rctime may go back
* Bug #35921: mgr/dashboard: Catch LookupError when checking the RGW status
* Bug #35945: client: update ctime when modifying file content
* Feature #35949: mgr/dashboard: Add pass-through MON-commands to REST API
* Bug #35961: nfs-ganesha: ceph_fsal_setattr2 returned Operation not permitted
* Bug #36028: "ceph fs add_data_pool" applies pool application metadata incorrectly
* Bug #36035: mds: MDCache.cc: 11673: abort()
* Bug #36040: mon: Valgrind: mon (InvalidFree, InvalidWrite, InvalidRead)
* Bug #36069: mgr/dashboard: Support http-only initialization
* Cleanup #36075: qa: remove knfs site from future releases
* Bug #36079: ceph-fuse: hang because it miss reconnect phase when hot standby mds switch occurs
* Bug #36093: mds: fix mds damaged due to unexpected journal length
* Bug #36103: ceph-fuse: add SELinux policy
* Bug #36109: mgr/dashboard: The RGW backend doesn't handle IPv6 properly
* Bug #36114: mds: internal op missing events time 'throttled', 'all_read', 'dispatched'
* Bug #36165: qa: Command failed on smithi189 with status 1: 'rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0 /home/ubuntu/cephtest/mnt.0/client.0/tmp'
* Bug #36167: msg: infinite recursion while reading messages
* Feature #36173: mgr/dashboard: Add a 'clear filter' button to configuration page
* Bug #36176: mgr/dashboard: Make backend API tests coverage work again
* Documentation #36180: doc: Typo error on cephfs/fuse/
* Bug #36182: osd: hung op "osd.3 22 get_health_metrics reporting 2 slow ops, oldest is osd_op(mds.0.6:55075 3.7s0 3:edaf1c25:::1000000129e.00000010:head [trimtrunc 850@0] snapc 1=[] ondisk+write+known_if_redirected+full_force e22)"
* Bug #36184: qa: add timeouts to workunits to bound test execution time in the event of crashes/bugs
* Bug #36188: mgr/dashboard: Extend TableActionsComponent by Separator
* Bug #36189: ceph-fuse client can't read or write due to backward cap_gen
* Bug #36190: mgr/dashboard: Improve error message when backend is unreachable
* Feature #36191: mgr/dashboard: Add support for managing RBD QoS
* Bug #36192: Internal fragment of ObjectCacher
* Feature #36193: mgr/dashboard: Audit REST API calls
* Feature #36194: mgr: Add ability to trigger a cluster/audit log message from Python
* Bug #36221: mds: rctime not set on system inode (root) at startup
* Feature #36241: mgr/dashboard: Add support for managing Prometheus alerts
* Bug #36252: cephfs: `ceph fs top` command
* Bug #36271: src/common/interval_map.h: 161: FAILED ceph_assert(len > 0)
* Documentation #36286: doc: fix broken fstab url in cephfs/fuse
* Bug #36320: mds: cache drop command requires timeout argument when it is supposed to be optional
* Bug #36325: mgr/dashboard: Performance counter progress bar keeps infinitely looping
* Bug #36335: qa: infinite timeout on asok command causes job to die
* Bug #36340: common: fix buffer advance length overflow to cause MDS crash
* Bug #36349: mds: src/mds/MDCache.cc: 1637: FAILED ceph_assert(follows >= realm->get_newest_seq())
* Bug #36350: mds: "src/mds/MDLog.cc: 281: FAILED ceph_assert(!capped)" during max_mds thrashing
* Feature #36352: client: explicitly show blacklisted state via asok status command
* Feature #36355: Pool management functionality
* Cleanup #36356: mgr/dashboard: Check if Grafana dashboards exist
* Feature #36357: Allow custom badges within badges component
* Bug #36360: Return error to subscriber of task-wrapper
* Bug #36362: Update PG update test
* Bug #36365: qa: increase rm timeout for workunit cleanup
* Bug #36366: luminous: qa: blogbench hang with two kclients and 3 active mds
* Bug #36367: mds: wait shorter intervals to send beacon if laggy
* Bug #36368: cephfs/tool: cephfs-shell have "no attribute 'decode'" err
* Bug #36369: kclient: wanted caps takes a long time to release
* Bug #36379: mds: cache drop command only requires read caps
* Cleanup #36380: mds: remove cap requirement on ceph tell commands
* Bug #36384: src/osdc/Journaler.cc: 420: FAILED ceph_assert(!r)
* Bug #36387: "workunit test fs/snaps/snaptest-git-ceph.sh) on smithi031 with status 128"
* Bug #36388: osd: "out of order op"
* Bug #36390: qa: teuthology may hang on diagnostic commands for fuse mount
* Bug #36394: mds: pending release note for state reclaim
* Bug #36395: mds: Documentation for the reclaim mechanism
* Bug #36399: mgr/status: fix fs status subcommand did not show standby-replay MDS' perf info
* Bug #36400: mgr/dashboard: Table column dropdown uses system style checkboxes
* Bug #36401: Redirect from '/block' to '/block/rbd'
* Bug #36403: mgr/dashboard: Can not delete RBD snapshot
* Feature #36413: make cephfs-data-scan reconstruct snaptable
* Bug #36445: Missing requirement "python-werkzeug" for running the dashboard API tests
* Bug #36450: qa: ceph_manager run call has wrong types of arguments
* Bug #36466: mgr/dashboard: Helper component misses left padding
* Bug #36468: mgr/dashboard: Table key component does not support class objects
* Feature #36488: mgr/dashboard: Provide way to "opt in" to enabling the telemetry mgr plugin
* Bug #36493: mds: remove MonClient reconnect when laggy
* Bug #36497: FAILED ceph_assert(can_write == WriteStatus::NOWRITE) in ProtocolV1::replace()
* Bug #36547: mds_beacon_grace and mds_beacon_interval should have a canonical setting
* Feature #36560: mgr/dashboard: Allow renaming an existing Pool
* Bug #36561: msg: big messenger performance regression
* Bug #36573: mds: ms_handle_authentication does not hold mds lock
* Feature #36585: allow nfs-ganesha to export named cephfs filesystems
* Bug #36594: qa: pjd test appears to require more than 3h timeout for some configurations
* Bug #36598: osd: "bluestore(/var/lib/ceph/osd/ceph-6) ENOENT on clone suggests osd bug"
* Bug #36599: msg/async: should read data out of socket after shutdown(), before close()
* Bug #36606: osd: checksum failure during upgrade test
* Feature #36609: Avoid sending duplicate concurrent "getattr", "lookup", "ceph_sync_read" requests
* Cleanup #36615: mgr/dashboard: Remove of interface methods in OSD list component
* Cleanup #36616: mgr/dashboard: Simplify OSD test that tests if certain actions are disabled in the template
* Bug #36632: mgr/dashboard: update Python dependency
* Bug #36633: mgr/dashboard: No module named jwt
* Bug #36651: ceph-volume-client: cannot set mode for cephfs volumes as required by OpenShift
* Feature #36655: mgr/dashboard: Support roles for Grafana dashboards
* Bug #36666: msg: rejoin message queued but not sent
* Bug #36668: client: request next osdmap for blacklisted client
* Bug #36669: client: displayed as the capacity of all OSDs when there are multiple data pools in the fS
* Bug #36674: mgr/dashboard: Enable compression for backend requests
* Feature #36675: mgr/dashboard: Provide API endpoint providing minimal health data
* Bug #36676: qa: wrong setting for msgr failures
* Cleanup #36680: Add custom timepicker
* Feature #36681: rgw: Return tenant field in bucket_stats function
* Bug #36703: MDS admin socket command `dump cache` with a very large cache will hang/kill the MDS
* Feature #36707: client: support getfattr ceph.dir.pin extended attribute
* Bug #36712: mgr/dashboard: Error message when accessing Cluster >> Hosts
* Feature #36721: mgr/dashboard: UI displays fired alert notifications from prometheus
* Feature #36723: mgr/dashboard: Show the graph of the alert expression
* Feature #36724: mgr/dashboard: Be able to silence more than one alert
* Bug #36740: mgr/dashboard: PG Stats, Pool usage and read/write ops missing from Pools table
* Feature #37085: add command to bring cluster down rapidly
* Subtask #37088: mgr/dashboard: Cluster menu E2E breadcrumb tests
* Subtask #37283: mgr/dashboard: improve info shown in mgr info card
* Bug #37290: mgr/dashboard: Failing QA test: test_safe_to_destroy
* Tasks #37291: mgr/dashboard: Add a command to easily test e2e test with an already running dev server
* Bug #37293: mgr/dashboard: 403 Forbidden error with Users with roles cephfs-manager, block-manager or pool-manager
* Subtask #37294: mgr/dashboard: Block menu E2E breadcrumb tests
* Bug #37295: mgr/dashboard: Block-Manager role does not allow listing existing RBD pools
* Bug #37333: fuse client can't read file due to can't acquire Fr
* Bug #37355: tasks.cephfs.test_volume_client fails with "ImportError: No module named 'ceph_argparse'"
* Bug #37368: mds: directories pinned keep being replicated back and forth between exporting mds and importing mds
* Bug #37371: mgr/dashboards: add permission check for showing "Logs" links in Landing Page cards' popovers
* Bug #37385: mgr/dashboard: Failure to run unit tests (on Fedora 29 with Python 3)
* Bug #37394: mds: PurgeQueue write error handler does not handle EBLACKLISTED
* Bug #37399: mds: severe internal fragment when decoding xattr_map from log event
* Bug #37401: mgr/dashboard: chart slice hiding is not remembered
* Feature #37406: mgr/dashboard: Show config options description in "OSD Recovery Priority" dialog
* Bug #37431: msg/async: crash in the case STATE_OPEN_MESSAGE_READ_DATA
* Bug #37464: race of updating wanted caps
* Bug #37467: ceph-volume: RuntimeError: dictionary changed size during iteration
* Documentation #37468: mgr/dashboard: Document custom RESTController endpoints
* Tasks #37469: mgr/dashboard: Notification queue
* Bug #37470: ceph-volume: Python3 - name 'raw_input' is not defined
* Tasks #37471: mgr/dashboard: Settings Service
* Bug #37499: AsyncConnection stuck
* Bug #37505: mgr/dashboard: Using @cdEncodeNot raises runtime error
* Bug #37516: mds: remove duplicated l_mdc_num_strays perfcounter set
* Feature #37530: mgr/dashboard: Feature toggles
* Bug #37534: mgr/dashboard: It's not possible to rename an existing pool via the UI
* Bug #37543: mds: purge queue recovery hangs during boot if PQ journal is damaged
* Bug #37546: client: do not move f->pos until success write
* Bug #37547: client: fix failure in quota size limitation when using samba
* Bug #37566: mds: do not call Journaler::_trim twice
* Bug #37567: mds: fix incorrect l_pq_executing_ops statistics when meet an invalid item in purge queue
* Bug #37568: CephFS remove snapshot result in slow ops
* Bug #37573: fs status command broken in py3-only environments
* Bug #37593: ec pool lost data due to snap clone
* Bug #37594: mds: mds state change race
* Cleanup #37619: mgr/dashboard: Cleanup of summary refresh test
* Bug #37620: rbd-mirror and radosgw packages should require ceph-base
* Bug #37639: mds: output client IP of blacklisted/evicted clients to cluster log
* Bug #37644: extend reconnect period when mds is busy
* Bug #37652: bluestore: "fsck warning: legacy statfs record found, suggest to run store repair to get consistent statistic reports"
* Bug #37657: Command failed on smithi075 with status 1: 'sudo yum install -y kernel'
* Subtask #37667: mgr/dashboard: Pools menu E2E breadcrumb and tab tests
* Bug #37670: standby-replay MDS spews message to log every second
* Cleanup #37674: mds: create separate config for heartbeat timeout
* Feature #37678: mds: log new client sessions with various metadata
* Feature #37683: mgr/dashboard: "OSD Recovery Priority" fallback values
* Bug #37721: mds crashes frequently when using snapshots in CephFS on mimic
* Feature #37722: mgr: Allow modules to get/set other module options
* Bug #37723: mds: stopping MDS with a large cache (40+GB) causes it to miss heartbeats
* Bug #37724: MDSMonitor: ignores stopping MDS that was formerly laggy
* Bug #37767: librgw crash due to local variables deallocated
* Bug #37787: BUILDSTDERR: *** ERROR: ambiguous python shebang in /usr/sbin/mount.fuse.ceph: #!/usr/bin/env python. Change it to python3 (or python2) explicitly.
* Feature #37794: mgr/dashboard: CRUSH map viewer RFE
* Bug #37807: osd: valgrind catches InvalidRead
* Bug #37809: mgr/dashboard: dashboard shows 500 error if grafana is not configured
* Bug #37836: qa: test_damage performs truncate test on same object repeatedly
* Bug #37837: qa: test_damage expectations wrong for Truncate on some objects
* Bug #37841: mgr/dashboard: RbdMirroringService test suite fails in dev mode
* Bug #37843: mgr/dashboard: opening a config option in the editor is failing
* Bug #37854: Cluster Hosts List - Performance Details context not changing based on Hosts List row selection
* Bug #37859: mgr/dashboard: Render all objects in KV-table
* Feature #37860: mgr/dashboard: Hide empty fields in KV-table
* Bug #37862: mgr/dashboard: Confusing tilted time stamps in the CephFS performance graph
* Cleanup #37864: client: use Message smart ptr to manage Message lifetime
* Bug #37867: mgr/dashboard: incorporate RBD prometheus stats in grafana dashboard
* Bug #37881: osd/OSDMap: calc_pg_upmaps - potential access violation
* Bug #37914: bluestore: segmentation fault
* Bug #37915: osd: Segmentation fault in OpRequest::_unregistered
* Cleanup #37916: mgr/dashboard: Inconsistent formatting of the cluster and audit logs
* Bug #37917: SSO: Raw error message shown when logged in with non existent ceph user
* Bug #37929: MDSMonitor: missing osdmon writeable check
* Bug #37932: Throttle.cc: 194: FAILED assert(c >= 0) due to invalid ceph_osd_op union
* Feature #37934: mgr/dashboard: Configure all mgr modules in UI
* Bug #37944: qa: test_damage needs to silence MDS_READ_ONLY
* Cleanup #37954: ceph: cleanup status output for CephFS file systems, especially for multifs
* Bug #37955: ceph.in: "no valid command found" suggests obsolete commands
* Bug #37956: qa/workunits/cephtool/test.sh:1124: test_mon_mds: ceph mds stat fails
* Bug #37964: mgr/dashboard: Test failure: test_invalid_user_id (tasks.mgr.dashboard.test_rgw.RgwApiCredentialsTest)
* Bug #38004: mgr/dashboard: Render error in pool edit dialog
* Bug #38009: client: session flush does not cause cap release message flush
* Bug #38010: mds: cache drop should trim cache before flushing journal
* Bug #38020: mds: remove cache drop admin socket command
* Feature #38022: mds: provide a limit for the maximum number of caps a client may have
* Bug #38043: mds: optimize revoking stale caps
* Bug #38047: mgr: segfault, dashboard, PyModule::get_typed_option_value > PyErr_Restore
* Subtask #38050: mgr/dashboard: Additional Cluster menu E2E breadcrumb and tab tests
* Bug #38054: mds: broadcast quota message to client when disable quota
* Bug #38055: rgw: GET/HEAD and PUT operations on buckets w/lifecycle expiration configured do not return x-amz-expiration header
* Cleanup #38086: mgr/dashboard: Mobile friendly navigation
* Bug #38087: mds: blacklists all clients on eviction
* Bug #38113: mgr: segfault, mgr-fin, ActivePyModule::config_notify() > PyObject_Realloc > PyObject_Malloc: "mgr notify AttributeError: 'Module' object has no attribute 'True'"
* Bug #38114: mgr: segfault, mgr-fin, ActivePyModule::notify > PyObject_Malloc
* Bug #38122: Error ceph fs status
* Bug #38128: msgr: unexpected "handle_cephx_auth got bad authorizer, auth_reply_len=0"
* Bug #38134: rgw: `radosgw-admin bucket rm ... --purge-objects` can hang...
* Bug #38137: mds: may leak gather during cache drop * Bug #38144: nautilus: 14.0.1 build fails in fedora rawhide mass rebuild w/ gcc/g++ 9 * Subtask #38149: mgr/dashboard: Block menu E2E tab tests * Bug #38158: mgr: segfault in PGMapDigest::print_summary() * Feature #38171: rgw: when exclusive lock fails due existing lock, log add'l info * Bug #38223: mgr/dashboard should use the orchestrator_cli's backend config to know the orchestrator * Bug #38252: librbd: leak in Journal::start_append() * Bug #38254: mgr/orch: doc: Fix Ceph docs build error: failed to import method u'Orchestrator.remote_host' * Bug #38263: mds: fix potential re-evaluate stray dentry in _unlink_local_finish * Bug #38267: mgr/dashboard: Module dashboard.services.ganesha has several lint issues * Bug #38270: kcephfs TestClientLimits.test_client_pin fails with "client caps fell below min" * Bug #38285: MDCache::finish_snaprealm_reconnect() create and drop MClientSnap message * Bug #38290: py2/py3 incompatibilities with set() * Bug #38291: c-v raises TypeError: unsupported format string passed to Size.__format__ * Bug #38297: ceph-mgr/volumes: fs subvolumes not created within specified fs volumes * Bug #38299: py2/py3 incompatibilities in util/templates.py * Bug #38313: Toast notifications hiding part of utility menu * Bug #38321: doc: mgr: align Ceph-Mgr module vs. 
plug-in naming * Bug #38324: mds: decoded LogEvent may leak during shutdown * Bug #38329: OSD crashes in get_str_map while creating with ceph-volume * Subtask #38343: mgr/dashboard: Filesystem menu E2E breadcrumb tests * Bug #38347: ceph.in: cephfs status line may not show active ranks when standby-replay present * Bug #38348: mds: drop cache does not timeout as expected * Bug #38380: tasks.mgr.test_module_selftest.TestModuleSelftest fails, dashboard/server_port not string * Bug #38382: mgr/dashboard: Placement group counter skips values when incrementing it via the cursor keys * Bug #38384: ceph_mon --help mon broken * Bug #38408: rgw: marker is not advanced during garbage collection * Bug #38425: mon: segmentation fault in AuthMonitor::create_pending * Bug #38454: rgw: gc entries with zero-length chains are not cleaned up * Feature #38462: Store comments to config options stored in monitors (i.e. ceph config dump) * Bug #38472: ceph-volume lvm batch raises KeyError when wal-devices is defined * Bug #38473: Except ceph-volume lvm batch to handle explicit --wal-devices and --db-devices * Documentation #38481: doc: Replace "plugin" with "module" in the Mgr docs * Bug #38486: rgw: abort multipart in lifecycle when enable index shard * Bug #38487: qa: "Loading libcephfs-jni: Failure!" * Bug #38491: "log [WRN] : Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" * Bug #38493: msg/async: connection race + winner fault can leave connection stuck at replacing forever * Bug #38495: mgr/dashboard: API view is showing 'No API definition provided.' 
* Bug #38518: qa: tasks.cephfs.test_misc.TestMisc.test_fs_new hangs because clients cannot unmount * Bug #38520: qa: fsstress with valgrind may timeout * Bug #38524: AsyncConnection: segmentation fault * Bug #38528: mgr/dashboard: Unable to disable SSL support * Bug #38560: mgr: get_localized_module_option function is broken * Bug #38597: fs: "log [WRN] : failed to reconnect caps for missing inodes" * Bug #38601: the dashboard iscsi throw ERROR 500 * Bug #38620: ceph-volume raises 'local variable db_vg' referenced before assignment * Bug #38634: mgr/dashboard create new iscsi target disk failed. * Bug #38651: qa: powercycle suite reports MDS_SLOW_METADATA_IO * Bug #38657: mgr/dashboard: "client recovery" in landing page should be just "recovery" * Bug #38676: qa: src/common/Thread.cc: 157: FAILED ceph_assert(ret == 0) * Bug #38677: qa: kclient unmount hangs after file system goes down * Bug #38702: test_selftest_cluster_log (tasks.mgr.test_module_selftest.TestModuleSelftest) fails in vstart * Bug #38704: qa: "[WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN) in cluster log" * Bug #38723: qa: tolerate longer heartbeat timeouts when using valgrind * Documentation #38728: doc: scrub administration docs need updated * Bug #38757: mgr/ssh orchestrator doesn't work * Bug #38767: upgrading ceph packages always triggers a service restart * Bug #38769: rgw: nfs: librgw/NFS fails due to missing service setup * Bug #38791: the response of rgw api content-type wrong * Bug #38941: Error when enabling mgr module 'restful'