# v14.2.12

* Backport #45954: nautilus: rgw: fail as expected when get/set-bucket-versioning attempted on a non-existent bucket
* Backport #46088: nautilus: [prometheus] auto-configure RBD metric exports for all RBD pools
* Backport #46096: nautilus: Issue health status warning if num_shards_repaired exceeds some threshold
* Backport #46113: nautilus: Report wrong rejected reason in inventory subcommand if device type is invalid
* Backport #46151: nautilus: test_scrub_pause_and_resume (tasks.cephfs.test_scrub_checks.TestScrubControls) fails intermittently
* Backport #46262: nautilus: larger osd_scrub_max_preemptions values cause Floating point exception
* Backport #46321: nautilus: profile rbd does not allow the use of RBD_INFO
* Backport #46461: nautilus: pybind/mgr/balancer: should use "==" and "!=" for comparing strings
* Backport #46519: nautilus: boost::asio::async_write() does not return an error when the remote endpoint is not connected
* Backport #46587: nautilus: The default value of osd_scrub_during_recovery is false since v11.1.1
* Backport #46592: nautilus: ceph-fuse: ceph-fuse process is terminated by the logrotate task and, more seriously, leaves behind a process in uninterruptible sleep
* Backport #46594: nautilus: [notifications] reading topic info for every op overloads the osd
* Backport #46633: nautilus: mds forwarding request 'no_available_op_found'
* Backport #46638: nautilus: [iscsi-target-cli page]: add systemctl commands for enabling and starting rbd-target-gw in addition to rbd-target-api
* Backport #46710: nautilus: Negative peer_num_objects crashes osd
* Backport #46714: nautilus: Rescue procedure for extremely large bluefs log
* Backport #46716: nautilus: Module 'diskprediction_local' has failed: Expected 2D array, got 1D array instead
* Backport #46720: nautilus: [librbd] assert at Notifier::notify's aio_notify_locker
* Backport #46725: nautilus: ceph-iscsi: selinux avc denial on rbd-target-api from ioctl access
* Backport #46738: nautilus: mon: expected_num_objects warning triggers on bluestore-only setups
* Backport #46752: nautilus: FAIL: test_pool_update_metadata (tasks.mgr.dashboard.test_pool.PoolTest)
* Backport #46784: nautilus: mds/CInode: Optimize only pinned by subtrees check
* Backport #46787: nautilus: client: in _open() the open ref may be decremented twice but incremented only once
* Backport #46796: nautilus: mds: Subvolume snapshot directory does not save attribute "ceph.quota.max_bytes" of snapshot source directory tree
* Backport #46799: nautilus: The append operation will trigger the garbage collection mechanism
* Backport #46821: nautilus: pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes independent of the source subvolume
* Backport #46913: nautilus: Fix API test timeout issues
* Backport #46925: nautilus: mgr/dashboard: Unable to edit iSCSI logged-in client
* Backport #46930: nautilus: rgw: http requests state should be set before unlink
* Backport #46932: nautilus: librados: add LIBRBD_SUPPORTS_GETADDRS support
* Backport #46935: nautilus: "No such file or directory" when exporting or importing a pool if a locator key is specified
* Backport #46937: nautilus: prometheus stats reporting fails with "KeyError"
* Backport #46939: nautilus: UnboundLocalError: local variable 'ragweed_repo' referenced before assignment
* Backport #46941: nautilus: mds: memory leak during cache drop
* Backport #46943: nautilus: mds: segv in MDCache::wait_for_uncommitted_fragments
* Backport #46946: nautilus: Global and pool-level config overrides require image refresh to apply
* Backport #46948: nautilus: qa: Fs cleanup fails with a traceback
* Backport #46950: nautilus: OLH entries pending removal get mistakenly resharded to shard 0
* Backport #46952: nautilus: nautilus client may hunt for a mon for a very long time if msg v2 is not enabled on mons
* Backport #46954: nautilus: invalid principal arn in bucket policy grants access to all
* Backport #46956: nautilus: multisite: RGWAsyncReadMDLogEntries crash on shutdown
* Backport #46960: nautilus: cephfs-journal-tool: incorrect read_offset after finding missing objects
* Backport #46965: nautilus: Pool stats increase after PG merged (PGMap::apply_incremental doesn't subtract stats correctly)
* Backport #46967: nautilus: rgw: GETing S3 website root with two slashes // crashes rgw
* Backport #46973: nautilus: mgr/dashboard: Hide table action input field if limit=0
* Backport #46981: nautilus: pybind/mgr/restful: use dict.items() for py3 compatibility
* Backport #46983: nautilus: make check: unittest_rbd_mirror (Child aborted): failed, despite all tests passing
* Backport #47000: nautilus: mgr/dashboard/api: reduce verbosity in API tests log output
* Backport #47013: nautilus: librados|libcephfs: use latest MonMap when creating from CephContext
* Backport #47017: nautilus: mds: kcephfs parse dirfrag's ndist is always 0
* Backport #47023: nautilus: rbd_write_zeroes()
* Backport #47042: nautilus: add access log line to the beast frontend
* Backport #47056: nautilus: Decrease log level for bucket resharding
* Backport #47058: nautilus: mgr/volumes: Clone operation uses source subvolume root directory mode and uid/gid values for the clone, instead of sourcing them from the snapshot
* Backport #47070: nautilus: os/bluestore: dump onode that has too many spanning blobs
* Backport #47081: nautilus: mds: decoding of enum types on big-endian systems broken
* Backport #47088: nautilus: mds: recover files after normal session close
* Backport #47090: nautilus: After restarting an mds, its standby-replay mds remained in the "resolve" state
* Backport #47092: nautilus: mon: stuck osd_pgtemp message forwards
* Backport #47096: nautilus: mds: provide alternatives to increase the total cephfs subvolume snapshot count to greater than the current 400 across a CephFS volume
* Backport #47100: nautilus: [migration] using abort can result in the loss of data
* Backport #47115: nautilus: rgw: hold reloader using unique_ptr
* Backport #47122: nautilus: mgr/dashboard: replace endpoint of "This week" time range for Grafana in dashboard
* Backport #47152: nautilus: pybind/mgr/volumes: add debugging for global lock
* Backport #47157: nautilus: mgr/volumes: Mark subvolumes with ceph.dir.subvolume vxattr, to improve snapshot scalability of subvolumes
* Backport #47178: nautilus: qa: after the cephfs qa test case quits, the mountpoints still exist
* Backport #47186: nautilus: rgw: RGWLifecycleConfiguration::dump() cannot dump transitions
* Backport #47193: nautilus: mgr/dashboard: telemetry module throws error "list index out of range"
* Backport #47194: nautilus: Default value for 'bluestore_volume_selection_policy' is wrong
* Backport #47213: nautilus: BlueFS volume selector assert
* Backport #47228: nautilus: mgr/dashboard: document Prometheus' security model
* Backport #47244: nautilus: Add bucket name to bucket stats error logging
* Backport #47246: nautilus: qa: "Replacing daemon mds.a as rank 0 with standby daemon mds.b" in cluster log
* Backport #47250: nautilus: add ability to clean_temps in osdmaptool
* Backport #47252: nautilus: mds: fix possible crash when the MDS is stopping
* Backport #47254: nautilus: client: Client::open() passes wrong cap mask to path_walk
* Backport #47257: nautilus: Add pg count for pools in the `ceph df` command
* Backport #47259: nautilus: client: FAILED assert(dir->readdir_cache[dirp->cache_index] == dn)
* Backport #47281: nautilus: Prometheus metrics contain stripped/incomplete ipv6 address
* Backport #47283: nautilus: journal size can't be overridden with --journal-size when using --journal-devices in lvm batch mode
* Backport #47296: nautilus: osdmaps aren't being cleaned up automatically on a healthy cluster
* Backport #47303: nautilus: mgr/dashboard: REST API returns 500 when no Content-Type is specified
* Backport #47315: nautilus: 'request failed: (13) Permission denied' from radosgw-admin period pull with --remote
* Backport #47317: nautilus: mds: CDir::_omap_commit(int): Assertion `committed_version == 0' failed.
* Backport #47318: nautilus: rgw: lifecycle: Days cannot be 0 for Expiration rules
* Backport #47320: nautilus: RGWObjVersionTracker does not track version over increments
* Backport #47322: nautilus: rgw: v4 signature does not match when listing objects with delimiter=" "
* Backport #47345: nautilus: mon/mon-last-epoch-clean.sh failure
* Backport #47347: nautilus: RGW returns 404 code for unauthorized instead of 401
* Backport #47350: nautilus: include/encoding: Fix encode/decode of float types on big-endian systems
* Backport #47362: nautilus: pgs inconsistent, union_shard_errors=missing
* Backport #47411: nautilus: daemon may be missing in mgr service map
* Backport #47413: nautilus: rgw: create bucket via swift returns 403
* Backport #47417: nautilus: [udev] include image namespace in symlink path
* Backport #47425: nautilus: compressor: Make Zstandard compression level a configurable option
* Bug #47435: nautilus: mgr/dashboard: Monitoring - All Alerts: The alerts still seem to be loading
* Backport #47459: nautilus: [test] rbd_snaps_ops will fail attempting to create pool
* Bug #47487: rgw: ordered bucket listing code clean-up
* Backport #47504: nautilus: fix simple activate when legacy osd
* Backport #47521: nautilus: unrecognised rocksdb_option crashes osd process while starting the osd
* Backport #47532: nautilus: /usr/bin/ceph IOError exception from stdout.flush
* Documentation #47535: SPDK deployment ceph osd documentation
* Backport #47538: nautilus: mgr/dashboard: read-only modals
* Backport #47544: nautilus: add missing device health dependencies to rpm and deb
* Backport #47546: nautilus: mgr/dashboard: many-to-many matching not allowed: matching labels must be unique on one side
* Backport #47558: nautilus: mgr/dashboard: It's currently not possible to edit some parts of an iSCSI target when a user is connected
* Backport #47570: nautilus: mgr/dashboard: table detail rows overflow
* Backport #47573: nautilus: mgr/dashboard: cpu stats incorrectly displayed
* Backport #47575: nautilus: krbd: optionally skip waiting for udev events
* Backport #47577: nautilus: systemd: Support Graceful Reboot for AIO Node
* Backport #47579: nautilus: mgr/dashboard: fix usage calculation to match "ceph df" way
* Backport #47600: nautilus: qa/standalone/mon/mon-handle-forward.sh failure
* Backport #47605: nautilus: mds: purge_queue's _calculate_ops is inaccurate
* Backport #47618: nautilus: mgr/dashboard: fix performance issue when listing large amounts of buckets
* Backport #47622: nautilus: various quota failures
* Backport #47640: nautilus: rbd: make common options override krbd-specific options
* Bug #47642: nautilus: qa/suites/{kcephfs, multimds}: client kernel "testing" builds for CentOS 7 are no longer available
* Backport #47650: nautilus: ceph-volume lvm batch race condition
* Support #47667: CEPH OS Not starting
* Backport #47686: nautilus: rgw: FAIL: test_all (tasks.mgr.dashboard.test_rgw.RgwBucketTest)
* Backport #47717: nautilus: mgr/dashboard: Pool rename edit form does not return but the pool gets renamed
* Backport #47737: nautilus: mgr/status: metadata is fetched async
* Backport #47753: nautilus: mgr/dashboard: current frontend build workflow can cause e2e failures
* Bug #47831: ceph-volume rejects md-devices [rejected reason: Insufficient space <5GB]
* Bug #47952: Replicated pool creation fails in the Nautilus 14.2.12 build when the cluster runs with filestore OSDs
* Bug #48001: Broken Swift API anonymous access
* Bug #48021: severe rbd performance degradation with long-running write-heavy workload
* Bug #48025: osd startup fails when the osd superblock crc check fails
* Bug #48241: librgw double read
* Bug #48443: rocksdb: Corruption: missing start of fragmented record(2)