# v14.2.23

* Backport #51129: nautilus: In poweroff conditions BlueFS can create corrupted files
* Backport #51315: nautilus: osd: scrub skips some PGs
* Backport #51493: nautilus: pacific: pybind/ceph_volume_client: stat on empty string
* Backport #51583: nautilus: osd does not proactively remove leftover PGs
* Bug #51637: mgr/insights: mgr consumes excessive amounts of memory
* Backport #51648: nautilus: BlueStore repair might erroneously remove SharedBlob entries
* Backport #51680: nautilus: Potential race condition in robust notify
* Backport #51770: nautilus: ceph.spec: drop use of DISABLE_RESTART_ON_UPDATE (SUSE specific)
* Backport #51950: nautilus: insights module can generate too much data and fail to store it in config-key
* Backport #51966: nautilus: set a non-zero default value for osd_client_message_cap
* Bug #52340: ceph-volume: lvm activate: "tags" not defined
* Backport #52987: nautilus: mgr/dashboard/api: set a UTF-8 locale when running pip
* Bug #53136: The capacity used by the Ceph cache tier pool exceeds target_max_bytes
* Bug #53431: When using radosgw-admin to create a user with an empty uid, the error message is unhelpful
* Feature #53455: [RFE] Ill-formatted JSON response from RGW
* Bug #53668: Why not add an xxx.retry object to multisite metadata synchronization for exception retries
* Bug #53708: ceph multisite sync of deleted unversioned objects fails
* Bug #53745: crash on null coroutine under RGWDataSyncShardCR::stop_spawned_services
* Bug #54027: Building a file system on an iSCSI disk backed by RBD takes a long time
* Bug #54189: multisite: metadata sync will skip first child of pos_to_prev
* Bug #54254: the RGW admin trim usage operation trims usage even when its remove-all parameter is set to false
* Bug #54421: mds: assertion failure in Server::_dir_is_nonempty() because xlocker of filelock is -1
* Bug #54548: mon hangs when running the "ceph -s" command after executing the "ceph osd in osd." command
* Bug #54556: Pools are wrongly reported to have non-power-of-two pg_num after update
* Support #54621: request for help with an offline ceph mds
* Bug #55131: radosgw crashes at RGWIndexCompletionManager::create_completion
* Feature #55166: disable bucket deletion in RGW
* Bug #55424: ceph-mon process exits in a dead state; the backtrace shows it blocked by compact_queue_thread
* Bug #55461: ceph osd crush swap-bucket {old_host} {new_host} where {old_host}={new_host} crashes monitors
* Bug #56467: nautilus: osd crashes with _do_alloc_write failed with (28) No space left on device
* Bug #56554: rgw::IAM::s3GetObjectTorrent never takes effect
* Bug #57221: ceph warn (important)
* Bug #58105: `DeleteBucketPolicy` cannot delete a policy in the slave zonegroup
* Bug #58721: rgw_rename leads to a librgw.so segmentation fault
* Support #61596: how to securely delete an RBD image
* Bug #61817: Ceph Swift error: creating a container returns 404
* Bug #63337: monmap's features are sometimes 0
* Bug #63429: librbd: mirror snapshot removal removes the same snap_id twice
* Bug #63804: mgr/restful module /request with body '{"prefix": "pg dump", "format": "json"}' fails with "access denied"