# v14.2.11

* Backport #40407: nautilus: mgr/dashboard: Add backend tests for RBD configuration
* Backport #44470: nautilus: rgw: cls_bucket_list_(un)ordered should clear results collection
* Backport #44487: nautilus: pybind/mgr/volumes: add upgrade testing
* Backport #44506: nautilus: mgr/telemetry: force --license when sending while opted-out
* Backport #44696: nautilus: mgr/dashboard: add popover list of Stand-by Managers & Metadata Servers (MDS) in landing page
* Backport #44722: nautilus: rgw: add `rgw-orphan-list` tool & `radosgw-admin bucket radoslist ...`
* Backport #45228: nautilus: mgr/alert: can't set inventory_cache_timeout/service_cache_timeout from CLI
* Backport #45480: nautilus: Ceph v12.2.13 causes extreme high number of blocked operations
* Backport #45488: nautilus: rgw: deprecate radosgw-admin orphans sub-commands
* Backport #45558: nautilus: mgr/dashboard: Dashboard breaks on the selection of a bad pool
* Backport #45677: nautilus: rados/test_envlibrados_for_rocksdb.sh fails on Xenial (seen in nautilus)
* Backport #45684: nautilus: Large (>=2 GB) writes are incomplete when bluefs_buffered_io = true
* Backport #45709: nautilus: mds: wrong link count under certain circumstance
* Backport #45774: nautilus: vstart_runner: LocalFuseMount.mount should set set.mounted to True
* Backport #45776: nautilus: build_incremental_map_msg missing incremental map while snaptrim or backfilling
* Backport #45778: nautilus: notification: amqp with vhost and user/password is failing
* Backport #45800: nautilus: Blacklist leads to potential rewatch live-lock loop
* Backport #45837: nautilus: Monitoring: legends of throughput panel in RBD detail dashboard are not correct
* Backport #45839: nautilus: mds may start to fragment dirfrag before rollback finishes
* Backport #45843: nautilus: ceph-fuse: the -d option couldn't enable the debug mode in libfuse
* Backport #45847: nautilus: qa/fuse_mount.py: tests crash when /sys/fs/fuse/connections is absent
* Backport #45852: nautilus: mds: scrub on directory with recently created files may fail to load backtraces and report damage
* Backport #45854: nautilus: cephfs-journal-tool: NetHandler create_socket couldn't create socket
* Backport #45883: nautilus: osd-scrub-repair.sh: SyntaxError: invalid syntax
* Backport #45887: nautilus: client: fails to reconnect to MDS
* Backport #45890: nautilus: osd: pg stuck in waitactingchange when new acting set doesn't change
* Backport #45898: nautilus: mds: add config to require forward to auth MDS
* Backport #45923: nautilus: radsgw-admin bucket list/stats does not list/stat all buckets if user owns more than 1000 buckets
* Backport #45925: nautilus: Bucket quota not check in copy operation
* Backport #45927: nautilus: rgw/ swift stat can hang
* Backport #45930: nautilus: Add support wildcard subuser on bucket policy
* Backport #45932: nautilus: Add user identity to OPA request
* Backport #45945: nautilus: mgr/dashboard: backend test failure on tasks.mgr.dashboard.test_health.HealthTest
* Backport #46002: nautilus: pybind/mgr/volumes: add command to return metadata regarding a subvolume snapshot
* Backport #46004: nautilus: rgw: bucket index entries marked rgw.none not accounted for correctly during reshard
* Backport #46012: nautilus: qa: commit 9f6c764f10f break qa code in several places
* Backport #46017: nautilus: ceph_test_rados_watch_notify hang
* Backport #46019: nautilus: mgr/dashboard/rbd: throws 500s with format 1 RBD images
* Backport #46090: nautilus: PG merge: FAILED ceph_assert(info.history.same_interval_since != 0)
* Backport #46116: nautilus: Add statfs output to ceph-objectstore-tool
* Bug #46119: nautilus: s3-hadoop fails with hadoop 2.7.3
* Backport #46122: nautilus: mgr/k8sevents backport to sanitise the data coming from kubernetes
* Backport #46164: nautilus: osd: make message cap option usable again
* Backport #46172: nautilus: mgr/prometheus: cache ineffective when gathering data takes longer than 5 seconds
* Backport #46184: nautilus: ceph config show does not display fsid correctly
* Backport #46187: nautilus: client: fix snap directory atime
* Backport #46189: nautilus: mds: EMetablob replay too long will cause mds restart
* Backport #46191: nautilus: mds: cap revoking requests didn't success when the client doing reconnection
* Backport #46198: nautilus: mgr/dashboard: the RBD configuration table has incorrect values in source column in non-default locales
* Backport #46200: nautilus: qa: "[WRN] evicting unresponsive client smithi131:z (6314), after 304.461 seconds"
* Backport #46228: nautilus: Ceph Monitor heartbeat grace period does not reset.
* Backport #46235: nautilus: pybind/mgr/volumes: volume deletion not always removes the associated osd pools
* Backport #46250: nautilus: add encryption support to raw mode
* Backport #46310: nautilus: qa/tasks/cephfs/test_snapshots.py: Command failed with status 1: ['cd', '|/usr/libexec', ...]
* Backport #46312: nautilus: mgr/dashboard: Prometheus query error while filtering values in the metrics of Pools and OSDs
* Backport #46326: nautilus: [rgw] listing bucket via s3 hangs on "ordered bucket listing requires read #1"
* Backport #46344: nautilus: rgw: orphan-list timestamp fix
* Backport #46388: nautilus: pybind/mgr/volumes: cleanup stale connection hang
* Backport #46393: nautilus: FAIL: test_pool_update_metadata (tasks.mgr.dashboard.test_pool.PoolTest)
* Backport #46409: nautilus: client: supplying ceph_fsetxattr with no value unsets xattr
* Bug #46424: [RGW]: avc denial observed for pid=13757 comm="radosgw" on starting RabbitMQ at port 5672
* Backport #46435: nautilus: mgr/dashboard: Unable to edit iSCSI target which has active session
* Bug #46439: librgw: fhcache deadlock
* Backport #46458: nautilus: [RGW]: avc denial observed for pid=13757 comm="radosgw" on starting RabbitMQ at port 5672
* Backport #46464: nautilus: mgr/volumes: fs subvolume clones stuck in progress when libcephfs hits certain errors
* Backport #46466: nautilus: pybind/mgr/volumes: get_pool_names may indicate volume does not exist if multiple volumes exist
* Backport #46468: nautilus: rgw: radoslist incomplete multipart uploads fix marker progression
* Backport #46470: nautilus: client: release the client_lock before copying data in read
* Backport #46472: nautilus: crash on realm reload during shutdown
* Backport #46474: nautilus: mds: make threshold for MDS_TRIM warning configurable
* Backport #46476: nautilus: aws iam get-role-policy doesn't work
* Backport #46478: nautilus: pybind/mgr/volumes: volume deletion should check mon_allow_pool_delete
* Backport #46512: nautilus: rgw: lc: Segmentation Fault when the tag of the object was not found in the rule
* Backport #46515: nautilus: mgr progress module causes needless load
* Backport #46517: nautilus: client: directory inode can not call release_callback
* Backport #46521: nautilus: mds: deleting a large number of files in a directory causes the file system to read only
* Backport #46523: nautilus: mds: fix hang issue when accessing a file under a lost parent directory
* Backport #46527: nautilus: mgr/volumes: `protect` and `clone` operation in a single transaction
* Backport #46556: nautilus: rgw: rgw-orphan-list -- fix interaction, quoting, and percentage calc
* Backport #46557: nautilus: rgw: orphan list teuthology test & fully-qualified domain issue
* Backport #46598: luminous: Rescue procedure for extremely large bluefs log
* Bug #46607: nautilus: pybind/mgr/volumes: TypeError: bad operand type for unary -: 'str'
* Backport #46635: nautilus: mds: null pointer dereference in MDCache::finish_rollback
* Backport #46641: nautilus: qa: random subvolumegroup collision
* Backport #46673: nautilus: importing rbd diff does not apply zero sequences correctly
* Backport #46706: nautilus: Cancellation of on-going scrubs
* Backport #46741: nautilus: ceph_osd crash in _committed_osd_maps when failed to encode first inc map
* Documentation #46760: The default value of osd_op_queue is wpq since v11.0.0
* Bug #46763: nautilus: "ERROR: test_explicit_fail (tasks.mgr.test_failover.TestFailover)" in rados nautilus
* Bug #46831: nautilus: mds: SIGSEGV in MDCache::finish_uncommitted_slave
* Backport #46856: nautilus: client: static dirent for readdir is not thread-safe
* Backport #46858: nautilus: qa: add debugging for volumes plugin use of libcephfs
* Backport #46860: nautilus: mds: do not raise "client failing to respond to cap release" when client working set is reasonable
* Bug #47167: rgw: lifecycle: Days can not be 0 for Expiration rules
* Bug #47211: nautilus: unrecognised rocksdb_option crashes osd process while starting the osd
* Bug #47380: mon: slow ops due to osd_failure
* Bug #47383: Multipart uploads fail when rgw_obj_stripe_size is configured to be larger than the default 4MiB
* Bug #47392: radosgw not listening after installation
* Bug #47562: RBD image is stuck after iSCSI disconnection
* Bug #47919: list object versions returned multiple 'IsLatest true' entries