# v14.2.3

* Backport #39499: nautilus: snapshot object maps can go inconsistent during copyup
* Backport #39516: nautilus: osd-backfill-space.sh test failed in TEST_backfill_multi_partial()
* Backport #39693: nautilus: _txc_add_transaction error (39) Directory not empty not handled on operation 21 (op 1, counting from 0)
* Backport #39730: nautilus: rgw: allow radosgw-admin bucket list to use the --allow-unordered flag
* Backport #39743: nautilus: mon: "FAILED assert(pending_finishers.empty())" when paxos restarts
* Backport #39749: nautilus: Add support for --bypass-gc flag of radosgw-admin bucket rm command in RGW Multi-site
* Backport #40007: nautilus: rgw: fix prefix handling in LCFilter
* Backport #40058: nautilus: mgr/dashboard: Only delete removed gateways
* Backport #40107: nautilus: Librgw doesn't GC deleted objects correctly
* Backport #40125: nautilus: rgw: hadoop-s3a suite failing with more ansible errors
* Backport #40129: nautilus: rgw: Swift interface: server side copy fails if object name contains `?`
* Backport #40134: nautilus: rgw: putting X-Object-Manifest via TempURL should be prohibited
* Backport #40137: nautilus: rgw: the Multi-Object Delete operation of S3 API wrongly handles the "Code" response element
* Backport #40140: nautilus: document steps to disable metadata_heap on existing zones
* Backport #40142: nautilus: multisite: 'radosgw-admin bucket sync status' should call syncs_from(source.name) instead of id
* Backport #40150: nautilus: ceph-rgw: retrieve list of existing realms in local cluster from REST API
* Backport #40180: nautilus: qa/standalone/scrub/osd-scrub-snaps.sh sometimes fails
* Backport #40216: nautilus: rgw_file: fix invalidation of top-level directories
* Backport #40226: nautilus: rgw_file: include tenant when hashing bucket names
* Backport #40231: nautilus: maybe_remove_pg_upmap can be super inefficient for large clusters
* Backport #40235: nautilus: [CLI] rbd: get positional argument error when using --image
* Backport #40237: nautilus: "profile rbd" OSD cap should add "class rbd metadata_list" cap by default
* Backport #40263: nautilus: rgw_file: all directories are virtual with respect to contents, and are always invalid
* Backport #40265: nautilus: Setting noscrub causing extraneous deep scrubs
* Backport #40267: nautilus: data race in OutputDataSocket
* Backport #40272: nautilus: [rbd-mirror] ensure tcmalloc is used if available
* Backport #40273: nautilus: mgr prometheus start failed
* Backport #40274: nautilus: librados 'buffer::create' and related functions are not exported in C++ API
* Backport #40276: nautilus: [object-map] resizing an image might result in an assert in 'ObjectMap::operator[]'
* Backport #40279: nautilus: mgr/dashboard: Optimize the calculation of portal IPs
* Backport #40281: nautilus: 50-100% iops lost due to bluefs_preextend_wal_files = false
* Backport #40293: nautilus: rbd-nbd: return correct error message when there is no matching device
* Backport #40319: nautilus: "make: *** [hello_world_cpp] Error 127" in rados
* Backport #40322: nautilus: nautilus with require_osd_release < nautilus cannot increase pg_num
* Backport #40324: nautilus: ceph_volume_client: d_name needs to be converted to string before using
* Backport #40326: nautilus: mds: evict stale client when one of its write caps are stolen
* Backport #40349: nautilus: rgw/OutputDataSocket: append_output(buffer::list&) says it will (but does not) discard output at data_max_backlog
* Backport #40352: nautilus: multisite: RGWListBucketIndexesCR for data full sync needs pagination
* Backport #40355: nautilus: The expected output of the "radosgw-admin reshard status" command is not documented
* Backport #40358: nautilus: rgw: set null version object issues
* Backport #40381: nautilus: [tests] "rbd" teuthology suite has no coverage of "rbd diff"
* Backport #40382: nautilus: RuntimeError: expected MON_CLOCK_SKEW but got none
* Backport #40438: nautilus: getattr on snap inode stuck
* Backport #40440: nautilus: mds: cannot switch mds state from standby-replay to active
* Backport #40441: nautilus: handle_read_frame_preamble_main crc mismatch for main preamble rx_crc=2646747243 tx_crc=2809446983
* Backport #40443: nautilus: libcephfs: returns ESTALE to nfs-ganesha's FSAL_CEPH when operating on .snap directory
* Backport #40445: nautilus: mds: MDCache::cow_inode does not clean up unneeded client_snap_caps
* Backport #40446: nautilus: mgr/dashboard: Update translation
* Backport #40450: nautilus: s3tests-test-readwrite failed in rados run (Connection refused)
* Backport #40462: nautilus: possible crash when replaying journal with invalid/corrupted ranges
* Backport #40465: nautilus: osd beacon sometimes has empty pg list
* Backport #40498: nautilus: Object Gateway multisite document read-only argument error
* Backport #40501: nautilus: [cli] 'export' should handle concurrent IO completions
* Backport #40505: nautilus: rgw: fix miss get ret in STSService::storeARN
* Backport #40508: nautilus: rgw: conditionally allow builtin users with non-unique email addresses
* Backport #40511: nautilus: [journal] tweak config defaults to improve small-IO performance
* Backport #40512: nautilus: rgw: Put LC doesn't clear existing lifecycle
* Backport #40515: nautilus: multisite: DELETE Bucket CORS is not forwarded to master zone
* Backport #40518: nautilus: rgw: RGWGC add perfcounter retire counter
* Backport #40536: nautilus: pool compression options not consistently applied
* Backport #40537: nautilus: osd/PG.cc: 2410: FAILED ceph_assert(scrub_queued)
* Backport #40540: nautilus: multisite: 'radosgw-admin bilog trim' stops after 1000 entries
* Backport #40542: nautilus: ceph daemon mon.a config set mon_health_to_clog false causes leader mon assert
* Backport #40543: nautilus: rgw: Policy should be url_decoded when assume_role
* Backport #40545: nautilus: rgw: fix rgw crash and set correct error code to nautilus
* Backport #40546: nautilus: Keyrings created by ceph auth get are not suitable for ceph auth import
* Backport #40572: nautilus: Disabling journal might result in assertion failure
* Backport #40591: nautilus: rgw: deleting bucket can fail when it contains unfinished multipart uploads
* Backport #40594: nautilus: rbd_mirror/ImageSyncThrottler.cc: 61: FAILED ceph_assert(m_queue.empty())
* Backport #40600: nautilus: rgw_file: rgw_readdir eof condition must take callback early termination into account
* Backport #40616: nautilus: mgr/dashboard: notify the user about unset 'mon_allow_pool_delete' flag beforehand
* Backport #40625: nautilus: OSDs get killed by OOM due to a broken switch
* Backport #40627: nautilus: rgw_file: directory expiration should respect nfs_rgw_namespace_expire_secs
* Backport #40632: nautilus: High amount of read I/O on BlueFS/DB when listing omap keys
* Backport #40652: nautilus: os/bluestore: fix >2GB writes
* Backport #40655: nautilus: Lower the default value of osd_deep_scrub_large_omap_object_key_threshold
* Backport #40656: nautilus: mgr/dashboard: Changing rgw-api-host does not take effect without disabling/enabling the dashboard mgr module
* Backport #40657: nautilus: mgr/dashboard: Display "logged in" information for each iSCSI client
* Backport #40658: nautilus: mgr/dashboard: Pool graph/sparkline points do not display the correct values
* Backport #40659: nautilus: mgr/dashboard: Interlock `fast-diff` and `object-map`
* Backport #40661: nautilus: mgr/dashboard: cephfs multimds graphs stack together
* Backport #40667: nautilus: PG scrub stamps reset to 0.000000
* Backport #40672: nautilus: USERNAME ldap token not replaced in rgw client
* Backport #40675: nautilus: massive allocator dumps when unable to allocate space for bluefs
* Backport #40685: nautilus: mgr/dashboard: Dentries value of MDS daemon in Filesystems page is inconsistent with "ceph fs status" output
* Backport #40691: nautilus: Module 'dashboard' has failed: No module named routes
* Backport #40699: nautilus: mgr/dashboard: Silence Alertmanager alerts
* Backport #40710: nautilus: Document forward, readonly, and readforward cache modes
* Backport #40723: nautilus: mgr/dashboard: Upgrade to ceph-iscsi config v10
* Backport #40730: nautilus: mon: auth mon isn't loading full KeyServerData after restart
* Backport #40733: nautilus: mgr/dashboard: Fix npm vulnerabilities
* Backport #40734: nautilus: mgr/diskprediction_cloud: Service unavailable
* Backport #40737: nautilus: multisite: failover docs should use 'realm pull' instead of 'period pull'
* Backport #40744: nautilus: core: lazy omap stat collection
* Backport #40750: nautilus: build/ops: rpm: drop SuSEfirewall2
* Backport #40757: nautilus: stupid allocator might return extents with length = 0
* Backport #40760: nautilus: Save an unnecessary copy of RGWEnv
* Backport #40768: nautilus: mgr/dashboard: change GitHub dep ng2-toastr to NPMJS
* Backport #40786: nautilus: mgr/dashboard: SSL certificate upload command throws deprecation warning
* Backport #40796: nautilus: mgr/volumes: support asynchronous subvolume deletes
* Backport #40837: nautilus: Set concurrent max_background_compactions in rocksdb to 2
* Backport #40839: nautilus: cephfs-shell: TypeError in poutput
* Backport #40842: nautilus: ceph-fuse: mount does not support fallocate()
* Backport #40843: nautilus: cephfs-shell: name 'files' is not defined error in do_rm()
* Backport #40845: nautilus: MDSMonitor: use stringstream instead of dout for mds repaired
* Backport #40846: nautilus: mgr/dashboard: controllers/grafana is not Python 3 compatible
* Backport #40848: nautilus: segfault in RGWCopyObj::verify_permission()
* Backport #40851: nautilus: multisite: radosgw-admin commands should not modify metadata on a non-master zone
* Backport #40874: nautilus: /src/include/xlist.h: 77: FAILED assert(_size == 0)
* Backport #40882: nautilus: Reduce log level for cls/journal and cls/rbd expected errors
* Backport #40885: nautilus: ceph mgr module ls -f plain crashes mon
* Backport #40888: nautilus: The 'rbd migration' command workflow needs documentation
* Backport #40901: nautilus: mgr metadata required to be added to prometheus exporter module
* Backport #40904: nautilus: Influx module fails due to missing close() method
* Backport #40921: nautilus: missing string substitution when reporting mounts
* Bug #40931: Can't connect to my kubernetes pod
* Support #40934: can't get connection to external cephfs from kubernetes pod
* Backport #40940: nautilus: Update rocksdb to v6.1.2
* Backport #40942: nautilus: mon/OSDMonitor.cc: better error message about min_size
* Backport #40945: nautilus: mgr/dashboard: RGW User quota validation is not working correctly
* Backport #40948: nautilus: Better default value for osd_snap_trim_sleep
* Backport #40982: nautilus: mgr/dashboard: Fix the table mouseenter event handling test
* Backport #41002: nautilus: client: failed to drop dn and release caps causing mds stray stacking
* Bug #41007: bad debug/error message when monitor data fs has insufficient space
* Backport #41021: nautilus: simple: when 'type' file is not present activate fails
* Bug #41037: Containerized cluster failure due to osd_memory_target not being set to ratio of cgroup_limit per osd_memory_target_cgroup_limit_ratio
* Backport #41038: cmake: update FindBoost.cmake
* Bug #41052: nautilus: cbt cosbench workloads failing in rados/perf suite
* Backport #41058: nautilus: ceph-volume does not recognize wal/db partitions created by ceph-disk
* Backport #41070: nautilus: mgr/volumes: Add `ceph fs subvolumegroup getpath` command
* Backport #41071: nautilus: mgr/volumes: unable to create subvolumegroups/subvolumes when ceph-mgr is run as non-root user
* Bug #41075: nautilus: mgr/dashboard: ceph dashboard Jenkins job fails due to webdriver error: "session not created: Chrome version must be between 71 and 75"
* Backport #41078: nautilus: MGR module for scheduling long-running background operations
* Backport #41082: nautilus: batch gets confused when the same device is passed in two device lists
* Backport #41084: nautilus: Change default for bluestore_fsck_on_mount_deep to false
* Backport #41092: nautilus: rocksdb: enable rocksdb_rmrange=true by default and make delete range optional on number of keys
* Backport #41137: nautilus: ceph-volume prints errors to stdout with --format json
* Bug #41195: [msg/simple] in_seq_ack is not reset to zero when pipe session is reset; as a result, messages may not be released in sent queue
* Bug #41198: Resolved a problem where too many requests on an object caused OSD processing suicide over time
* Backport #41203: nautilus: ceph-volume prints log messages to stdout
* Backport #41248: nautilus: simple functional tests test for lvm zap
* Backport #41263: nautilus: rgw_file: advance_mtime() takes RGWFileHandle::mutex unconditionally
* Backport #41273: nautilus: Containerized cluster failure due to osd_memory_target not being set to ratio of cgroup_limit per osd_memory_target_cgroup_limit_ratio
* Backport #41275: nautilus: [rbd_support] re-used image names might result in tasks not being scheduled
* Backport #41299: nautilus: batch functional idempotency test fails since message is now on stderr
* Backport #41308: nautilus: regression: [filestore,bluestore] single type strategies fail after tracking devices as sets
* Bug #41435: Add mgr module for kubernetes event integration
* Backport #41455: nautilus: osd: fix ceph_assert(mem_avail >= 0) caused by the unset cgroup memory limit
* Backport #41475: nautilus: [upgrade] mimic -> latest can result in 'rbd_support' failing to load
* Backport #41521: nautilus: INFO:teuthology.orchestra.run.ovh044.stdout:you must add a [grafana-server] group and add at least one node
* Bug #41562: nautilus: mgr/dashboard: landing page Refresh component adds a whitespace area at right side
* Backport #41569: nautilus: crash in io_context thread when lots of connections abort
* Backport #41614: nautilus: ceph-volume lvm list is O(n^2)
* Bug #41832: Different pools count in ceph -s and ceph osd pool ls
* Bug #41867: mgr: replace nsec counters with seconds with floating precision
* Support #42584: MGR error: auth: could not find secret_id=
* Bug #42661: kernel panic not syncing Fatal exception
* Bug #42669: multi-part upload will lose data