# v14.2.22

* Backport #43921: nautilus: radosgw abort caused by beast frontend coroutine stack overflow
* Backport #45275: nautilus: [rbd-mirror] image replayer stop might race with remove and instance replayer shut down
* Backport #45764: nautilus: [rbd-mirror] image replayer stop might race with remove and instance replayer shut down
* Bug #45997: nautilus: ceph_volume_client.py: UnicodeEncodeError exception while removing volume with UTF-8 directory
* Backport #46149: nautilus: [object-map] possible race condition when disabling object map with active IO
* Backport #46480: nautilus: mds: send scrub status to ceph-mgr only when scrub is running (or paused, etc.)
* Backport #46589: nautilus: mgr: rgw doesn't show in service map
* Backport #47020: nautilus: client: shutdown race fails with status 141
* Backport #48423: nautilus: Able to circumvent S3 Object Lock using deleteobjects command
* Backport #48565: nautilus: "TestMigration.StressLive" fails
* Backport #48650: nautilus: blkid holds old entries in cache
* Backport #48713: nautilus: Ceph-mgr Hangup and _check_auth_rotating possible clock skew, rotating keys expired way too early Errors
* Backport #48861: nautilus: mgr/dashboard: alert badge includes suppressed alerts
* Backport #49084: nautilus: mgr/dashboard: missing root path of each session in Cephfs dashboard
* Backport #49092: nautilus: http client - fix "Expect: 100-continue" issue
* Backport #49187: nautilus: rgw: tooling to locate rgw objects with missing rados components
* Backport #49195: nautilus: rgw: allow rgw-orphan-list to handle intermediate files w/ binary data
* Backport #49299: nautilus: link to tcmalloc for ppc64le and s390x
* Backport #49376: nautilus: building libcrc32
* Backport #49385: nautilus: BlueFS reads might improperly rebuild internal buffer under a shared lock
* Backport #49471: nautilus: qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
* Backport #49473: nautilus: qa: "Assertion `cb_done' failed."
* Backport #49514: nautilus: client: allow looking up snapped inodes by inode number+snapid tuple
* Backport #49516: nautilus: pybind/cephfs: DT_REG and DT_LNK values are wrong
* Backport #49519: nautilus: client: wake up the front pos waiter
* Backport #49529: nautilus: "ceph osd crush set|reweight-subtree" commands do not set weight on device class subtree
* Backport #49531: nautilus: osd ok-to-stop too conservative
* Backport #49537: nautilus: Should rgw::auth::Strategy::apply be noexcept?
* Backport #49562: nautilus: qa: file system deletion not complete because starter fs already destroyed
* Backport #49567: nautilus: api_watch_notify: LibRadosWatchNotify.AioWatchDelete2 fails
* Backport #49600: nautilus: mgr/dashboard: report fsid in cluster configuration
* Backport #49613: nautilus: qa: racy session evicted check
* Backport #49637: nautilus: mgr/telemetry: check if 'ident' channel is active before compiling reports
* Backport #49640: nautilus: Disable and re-enable clog_to_monitors could trigger assertion
* Backport #49656: nautilus: mgr/dashboard: test prometheus/alertmanager rules through promtool
* Backport #49664: nautilus: 15.2.9 breaks alpine compilation with https://github.com/ceph/ceph/pull/38951
* Backport #49682: nautilus: OSD: shutdown of an OSD Host causes slow requests
* Backport #49704: nautilus: mgr/dashboard: Documented dashboard instance ssl certificate functionality not implemented
* Backport #49705: nautilus: mgr now includes mon metadata as part of osd metadata
* Backport #49729: nautilus: debian ceph-common package post-inst clobbers ownership of cephadm log dirs
* Backport #49731: nautilus: mgr: fix dump duplicate info for ceph osd df
* Backport #49744: nautilus: Segmentation fault on GC with big value of rgw_gc_max_objs
* Backport #49759: nautilus: mgr/balancer: KeyError messages in balancer module
* Backport #49768: nautilus: [rbd] the "trash mv" operation should support an optional "--image-id"
* Backport #49853: nautilus: mds: race of fetching large dirfrag
* Backport #49875: nautilus: update krbd_blkroset.t for separate hw and user read-only flags
* Backport #49903: nautilus: mgr/volumes: setuid and setgid file bits are not retained after a subvolume snapshot restore
* Backport #49915: nautilus: Memory leak ceph-mon in ConfigMonitor::load_config
* Backport #49919: nautilus: mon: slow ops due to osd_failure
* Backport #49966: nautilus: BlueStore::_collection_list causes huge latency growth pg deletion
* Backport #49977: nautilus: "make check" jenkins job fails
* Backport #49991: nautilus: unittest_mempool.check_shard_select failed
* Backport #50003: nautilus: ceph-libboost version conflicts
* Bug #50006: nautilus: ERROR: test_osd_came_back (tasks.mgr.test_progress.TestProgress)
* Backport #50026: nautilus: client: items pinned in cache preventing unmount
* Backport #50050: nautilus: mgr/dashboard: Remove username, password fields from Cluster/Manager Modules/dashboard, influx
* Backport #50069: nautilus: mgr/dashboard: alert notification shows 'undefined' instead of alert message
* Backport #50073: nautilus: make check / API tests fail to find Boost
* Backport #50095: nautilus: When copying an encrypted object, the result object is empty.
* Backport #50122: nautilus: CephContext: ceph-conf crash when CrushLocation is constructed
* Backport #50125: nautilus: mon: Modify Paxos trim logic to be more efficient
* Backport #50128: nautilus: pybind/mgr/volumes: deadlock on async job hangs finisher thread
* Backport #50130: nautilus: monmaptool --create --add nodeA --clobber monmap aborts in entity_addr_t::set_port()
* Backport #50144: nautilus: qa/tasks/vstart_runner.py: not starting max_required_mgrs
* Backport #50153: nautilus: Reproduce https://tracker.ceph.com/issues/48417
* Bug #50155: nautilus: mgr/dashboard: python 2: error when setting user's non-ASCII password
* Backport #50158: nautilus: test_datalog_autotrim fail in teuthology
* Backport #50164: nautilus: master FTBFS on openSUSE Tumbleweed - no valid RPATH for $EXECUTABLE
* Backport #50172: nautilus: mgr/dashboard: nodeenv can hang
* Backport #50179: nautilus: client: only check pool permissions for regular files
* Backport #50202: nautilus: mgr/dashboard: Read-only user can see registry password
* Backport #50211: nautilus: BlueFS _flush_range coredump
* Backport #50233: nautilus: rgw: success returned for put bucket versioning on a non-existent bucket
* Backport #50255: nautilus: mds: standby-replay only trims cache when it reaches the end of the replay log
* Backport #50290: nautilus: MDS stuck at stopping when reducing max_mds
* Backport #50300: nautilus: rgw: radoslist incomplete multipart parts marker
* Support #50309: bluestore_min_alloc_size_hdd = 4096
* Backport #50356: nautilus: npm problem causes "make-dist" to fail when directory contains colon character
* Backport #50366: nautilus: rgw: during reshard lock contention, adjust logging
* Backport #50403: nautilus: Increase default value of bluestore_cache_trim_max_skip_pinned
* Backport #50417: nautilus: mgr/dashboard: filesystem pool size should use stored stat instead of bytes_used
* Backport #50426: nautilus: Remove erroneous elements in hosts-overview Grafana dashboard
* Backport #50430: nautilus: Added caching for S3 credentials retrieved from keystone
* Backport #50459: nautilus: ERROR: test_version (tasks.mgr.dashboard.test_api.VersionReqTest) mgr/dashboard: short_description
* Backport #50481: nautilus: filestore: ENODATA error after directory split confuses transaction
* Backport #50506: nautilus: mon/MonClient: reset authenticate_err in _reopen_session()
* Bug #50533: osd: check_full_status: check doesn't care about RocksDB size
* Bug #50549: nautilus: os/bluestore: be more verbose in _open_super_meta by default
* Backport #50600: nautilus: Ceph-osd refuses to bind on an IP on the local loopback lo
* Backport #50603: nautilus: osd: check_full_status: check doesn't care about RocksDB size
* Backport #50625: nautilus: qa: "ls: cannot access 'lost+found': No such file or directory"
* Backport #50628: nautilus: client: access(path, X_OK) on non-executable file as root always succeeds
* Backport #50634: nautilus: mds: failure replaying journal (EMetaBlob)
* Backport #50654: nautilus: mgr/dashboard: fix diverging behaviour between Dashboard and Ceph iSCSI API
* Bug #50692: nautilus: ERROR: test_rados.TestIoctx.test_service_daemon
* Backport #50700: nautilus: fix 32-bit/64-bit server/client interoperability under msgr2
* Backport #50701: nautilus: Data loss propagation after backfill
* Backport #50704: nautilus: _delete_some additional unexpected onode list
* Backport #50777: nautilus: mgr/progress: progress can be negative
* Backport #50780: nautilus: AvlAllocator.cc: 60: FAILED ceph_assert(size != 0)
* Backport #50795: nautilus: mon: spawn loop after mon reinstalled
* Backport #50836: nautilus: Failure to download a large object after putting it with the Swift API
* Backport #50841: nautilus: mgr/dashboard: add grafana dashboards for rgw multisite sync info
* Backport #50860: nautilus: add ceph-volume lvm [new-db|new-wal|migrate] commands
* Backport #50885: nautilus: mgr/dashboard: Physical Device Performance grafana graphs for OSDs do not display
* Backport #50897: nautilus: mds: monclient: wait_auth_rotating timed out after 30
* Bug #50933: nautilus: qa: vstart_runner: TypeError: lstat: path should be string, bytes or os.PathLike, not NoneType
* Backport #50936: nautilus: osd-bluefs-volume-ops.sh fails
* Backport #50939: nautilus: OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: unknown WriteBatch tag
* Backport #50961: nautilus: mgr/dashboard: fix API docs link
* Backport #50988: nautilus: mon: slow ops due to osd_failure
* Backport #50995: nautilus: "test_notify.py" is timing out in upgrade-clients:client-upgrade-nautilus-pacific-pacific
* Backport #51042: nautilus: bluefs _allocate unable to allocate, though enough free
* Backport #51048: nautilus: qemu task fails to install packages, workload isn't run
* Backport #51054: nautilus: mgr/dashboard: partially deleted RBDs are only listed by CLI
* Backport #51057: nautilus: "trash purge" shouldn't stop at the first unremovable image
* Backport #51064: nautilus: mgr/dashboard: fix bucket objects and size calculations
* Backport #51104: nautilus: batch ignores bluestore_block_db_size in ceph.conf
* Backport #51107: nautilus: batch --report shows incorrect % of device when using --block-db-size
* Backport #51144: nautilus: directories with names starting with a non-ascii character disappear after reshard
* Backport #51189: nautilus: mgr/telemetry: pass leaderboard flag even w/o ident
* Backport #51237: nautilus: rebuild-mondb hangs