# v15.2.14

* Backport #47093: octopus: RBD string-based config options are ignored
* Backport #48424: octopus: Able to circumvent S3 Object Lock using deleteobjects command
* Backport #48659: octopus: notification: radosgw-admin hangs on while closing
* Backport #49513: octopus: client: allow looking up snapped inodes by inode number+snapid tuple
* Backport #49745: octopus: Segmentation fault on GC with big value of rgw_gc_max_objs
* Backport #49836: octopus: teuthology rgw qa/tasks/barbican.py
* Backport #49981: octopus: BlueRocksEnv::GetChildren may pass trailing slashes to BlueFS readdir
* Backport #50152: octopus: Reproduce https://tracker.ceph.com/issues/48417
* Backport #50167: octopus: Module 'diskprediction_local' takes forever to load
* Backport #50283: octopus: MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
* Backport #50288: octopus: MDS stuck at stopping when reducing max_mds
* Backport #50302: octopus: rgw: radoslist incomplete multipart parts marker
* Backport #50357: octopus: npm problem causes "make-dist" to fail when directory contains colon character
* Backport #50380: octopus: ceph-16.2.0 builds have started failing in Fedora 35/rawhide w/ librabbitmq-0.11.0
* Backport #50383: octopus: DecayCounter: Expected: (std::abs(total-expected)/expected) < (0.01), actual: 0.0166296 vs 0.01
* Backport #50423: octopus: [feature] rgw send headers of quota settings
* Backport #50425: octopus: Remove erroneous elements in hosts-overview Grafana dashboard
* Backport #50464: octopus: per bucket notification object is never deleted
* Backport #50598: octopus: Ceph-osd refuses to bind on an IP on the local loopback lo
* Backport #50623: octopus: qa: "ls: cannot access 'lost+found': No such file or directory"
* Backport #50635: octopus: session dump includes completed_requests twice, once as an integer and once as a list
* Backport #50640: octopus: assumed-role: s3api head-object returns 403 Forbidden, even if role has ListBucket, for non-existent object
* Backport #50643: octopus: rgw: allow rgw-orphan-list to process multiple data pools
* Backport #50661: octopus: ceph: BrokenPipeError on ceph -h
* Backport #50663: octopus: mgr/dashboard: remove NFSv3 support from dashboard
* Backport #50677: octopus: Reproducible crash in radosgw (nautilus and later)
* Backport #50705: octopus: _delete_some additional unexpected onode list
* Backport #50709: octopus: rgw: fix bucket object listing when initial marker matches prefix
* Backport #50714: octopus: Global config overrides do not apply to in-use images
* Backport #50727: octopus: multisite: crash in RGWRESTStreamRWRequest::do_send_prepare() with empty url
* Backport #50730: octopus: rgw_file: RGWLibFS::read success executed, but nodata readed
* Backport #50750: octopus: max_misplaced was replaced by target_max_misplaced_ratio
* Backport #50766: octopus: mgr/dashboard: bucket name constraints
* Backport #50768: octopus: mgr/dashboard: RGW buckets async validator slow performance
* Backport #50781: octopus: AvlAllocator.cc: 60: FAILED ceph_assert(size != 0)
* Backport #50790: octopus: osd: write_trunc omitted to clear data digest
* Backport #50796: octopus: mon: spawn loop after mon reinstalled
* Backport #50861: octopus: add ceph-volume lvm [new-db|new-wal|migrate] commands
* Backport #50874: octopus: mds: MDSLog::journaler pointer maybe crash with use-after-free
* Backport #50884: octopus: mgr/dashboard: Physical Device Performance grafana graphs for OSDs do not display
* Backport #50895: octopus: ceph-volume lvm activate will consider /dev/root mounted directories as 'unmounted' and mount tmpfs on top of them
* Backport #50898: octopus: mds: monclient: wait_auth_rotating timed out after 30
* Backport #50916: octopus: mds cpu_profiler asok_command crashes
* Backport #50937: octopus: osd-bluefs-volume-ops.sh fails
* Backport #50940: octopus: OSDs broken after nautilus->octopus upgrade: rocksdb Corruption: unknown WriteBatch tag
* Backport #50941: octopus: rbd-mirror: segfault on snapshot replayer on shutdown
* Backport #50960: octopus: mgr/dashboard: fix API docs link
* Backport #50987: octopus: unaligned access to member variables of crush_work_bucket
* Backport #50990: octopus: mon: slow ops due to osd_failure
* Backport #50996: octopus: "test_notify.py" is timing out in upgrade-clients:client-upgrade-nautilus-pacific-pacific
* Backport #51041: octopus: bluefs _allocate unable to allocate, though enough free
* Backport #51047: octopus: qemu task fails to install packages, workload isn't run
* Backport #51052: octopus: mgr/dashboard: partially deleted RBDs are only listed by CLI
* Backport #51059: octopus: "trash purge" shouldn't stop at the first unremovable image
* Backport #51065: octopus: mgr/dashboard: fix bucket objects and size calculations
* Backport #51079: octopus: Bad error message on bucket chown
* Backport #51093: octopus: mgr crash loop after increase pg_num
* Backport #51128: octopus: In poweroff conditions BlueFS can create corrupted files
* Backport #51139: octopus: radosgw-admin user create error message is confusing if user with supplied email address already exists
* Backport #51142: octopus: directories with names starting with a non-ascii character disappear after reshard
* Backport #51179: octopus: mgr/Dashboard: right Navigation should work on click when page width is less than 768 px
* Backport #51190: octopus: mgr/telemetry: pass leaderboard flag even w/o ident
* Backport #51269: octopus: rados/perf: cosbench workloads hang forever
* Backport #51314: octopus: osd: scrub skip some pg
* Backport #51336: octopus: mds: avoid journaling overhead for setxattr("ceph.dir.subvolume") for no-op case
* Backport #51367: octopus: [test] qa/workunits/rbd: use bionic version of qemu-iotests for focal
* Bug #51443: mgr/dashboard: User database migration has been cut out
* Backport #51448: octopus: mgr/dashboard: bucket name input takes spaces even though it's not allowed
* Bug #51450: mgr/dashboard: restore user database migration from v1 to v2
* Backport #51452: octopus: Add simultaneous scrubs to rados/thrash
* Backport #51456: octopus: progress module: TypeError: '<' not supported between instances of 'str' and 'int'
* Backport #51474: octopus: mgr/dashboard: User database migration has been cut out
* Backport #51477: octopus: Incorrect OSD out count on landing page
* Backport #51488: octopus: mgr/dashboard: run cephadm orch backend E2E tests
* Backport #51494: octopus: pacific: pybind/ceph_volume_client: stat on empty string
* Backport #51496: octopus: mgr spamming with repeated set pgp_num_actual while merging
* Backport #51582: octopus: osd does not proactively remove leftover PGs
* Backport #51650: octopus: Bluestore repair might erroneously remove SharedBlob entries.
* Backport #51662: octopus: rados/test_envlibrados_for_rocksdb.sh: cmake: symbol lookup error: cmake: undefined symbol: archive_write_add_filter_zstd in centos
* Backport #51678: octopus: Potential race condition in robust notify
* Backport #51698: octopus: rgw: beast: lack of TLS settings
* Backport #51711: octopus: compact db after bulk omap naming upgrade
* Backport #51730: octopus: mgr/dashboard: Add configurable MOTD or wall notification
* Backport #51769: octopus: ceph.spec: drop use of DISABLE_RESTART_ON_UPDATE (SUSE specific)
* Backport #51837: octopus: ceph.spec: after being eliminated, FIRST_ARG crept back in
* Backport #51841: octopus: osd: snaptrim logs to derr at every tick
* Backport #51850: octopus: ci/tests: update ansible configuration
* Backport #51939: octopus: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4
* Backport #51995: octopus: mgr/dashboard: cephadm-e2e script: improvements