# v15.2.9

* Backport #46820: octopus: pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes independent of the source subvolume
* Backport #46963: octopus: mgr/dashboard: Create Ceph services via Orchestrator by using ServiceSpec
* Backport #47059: octopus: mgr/volumes: Clone operation uses source subvolume root directory mode and uid/gid values for the clone, instead of sourcing them from the snapshot
* Backport #47085: octopus: common: validating type CephBool causes 'invalid command json'
* Backport #47095: octopus: mds: provide alternatives to increase the total CephFS subvolume snapshot count beyond the current 400 across a CephFS volume
* Backport #47158: octopus: mgr/volumes: Mark subvolumes with the ceph.dir.subvolume vxattr to improve snapshot scalability of subvolumes
* Backport #47659: octopus: qa: error: error renaming temp state file /var/lib/logrotate/logrotate.status.tmp
* Backport #47671: octopus: Hybrid allocator might cause duplicate admin socket command registration
* Backport #47708: octopus: Potential race condition regression around new OSD flock()s
* Backport #47749: octopus: should use the new resolver when calling pip
* Backport #47821: octopus: qa: AttributeError: 'list' object has no attribute 'stderr'
* Backport #47892: octopus: Compressed blobs lack checksums
* Backport #47963: octopus: cephadm no longer requires apparmor-abstractions on SUSE
* Backport #47975: octopus: mgr/dashboard: EC profile, clay plugin is missing
* Backport #47996: octopus: monitoring: Use null yaxes min for OSD read latency
* Backport #48079: octopus: mgr/dashboard: table items get selected when expanding details table
* Backport #48084: octopus: rbd-nbd: the asok path and log file use the parent pid, which has exited
* Backport #48085: octopus: [test] "rbd mirror pool peer remove" not using admin caps
* Backport #48086: octopus: rbd mirror snapshot and trash purge schedulers store the global schedule in localized config
* Backport #48094: octopus: Hybrid allocator might segfault when a fallback allocator is present
* Backport #48096: octopus: mds: fix file recovery crash after replaying delayed requests
* Backport #48098: octopus: osdc: FAILED ceph_assert(bh->waitfor_read.empty())
* Backport #48101: octopus: Admin API returns 200 instead of 404 for Get Bucket Info
* Backport #48109: octopus: client: ::_read fails to advance pos when checking EOF
* Backport #48111: octopus: doc: document MDS recall configurations
* Backport #48127: octopus: Unnecessary bilogs are left in sync-disabled buckets
* Backport #48129: octopus: some clients may return failure when multiple clients create directories at the same time
* Backport #48132: octopus: mgr/dashboard: user can change the cluster of an NFS-Ganesha export
* Backport #48191: octopus: mds: throttle workloads which acquire caps faster than the client can release
* Backport #48194: octopus: bufferlist c_str() sometimes clears assignment to mempool
* Backport #48225: octopus: [librbd] removing pool config overrides does not cause a config refresh
* Backport #48239: octopus: list object versions returned multiple 'IsLatest true' entries
* Backport #48243: octopus: collection_list_legacy: pg inconsistent
* Backport #48266: octopus: mgr becomes unresponsive when the progress bar is shown
* Backport #48279: octopus: mgr_test_case: skipTest is an instance method, not a class method
* Backport #48281: octopus: osd: fix bluestore bitmap allocator
* Backport #48283: octopus: /etc/sudoers.d/ceph-osd-smartctl file permissions don't conform to standards
* Backport #48285: octopus: rados/upgrade/nautilus-x-singleton fails due to cluster [WRN] evicting unresponsive client
* Backport #48341: octopus: [rbd_support] Attempting to remove an in-use image in the background results in apparently stuck progress
* Backport #48345: octopus: rgw: unnecessary payload is added at the end of the message
* Backport #48370: octopus: mds: dir->mark_new should be called together with dir->mark_dirty
* Backport #48372: octopus: client: dump which fs is used by the client for multiple-fs
* Backport #48375: octopus: libcephfs allows calling ftruncate on a file opened read-only
* Backport #48378: octopus: invalid values of crush-failure-domain should not be allowed while creating an erasure-coded profile
* Backport #48398: octopus: mgr/dashboard: display placement column in service table
* Backport #48399: octopus: a few scrubs or remapped PGs block the upmap balancer
* Backport #48401: octopus: mgr: don't update osd stat which is already out
* Backport #48414: octopus: lvm/create.py: typo in the help message
* Backport #48427: octopus: Put policy should return 204 instead of 200
* Backport #48429: octopus: rgw: expiration is triggered in advance because of an overflow problem
* Backport #48456: octopus: ceph: re-expand the config meta just after the fork() is done
* Backport #48458: octopus: client: fix crash when doing remount in the non-fuse case
* Backport #48460: octopus: mgr/dashboard: make daemon selection easier in NFS export form
* Backport #48470: octopus: rbd du performance regression
* Backport #48474: octopus: mgr/dashboard: Allow modifying single OSD settings for noout/noscrub/nodeepscrub
* Backport #48478: octopus: bluefs _allocate failed to allocate bdev 1 and 2, causing ceph_assert(r == 0)
* Backport #48480: octopus: PG::_delete_some isn't optimal when iterating objects
* Backport #48494: octopus: mgr/dashboard: "Orchestrator is not available" while toggling between options available in the UI
* Backport #48496: octopus: Paxos::restart() and Paxos::shutdown() can race, leading to a use-after-free on the 'logger' object. Seen in Nautilus.
* Backport #48511: octopus: AttributeError: module 'lib' has no attribute 'Cryptography_HAS_TLSEXT_HOSTNAME'
* Backport #48515: octopus: mgr/dashboard: SSL Handshake: Update the inbuilt ssl providers error
* Backport #48519: octopus: pybind: test_readlink() fails due to missing terminating NULL char
* Backport #48521: octopus: client: add ceph.cluster_fsid/ceph.client_id vxattr support in libcephfs
* Backport #48528: octopus: install-deps.sh fails with 'Error: No matching repo to modify: PowerTools' on CentOS Stream 8
* Backport #48538: octopus: mgr/dashboard: test_standby* (tasks.mgr.test_dashboard.TestDashboard) failed locally
* Backport #48539: octopus: mgr/dashboard: Service and Daemon's refresh interval is too long
* Backport #48544: octopus: rgw_file: common_prefixes returned out of lexical order
* Backport #48546: octopus: rgwlc: shard-index vector short by 1?
* Backport #48551: octopus: Dashboard fails to load, internal server error in API
* Backport #48557: octopus: mgr/restful: _gather_osds() mistakenly treats a `str` as a `dict`
* Backport #48568: octopus: tasks.cephfs.test_nfs.TestNFS.test_cluster_info: IP mismatch
* Backport #48574: octopus: Module 'crash' has failed: dictionary changed size during iteration
* Backport #48578: octopus: Logic error in default prom alert 'pool filling up'
* Backport #48580: octopus: FAIL: test_osd_came_back (tasks.mgr.test_progress.TestProgress)
* Backport #48587: octopus: mgr/dashboard: RGW User Form is validating disabled fields
* Backport #48592: octopus: mgr/dashboard: Drop invalid RGW client instances, improve logging
* Backport #48605: octopus: mgr/dashboard: CRUSH map viewer inconsistent with output of "ceph osd tree"
* Backport #48607: octopus: mgr/dashboard: enable a different Grafana URL for browser users
* Backport #48610: octopus: [rbd-mirror] assertion failure when attempting to sync a non-existent snapshot
* Backport #48615: octopus: Audit log: mgr module passwords set on the CLI are written as plaintext in log files
* Backport #48626: octopus: mgr/dashboard: Dashboard logs e2e tests are failing
* Backport #48629: octopus: mgr/dashboard: The /rgw/status endpoint does not check for a running service
* Backport #48635: octopus: qa: tox failures
* Backport #48642: octopus: Client: the directory's capacity will not be updated after writing data into the directory
* Backport #48644: octopus: client: ceph.dir.entries does not acquire necessary caps
* Backport #48652: octopus: mgr/dashboard: Display a warning message in Dashboard when debug mode is enabled
* Backport #48676: octopus: update krbd_stable_pages_required.sh to use the stable_writes queue attribute
* Bug #48681: Textrel in aarch64 libec_isa.so
* Backport #48692: octopus: librbd::image::CreateRequest: validate_features: cannot use internally controlled features
* Backport #48693: octopus: add STS token claims to the ops log to be used for auditing
* Backport #48714: octopus: ceph-mgr hangs with "_check_auth_rotating possible clock skew, rotating keys expired way too early" errors
* Backport #48725: octopus: radosgw-admin bucket limit check percentage warnings don't work
* Backport #48737: octopus: orchestrator: query-daemon-health-metrics fails, no smartctl output
* Backport #48739: octopus: CVE-2020-27839: mgr/dashboard: The Ceph Dashboard is vulnerable to XSS attacks
* Backport #48743: octopus: S3 error: 404 (NoSuchBucket) because the distributed cache is not being invoked
* Backport #48794: octopus: mgr/dashboard: REST API: security
* Backport #48804: octopus: Infinite loop in old reset-stats
* Backport #48809: octopus: mgr/dashboard: Client Read/Write donut chart is not correct
* Backport #48828: octopus: No valid ELF RPATH or RUNPATH entry exists in the file (ceph-diff-sorted)
* Backport #48864: octopus: RGW: Multisite: Verify that the synced object is identical to the source
* Backport #48888: octopus: master FTBFS with glibc 2.32
* Backport #48889: octopus: do_cmake: build fails on fedora-33 due to python version
* Backport #48928: octopus: mgr/dashboard: can't log in when using the development server
* Backport #48968: octopus: ocf:ceph:rbd resource agent does not support namespaces
* Bug #48996: build failure on fedora-34/rawhide with boost 1.75
* Backport #49003: octopus: in-tree cram tarball broke down after 10 years of distinguished service
* Backport #49010: octopus: krbd: add support for msgr2 (kernel 5.11)
* Backport #49046: octopus: tasks.rgw_multi.tests.test_multipart_object_sync fails
* Backport #49098: octopus: FAILED ceph_assert(o->pinned) in BlueStore::Collection::split_cache(BlueStore::Collection*)
* Backport #49099: octopus: crash in BlueStore::Onode::put()
* Bug #49433: mgr/dashboard: Grafana Error: matching labels must be unique on one side
* Bug #49494: 15.2.9 breaks Alpine compilation with https://github.com/ceph/ceph/pull/38951