# v15.2.8

* Backport #46091: octopus: mgr/dashboard: Table column dropdown does not close
* Backport #46342: octopus: mgr/dashboard: Remove useless tab in monitoring/alerts datatable details
* Backport #46345: octopus: mgr/dashboard: Host delete action should be disabled if not managed by Orchestrator
* Backport #46350: octopus: ObjectStore/StoreTestSpecificAUSize.SyntheticMatrixCompression/2 failed
* Backport #46407: octopus: mgr/dashboard: Fix bugs in a unit test and i18n translation
* Backport #46463: octopus: mgr/volumes: fs subvolume clones stuck in progress when libcephfs hits certain errors
* Backport #46473: octopus: mds: make threshold for MDS_TRIM warning configurable
* Backport #46479: octopus: mds: send scrub status to ceph-mgr only when scrub is running (or paused, etc..)
* Backport #46516: octopus: client: directory inode can not call release_callback
* Backport #46520: octopus: mds: deleting a large number of files in a directory causes the file system to become read-only
* Backport #46522: octopus: mds: fix hang issue when accessing a file under a lost parent directory
* Backport #46524: octopus: non-head batch requests may hold authpins and locks
* Backport #46588: octopus: mgr: rgw doesn't show in service map
* Backport #46610: octopus: cephfs.pyx: passing empty string is fine but passing None is not to arg conffile in LibCephFS.__cinit__
* Backport #46634: octopus: mds forwarding request 'no_available_op_found'
* Backport #46636: octopus: mds: null pointer dereference in MDCache::finish_rollback
* Backport #46637: octopus: mds: optimize ephemeral rand pin
* Backport #46783: octopus: mds/CInode: Optimize only pinned by subtrees check
* Backport #46786: octopus: client: in _open() the open ref may be decreased twice but only increased once
* Backport #46791: octopus: parent cache does not properly handle DNE objects
* Backport #46855: octopus: client: static dirent for readdir is not thread-safe
* Backport #46857: octopus: qa: add debugging for volumes plugin use of libcephfs
* Backport #46859: octopus: mds: do not raise "client failing to respond to cap release" when client working set is reasonable
* Backport #46933: octopus: mgr/dashboard: crushmap viewer is vertically compressed
* Backport #46940: octopus: mds: memory leak during cache drop
* Backport #46942: octopus: mds: segv in MDCache::wait_for_uncommitted_fragments
* Backport #46947: octopus: qa: Fs cleanup fails with a traceback
* Backport #46959: octopus: cephfs-journal-tool: incorrect read_offset after finding missing objects
* Backport #46961: octopus: mgr/dashboard: add hint to notification badge when there are pending/unread notifications
* Backport #46970: octopus: mgr/dashboard: Properly format iSCSI target portals
* Backport #46972: octopus: mgr/dashboard: Hide table action input field if limit=0
* Backport #46982: octopus: make check: unittest_rbd_mirror (Child aborted): failed, despite all tests passed
* Backport #46998: octopus: S3 API DELETE /{bucket}?encryption or DELETE /{bucket}?replication removes the bucket
* Backport #46999: octopus: mgr/dashboard: landing page 2.0
* Backport #47014: octopus: librados|libcephfs: use latest MonMap when creating from CephContext
* Backport #47016: octopus: mds: fix the decode version
* Backport #47018: octopus: mds: kcephfs parse dirfrag's ndist is always 0
* Backport #47021: octopus: client: shutdown race fails with status 141
* Backport #47037: octopus: rgw: Space usage accounting overestimated
* Backport #47057: octopus: Decrease log level for bucket resharding
* Backport #47080: octopus: mds: decoding of enum types on big-endian systems broken
* Backport #47082: octopus: [rbd-mirror] test can still race and fail creation of peer
* Backport #47083: octopus: mds: 'forward loop' when forward_all_requests_to_auth is set
* Backport #47087: octopus: mds: recover files after normal session close
* Backport #47089: octopus: After restarting an mds, its standby-replay mds remained in the "resolve" state
* Backport #47091: octopus: mon: stuck osd_pgtemp message forwards
* Backport #47099: octopus: [migration] using abort can result in the loss of data
* Backport #47144: octopus: rgw: RGWLifecycleConfiguration::dump() cannot dump transitions
* Backport #47146: octopus: ceph-volume: backport the libstoragemgmt integration
* Backport #47147: octopus: pybind/mgr/nfs: Test mounting of exports created with nfs export command
* Backport #47151: octopus: pybind/mgr/volumes: add debugging for global lock
* Backport #47192: octopus: mgr/dashboard: telemetry module throws error "list index out of range"
* Backport #47195: octopus: Default value for 'bluestore_volume_selection_policy' is wrong
* Backport #47197: octopus: mgr/dashboard: Disable autocomplete on user form
* Backport #47199: octopus: mgr/dashboard: Datatable catches select events from other datatables
* Backport #47237: octopus: ceph-volume inventory does not read mpath properly
* Backport #47245: octopus: Add bucket name to bucket stats error logging
* Backport #47247: octopus: qa: "Replacing daemon mds.a as rank 0 with standby daemon mds.b" in cluster log
* Backport #47249: octopus: mon: deleting a CephFS and its pools causes MONs to crash
* Backport #47251: octopus: add ability to clean_temps in osdmaptool
* Backport #47253: octopus: mds: fix possible crash when the MDS is stopping
* Backport #47255: octopus: client: Client::open() passes wrong cap mask to path_walk
* Backport #47256: octopus: 'ceph-volume raw prepare' fails to prepare because ceph-osd cannot acquire lock on device
* Backport #47258: octopus: Add pg count for pools in the `ceph df` command
* Backport #47260: octopus: client: FAILED assert(dir->readdir_cache[dirp->cache_index] == dn)
* Backport #47282: octopus: Prometheus metrics contain stripped/incomplete ipv6 address
* Backport #47284: octopus: journal size can't be overridden with --journal-size when using --journal-devices in lvm batch mode
* Backport #47314: octopus: 'request failed: (13) Permission denied' from radosgw-admin period pull with --remote
* Backport #47316: octopus: mds: CDir::_omap_commit(int): Assertion `committed_version == 0' failed.
* Backport #47319: octopus: RGWObjVersionTracker does not track version over increments
* Backport #47321: octopus: rgw: v4 signature does not match when listing objects with delimiter=" "
* Backport #47346: octopus: mon/mon-last-epoch-clean.sh failure
* Backport #47348: octopus: RGW returns 404 code for unauthorized instead of 401
* Backport #47349: octopus: mgr/dashboard: REST API returns 500 when no Content-Type is specified
* Backport #47351: octopus: include/encoding: Fix encode/decode of float types on big-endian systems
* Backport #47363: octopus: pgs inconsistent, union_shard_errors=missing
* Backport #47409: octopus: avoid py3 format strings for now
* Backport #47410: octopus: daemon may be missing in mgr service map
* Backport #47412: octopus: rgw: creating a bucket via swift returns 403
* Backport #47414: octopus: mgr/dashboard: increase Grafana iframe height to avoid scroll bar
* Backport #47416: octopus: [udev] include image namespace in symlink path
* Backport #47424: octopus: compressor: Make Zstandard compression level a configurable option
* Backport #47426: octopus: zabbix config-show failed to indent output
* Backport #47460: octopus: [test] rbd_snaps_ops will fail attempting to create pool
* Backport #47461: octopus: mgr/dashboard: Update datatable only when necessary
* Backport #47462: octopus: mgr plugins might endlessly loop when unregistering rados/cephfs client instances
* Backport #47466: octopus: AdminSocket::do_accept() terminate called after throwing an instance of 'std::out_of_range'
* Backport #47503: octopus: fix simple activate when legacy osd
* Backport #47531: octopus: /usr/bin/ceph IOError exception from stdout.flush
* Backport #47539: octopus: mgr/dashboard: read-only modals
* Backport #47545: octopus: add missing device health dependencies to rpm and deb
* Backport #47547: octopus: mgr/dashboard: many-to-many matching not allowed: matching labels must be unique on one side
* Backport #47559: octopus: mgr/dashboard: It's currently not possible to edit some parts of an iSCSI target when a user is connected
* Backport #47569: octopus: mgr/dashboard: table detail rows overflow
* Backport #47574: octopus: krbd: optionally skip waiting for udev events
* Backport #47576: octopus: systemd: Support Graceful Reboot for AIO Node
* Backport #47599: octopus: qa/standalone/mon/mon-handle-forward.sh failure
* Backport #47601: octopus: mgr/nfs: Cluster creation throws 'NoneType' object has no attribute 'replace' error in rook
* Backport #47602: octopus: mgr/dashboard: enable per-RBD graphs
* Backport #47604: octopus: mds: purge_queue's _calculate_ops is inaccurate
* Backport #47606: octopus: mgr/dashboard: non-administrator users can't log in when telemetry notification is on
* Backport #47607: octopus: mgr/dashboard/api: move/create OSD histogram in separate endpoint
* Backport #47608: octopus: mds: OpenFileTable::prefetch_inodes during rejoin can cause out-of-memory
* Backport #47619: octopus: mgr/dashboard: fix performance issue when listing large amounts of buckets
* Backport #47621: octopus: mgr/dashboard: some nfs-ganesha endpoints are not in correct security scope
* Backport #47623: octopus: various quota failures
* Backport #47641: octopus: rbd: make common options override krbd-specific options
* Backport #47644: octopus: cephadm RPM package installs /etc/sudoers.d/cephadm - review whether this file is still needed
* Backport #47649: octopus: ceph-volume lvm batch race condition
* Backport #47657: octopus: mgr/dashboard: Dashboard becomes unresponsive when SMART data not available
* Backport #47658: octopus: mgr/dashboard: smartctl data shown not integrated in tabset
* Backport #47668: octopus: Some structs aren't bound to mempools properly
* Backport #47675: octopus: mgr/dashboard: cluster > manager modules
* Backport #47687: octopus: rgw: FAIL: test_all (tasks.mgr.dashboard.test_rgw.RgwBucketTest)
* Backport #47695: octopus: mgr/dashboard: Copy to clipboard does not work in Firefox
* Backport #47704: octopus: rbd-nbd: unmap ignores namespace when searching device by image spec
* Backport #47706: octopus: ObjectCacher with read-ahead and overwrites might result in missed wake-up
* Backport #47736: octopus: rgw_file: avoid long delay on shutdown
* Backport #47739: octopus: mgr/devicehealth: device_health_metrics pool gets created even without any OSDs in the cluster
* Backport #47747: octopus: mon: set session_timeout when adding to session_map
* Backport #47762: octopus: Add compression stats by pool to the prometheus scrape
* Backport #47770: octopus: mgr/dashboard: Issue a warning when a replicated pool is created with [min_]size == 1
* Backport #47792: octopus: mgr/dashboard: Add short descriptions to the telemetry report preview
* Backport #47802: octopus: test/librados: endian bugs with checksum test cases
* Backport #47811: octopus: mgr/dashboard: do not rely on realm_id value when retrieving zone info
* Backport #47815: octopus: rgw: fix setting of namespace in ordered and unordered bucket listing
* Backport #47817: octopus: rgw: allow rgw-orphan-list to note when rados objects are in a namespace
* Backport #47819: octopus: rgw: radosgw-admin does not paginate internally when listing bucket
* Backport #47822: octopus: mgr/dashboard: error when typing existing folder name in the NFS-Ganesha form
* Backport #47824: octopus: pybind/mgr/volumes: Make number of cloner threads configurable
* Backport #47826: octopus: osd/osd-rep-recov-eio.sh: TEST_rados_repair_warning: return 1
* Backport #47832: octopus: mgr/dashboard: error when creating an NFS export with CephFS path `/`
* Backport #47836: octopus: Add OIDC provider support in RGW STS
* Backport #47845: octopus: add no-systemd argument to zap
* Backport #47850: octopus: rgw/rgw_file: incorrect lru object eviction in lookup_fh
* Backport #47877: octopus: Create NFS Ganesha Cluster instructions are misleading
* Backport #47886: octopus: [journal] object recorder can race while lock is temporarily released for callbacks
* Backport #47888: octopus: [librbd] update AioCompletion return value before evaluating pending count
* Backport #47889: octopus: [object-map] ignore missing object map on disable request
* Backport #47891: octopus: mgr/nfs: Pseudo path prints wrong error message
* Backport #47896: octopus: rgw: orphan list teuthology test uses `dnf`, which may not always be available
* Backport #47898: octopus: mon stat prints plain text with -f json
* Backport #47915: octopus: hammer packages not found on shaman
* Backport #47934: octopus: tools/rados: `rados ls` with json output can result in out of memory error
* Backport #47936: octopus: mds FAILED ceph_assert(sessions != 0) in function 'void SessionMap::hit_session(Session*)'
* Backport #47938: octopus: rgw: rgw-orphan-list should use "plain" formatted `rados ls` output
* Backport #47940: octopus: mon/MDSMonitor: divide mds identifier and mds real name with dot
* Backport #47942: octopus: client: hang after statfs
* Backport #47943: octopus: mgr/dashboard: Merge disable and disableDesc table action methods
* Backport #47944: octopus: mgr/dashboard: adapt NFS-Ganesha design change in Octopus (daemons -> services)
* Backport #47954: octopus: vstart.sh: failed to run with multiple active MDS when setting max_mds
* Backport #47955: octopus: listing pending GCs is very slow
* Backport #47956: octopus: GC perfcounter fails to update when deletion occurs
* Backport #47958: octopus: mon/MDSMonitor: when all MDS processes in the cluster are stopped at the same time, some MDS cannot enter the "failed" state
* Backport #47960: octopus: rgw lc expiration header is returned although it should not be
* Backport #47962: octopus: beast frontend option to set the request_timeout_ms
* Backport #47987: octopus: MonClient: mon_host with DNS Round Robin results in 'unable to parse addrs'
* Backport #47989: octopus: cephfs client and nfs-ganesha have inconsistent reference count after releasing cache
* Backport #47991: octopus: qa: "client.4606 isn't responding to mclientcaps(revoke), ino 0x200000007d5 pending pAsLsXsFscr issued pAsLsXsFsxcrwb, sent 60.889494 seconds ago"
* Backport #47994: octopus: nautilus: ObjectStore/SimpleCloneTest: invalid rm coll
* Backport #48003: octopus: rgw: fix S3 API KeyCount incorrect return
* Backport #48007: octopus: qa: rbd-nbd unmap test is still racy
* Backport #48088: octopus: ceph-volume simple activate ignores osd_mount_options_xfs for Filestore OSD
* Backport #48184: octopus: ceph-volume lvm batch doesn't work anymore with --auto and full SSD/NVMe devices
* Backport #48186: octopus: the --log-level flag is not respected
* Backport #48188: octopus: remove mention of dmcache from docs and help text
* Backport #48226: octopus: mgr/dashboard: Use pipe instead of calling function within template
* Backport #48228: octopus: Log "ceph health detail" periodically in cluster log
* Backport #48303: octopus: ceph-volume lvm batch fails activating filestore dmcrypt osds
* Backport #48304: octopus: prepare: the *-slots arguments have no effect
* Backport #48343: octopus: Unable to disable SSO
* Backport #48353: octopus: Fails to deploy osd in rook, throws index error
* Backport #48366: octopus: libstoragemgmt calls fatally wound Areca RAID controllers on mira
* Backport #48396: octopus: mgr/dashboard: Disable TLS 1.0 and 1.1
* Bug #48425: mgr/insights: ModuleNotFoundError: No module named 'six'
* Bug #48432: test/lazy-omap-stats fails to compile on alpine linux
* Bug #48440: log [ERR] : scrub mismatch
* Backport #48579: octopus: setting noscrub crashed osd process
* Bug #48581: MON: global_init: error reading config file
* Support #48630: non-LVM OSD do not start after upgrade from 15.2.4 -> 15.2.7
* Backport #48637: octopus: pybind/ceph_volume_client: allows authorize on auth_ids not created through ceph_volume_client
* Bug #48656: cephadm botched install of ceph-fuse (symbol lookup error)
* Bug #48669: libec_isa.so with TEXTREL for ceph-v15.2.8 on aarch64
* Bug #48689: Erratic MGR behaviour after new cluster install
* Bug #48728: ceph-immutable-object-cache stuck on overlay rbd snapshot with log image name
* Support #48730: client read/write rebalance from os problem
* Bug #48784: Ceph-volume lvm batch fails with AttributeError: module 'ceph_volume.api.lvm' has no attribute 'is_lv'
* Bug #48797: lvm batch calculates wrong extents
* Bug #48870: cephadm: Several services in error status after upgrade to 15.2.8: unrecognized arguments: --filter-for-batch
* Bug #48924: cephadm: upgrade process failed to pull target image: not enough values to unpack (expected 2, got 1) (podman 2.2.1 breakage)
* Bug #48933: cephadm: EOFError: couldn't load message header, expected 9 bytes, got 0
* Bug #49013: cephadm: Service definition causes some container startups to fail
* Bug #49076: cephadm: Bootstrapping fails: json.decoder.JSONDecodeError: Expecting value: line 3217 column 25 (char 114688)
* Bug #49158: doc: ceph-monstore-tools might create wrong monitor store
* Bug #49166: All OSD down after docker upgrade: KernelDevice.cc: 999: FAILED ceph_assert(is_valid_io(off, len))
* Bug #49170: BlueFS might end up with huge WAL files when upgrading OMAPs
* Bug #49204: Ceph dashboard SAML2 - 415 Unsupported Media Type
* Bug #49280: mds/orch: bare/short hostname as a number is not supported
* Bug #49302: Huge amount of RGW crashes in the multisite setup with a backtrace
* Support #49499: new osds created by orchestrator running different image version
* Support #49549: createRole failed
* Support #49639: Getting Error while writing Spark Dataframe to Ceph Storage using Spark 3.0.1 (Hadoop 3.2) / Spark 2.4.5 (Hadoop 2.7)