# v15.2.5

* Bug #43681: cephadm: Streamline RGW deployment
* Bug #44252: cephadm: mgr,mds scale-down should prefer standby daemons
* Bug #44458: octopus: mgr/dashboard: dropdown menu item of column filters might exceed the viewport boundary
* Feature #44548: cephadm: persist osd removal queue
* Feature #44628: cephadm: Add initial firewall management to cephadm
* Feature #44866: cephadm root mode: support non-root users + sudo
* Bug #44877: mgr/dashboard: allow custom dashboard grafana url when set by cephadm
* Feature #44886: cephadm: allow use of authenticated registry
* Bug #44926: dashboard: creating a new bucket causes InvalidLocationConstraint
* Bug #45016: mgr: `ceph tell mgr mgr_status` hangs
* Bug #45097: cephadm: UX: Traceback if `orch host add mon1` fails
* Bug #45155: mgr/dashboard: Error listing orchestrator NFS daemons
* Backport #45209: octopus: monitoring: alert for pool fill up broken
* Bug #45252: cephadm: fail to insert modules when creating iSCSI targets
* Feature #45263: osdspec/drivegroup: not enough filters to define layout
* Cleanup #45321: Service spec: unify `spec:` vs omitting `spec:`
* Backport #45426: octopus: ObjectStore/StoreTestSpecificAUSize.SpilloverTest/2 failed
* Backport #45449: octopus: mgr/dashboard: The max. buckets field in RGW user form should be pre-filled
* Backport #45475: octopus: qa: mgr/dashboard: Replace Telemetry module in REST API test
* Bug #45594: cephadm: weight of a replaced OSD is 0
* Backport #45645: octopus: [rfe] rgw: parallelize single-node lifecycle processing
* Bug #45726: Module 'cephadm' has failed: auth get failed: failed to find client.crash. in keyring
* Backport #45786: octopus: dashboard/rbd: Add button to copy the bootstrap token into the clipboard
* Backport #45855: octopus: mgr/dashboard: Improve SummaryService's getCurrentSummary method
* Documentation #45858: `ceph orch status` doesn't show in-progress actions
* Feature #45859: cephadm: use fixed versions
* Bug #45872: ceph orch device ls exposes the `device_id` under the DEVICES column, which isn't too useful for the user
* Backport #45889: octopus: mgr/dashboard: Pool form max size
* Backport #45913: octopus: rgw crashes while accessing an invalid iterator in gc update entry
* Backport #45922: octopus: [rfe] rgw: add lifecycle perfcounters
* Backport #45924: octopus: radosgw-admin bucket list/stats does not list/stat all buckets if user owns more than 1000 buckets
* Backport #45926: octopus: Bucket quota not checked in copy operation
* Backport #45928: octopus: rgw/swift stat can hang
* Backport #45931: octopus: Add support for wildcard subuser on bucket policy
* Backport #45933: octopus: Add user identity to OPA request
* Backport #45951: octopus: add access log line to the beast frontend
* Backport #45953: octopus: vstart: Support deployment of ganesha daemon by cephadm with NFS option
* Bug #45961: cephadm: high load and slow disk make "cephadm bootstrap" fail
* Bug #45980: cephadm: implement missing "FileStore not supported" error message and update DriveGroup docs
* Bug #45999: cephadm shell: picking up legacy_dir
* Backport #46003: octopus: vstart: set $CEPH_CONF when calling ganesha-rados-grace commands
* Backport #46005: octopus: rgw: bucket index entries marked rgw.none not accounted for correctly during reshard
* Backport #46007: octopus: PrimaryLogPG.cc: 627: FAILED ceph_assert(!get_acting_recovery_backfill().empty())
* Backport #46009: octopus: ObjectStore/StoreTestSpecificAUSize.ExcessiveFragmentation/2 failed
* Backport #46015: octopus: log: the time precision of log is only milliseconds because the option log_coarse_timestamps doesn't work well
* Backport #46016: octopus: osd-backfill-stats.sh failing intermittently in TEST_backfill_sizeup_out() (degraded outside margin)
* Backport #46020: octopus: mgr/dashboard/rbd: throws 500s with format 1 RBD images
* Bug #46036: cephadm: killmode=none: systemd units failed, but containers still running
* Bug #46045: qa/tasks/cephadm: Module 'dashboard' is not enabled error
* Backport #46048: octopus: mgr/dashboard: cropped actions menu in nested details
* Documentation #46052: Module 'cephadm' has failed: DaemonDescription: Cannot calculate service_id:
* Bug #46081: cephadm: mds permissions for osd are unnecessarily permissive
* Backport #46085: octopus: handle multiple ganesha.nfsd's appropriately in vstart.sh
* Backport #46086: octopus: osd: wake up all threads of shard rather than one thread
* Backport #46087: octopus: [prometheus] auto-configure RBD metric exports for all RBD pools
* Backport #46089: octopus: PG merge: FAILED ceph_assert(info.history.same_interval_since != 0)
* Backport #46095: octopus: Issue health status warning if num_shards_repaired exceeds some threshold
* Bug #46098: Exception adding host using cephadm
* Backport #46106: octopus: pybind/mgr/volumes: add API to manage NFS-Ganesha gateway clusters in exporting subvolumes
* Backport #46112: octopus: Report wrong rejected reason in inventory subcommand if device type is invalid
* Backport #46115: octopus: Add statfs output to ceph-objectstore-tool
* Backport #46117: octopus: "ActivePyModule.cc: 54: FAILED ceph_assert(pClassInstance != nullptr)" due to race when loading modules
* Backport #46121: octopus: mgr/k8sevents backport to sanitise the data coming from kubernetes
* Bug #46138: mgr/dashboard: Error creating iSCSI target
* Backport #46148: octopus: functional tests: pass pv_devices to ansible
* Backport #46150: octopus: [object-map] possible race condition when disabling object map with active IO
* Backport #46152: octopus: test_scrub_pause_and_resume (tasks.cephfs.test_scrub_checks.TestScrubControls) fails intermittently
* Backport #46155: octopus: Test failure: test_create_multiple_exports (tasks.cephfs.test_nfs.TestNFS)
* Backport #46156: octopus: Test failure: test_export_create_and_delete (tasks.cephfs.test_nfs.TestNFS)
* Backport #46165: octopus: osd: make message cap option usable again
* Backport #46171: octopus: mgr/prometheus: cache ineffective when gathering data takes longer than 5 seconds
* Backport #46173: octopus: mgr/dashboard: Replace broken osd
* Bug #46175: cephadm: orch apply -i: MON and MGR service specs must not have a service_id
* Backport #46183: octopus: ceph config show does not display fsid correctly
* Backport #46185: octopus: cephadm: mds permissions for osd are unnecessarily permissive
* Backport #46186: octopus: client: fix snap directory atime
* Backport #46188: octopus: mds: EMetablob replay too long will cause mds restart
* Backport #46190: octopus: mds: cap revoking requests didn't succeed when the client was doing reconnection
* Backport #46193: octopus: BlueFS replay log grows without end
* Backport #46197: octopus: mgr/dashboard: the RBD configuration table has incorrect values in source column in non-default locales
* Backport #46199: octopus: qa: "[WRN] evicting unresponsive client smithi131:z (6314), after 304.461 seconds"
* Backport #46201: octopus: mds: add ephemeral random and distributed export pins
* Backport #46205: octopus: mgr/dashboard: telemetry module activation notification
* Backport #46214: octopus: mgr/dashboard: Add host labels in UI
* Backport #46229: octopus: Ceph Monitor heartbeat grace period does not reset
* Bug #46231: translate.to_ceph_volume: no need to pass the drive group
* Bug #46233: cephadm: Add "--format" option to "ceph orch status"
* Backport #46234: octopus: pybind/mgr/volumes: volume deletion does not always remove the associated osd pools
* Bug #46245: cephadm: set-ssh-config/clear-ssh-config command doesn't take effect immediately
* Backport #46251: octopus: add encryption support to raw mode
* Backport #46261: octopus: larger osd_scrub_max_preemptions values cause Floating point exception
* Bug #46268: cephadm: orch apply -i: RGW service spec id might not contain a zone
* Bug #46271: podman pull: transient "Error: error creating container storage: error creating read-write layer with ID" failure
* Backport #46286: octopus: mon: log entry with garbage generated by bad memory access
* Backport #46289: octopus: mgr/nfs: allow only [A-Za-z0-9-_.] in cluster ID
* Backport #46290: octopus: mgr/nfs: Add interface for listing clusters
* Backport #46291: octopus: mgr/volumes/nfs: Add interface for get and list exports
* Backport #46292: octopus: mgr/nfs: Check cluster exists before creating exports and make exports persistent
* Backport #46307: octopus: unittest_lockdep failure
* Backport #46308: octopus: mgr/dashboard: Display check icon instead of true|false in various datatables
* Backport #46309: octopus: TestMockImageReplayerSnapshotReplayer.UnlinkRemoteSnapshot race on shut down
* Backport #46311: octopus: qa/tasks/cephfs/test_snapshots.py: Command failed with status 1: ['cd', '|/usr/libexec', ...]
* Backport #46313: octopus: mgr/dashboard: Prometheus query error while filtering values in the metrics of Pools and OSDs
* Backport #46314: octopus: mgr/dashboard: wal/db slots in create OSDs form do not work properly in firefox
* Backport #46315: octopus: mgr/volumes: ephemerally pin volumes
* Backport #46322: octopus: profile rbd does not allow the use of RBD_INFO
* Backport #46328: octopus: mgr/dashboard: cdCopy2ClipboardButton no longer supports the 'formatted' attribute
* Bug #46329: cephadm: Dashboard's ganesha option is not correct if there are multiple NFS daemons
* Bug #46332: boost::asio::async_write() does not return error when the remote endpoint is not connected
* Backport #46340: octopus: [rgw] listing bucket via s3 hangs on "ordered bucket listing requires read #1"
* Backport #46343: octopus: rgw: orphan-list timestamp fix
* Backport #46348: octopus: qa/tasks: make sh() in vstart_runner.py identical with teuthology.orchestra.remote.sh
* Backport #46351: octopus: mgr/dashboard: table details flicker if autoReload of table is on
* Backport #46354: octopus: mgr/dashboard: Display user's current bucket quota usage
* Backport #46372: osd: expose osdspec_affinity to osd_metadata
* Backport #46389: octopus: pybind/mgr/volumes: cleanup stale connection hang
* Backport #46394: octopus: FAIL: test_pool_update_metadata (tasks.mgr.dashboard.test_pool.PoolTest)
* Bug #46398: cephadm: can't use custom prometheus image
* Backport #46401: octopus: mgr/nfs: Add interface to show cluster information
* Backport #46402: octopus: client: recover from a killed session (w/ blacklist)
* Backport #46408: octopus: Health check failed: 4 mgr modules have failed (MGR_MODULE_ERROR)
* Backport #46410: octopus: client: supplying ceph_fsetxattr with no value unsets xattr
* Backport #46418: octopus: mgr/dashboard: Password expiration notification is always shown if a date is set
* Bug #46429: cephadm fails bootstrap with new Podman versions 2.0.1 and 2.0.2
* Backport #46436: octopus: mgr/dashboard: Unable to edit iSCSI target which has active session
* Backport #46457: octopus: [RGW]: avc denial observed for pid=13757 comm="radosgw" on starting RabbitMQ at port 5672
* Backport #46459: octopus: rgw: orphan list teuthology test & fully-qualified domain issue
* Backport #46460: octopus: pybind/mgr/balancer: should use "==" and "!=" for comparing strings
* Backport #46462: octopus: rgw: rgw-orphan-list -- fix interaction, quoting, and percentage calc
* Backport #46465: octopus: pybind/mgr/volumes: get_pool_names may indicate volume does not exist if multiple volumes exist
* Backport #46467: octopus: rgw: radoslist incomplete multipart uploads fix marker progression
* Backport #46469: octopus: client: release the client_lock before copying data in read
* Backport #46471: octopus: crash on realm reload during shutdown
* Backport #46475: octopus: aws iam get-role-policy doesn't work
* Backport #46477: octopus: pybind/mgr/volumes: volume deletion should check mon_allow_pool_delete
* Backport #46489: octopus: pybind/mgr/pg_autoscaler/module.py: do not update event if ev.pg_num == ev.pg_num_target
* Backport #46498: octopus: mgr/nfs: Update nfs-ganesha package requirements
* Bug #46502: octopus: mgr/dashboard: fix issue introduced by https://github.com/ceph/ceph/pull/35926
* Backport #46510: octopus: Adding data cache and CDN capabilities
* Backport #46511: octopus: rgw: lc: Segmentation Fault when the tag of the object was not found in the rule
* Backport #46514: octopus: mgr progress module causes needless load
* Backport #46518: octopus: boost::asio::async_write() does not return error when the remote endpoint is not connected
* Backport #46528: octopus: mgr/volumes: `protect` and `clone` operation in a single transaction
* Bug #46534: cephadm podman pull: Digest did not match
* Backport #46536: octopus: ceph_volume_client.py: python 3.8 compatibility
* Bug #46540: cephadm: iSCSI gateway problems
* Bug #46560: cephadm: assigns invalid id to daemons
* Bug #46566: octopus: mgr/dashboard: fix rbdmirroring dropdown menu
* Backport #46570: octopus: mgr/dashboard: fix usage calculation to match "ceph df" way
* Backport #46576: octopus: mgr/dashboard/api: CODEOWNERS
* Backport #46584: octopus: os/bluestore: simplify Onode pin/unpin logic
* Backport #46585: octopus: mgr/nfs: Update about nfs ganesha cluster deployment using cephadm in vstart
* Backport #46586: octopus: The default value of osd_scrub_during_recovery is false since v11.1.1
* Backport #46590: octopus: mgr/dashboard: Use same required field message across the UI
* Backport #46591: octopus: ceph-fuse: ceph-fuse process is terminated by the logrotate task and, what is more serious, an uninterruptible sleep process is produced
* Backport #46593: octopus: [notifications] reading topic info for every op overloads the osd
* Backport #46595: octopus: crash in Objecter and CRUSH map lookup
* Backport #46599: octopus: Rescue procedure for extremely large bluefs log
* Backport #46602: octopus: Fix broken UiApi documentation endpoints and add warning
* Backport #46629: octopus: The bandwidth of bluestore was throttled
* Backport #46631: octopus: mgr/nfs: Remove NParts and Cache_Size from MDCACHE block
* Backport #46632: octopus: mgr/nfs: help for "nfs export create" and "nfs export delete" says "" where the documentation says ""
* Backport #46639: octopus: [iscsi-target-cli page]: add systemctl commands for enabling and starting rbd-target-gw in addition to rbd-target-api
* Backport #46640: octopus: Headers are missing in abort multipart upload response if bucket has lifecycle
* Backport #46642: octopus: qa: random subvolumegroup collision
* Backport #46672: octopus: mgr/dashboard/api: reach 100% test coverage in API controllers
* Backport #46674: octopus: importing rbd diff does not apply zero sequences correctly
* Backport #46693: octopus: mgr/dashboard: Don't have two different unit test mechanics
* Backport #46707: octopus: Cancellation of on-going scrubs
* Backport #46709: octopus: Negative peer_num_objects crashes osd
* Backport #46711: octopus: Object dispatch layers need to ensure all IO is complete prior to shut down
* Backport #46712: octopus: mgr/nfs: Ensure pseudoroot path is absolute and is not just /
* Backport #46715: octopus: Module 'diskprediction_local' has failed: Expected 2D array, got 1D array instead
* Backport #46717: octopus: mgr/prometheus: log time it takes to collect metrics in debug mode
* Backport #46719: octopus: [librbd] assert at Notifier::notify's aio_notify_locker
* Backport #46721: octopus: tools: ceph-immutable-object-cache can start without root permission
* Backport #46722: octopus: osd/osd-bench.sh 'tell osd.N bench' hang
* Backport #46724: octopus: ceph-iscsi: selinux avc denial on rbd-target-api from ioctl access
* Backport #46736: octopus: mgr/dashboard: cpu stats incorrectly displayed
* Backport #46739: octopus: mon: expected_num_objects warning triggers on bluestore-only setups
* Bug #46740: mgr/cephadm: restart of daemon reports host is empty
* Backport #46742: octopus: ceph_osd crash in _committed_osd_maps when failed to encode first inc map
* Bug #46748: Module 'cephadm' has failed: auth get failed: failed to find osd.32 in keyring retval: -2
* Backport #46751: octopus: mgr/dashboard: Add hosts page unit tests
* Backport #46753: octopus: FAIL: test_pool_update_metadata (tasks.mgr.dashboard.test_pool.PoolTest)
* Feature #46775: mgr/cephadm: Enhance AlertManagerSpec to allow adding additional webhook receiver URLs
* Bug #46777: cephadm: Error bootstrapping a cluster with '--registry-json' option
* Backport #46785: octopus: add subcommand to parse drive_groups
* Backport #46788: octopus: mgr/dashboard: Cluster status messages overflow in the landing page
* Backport #46794: octopus: mgr/dashboard: ExpressionChangedAfterItHasBeenCheckedError in OSD delete form
* Backport #46795: octopus: mds: Subvolume snapshot directory does not save attribute "ceph.quota.max_bytes" of snapshot source directory tree
* Backport #46798: octopus: The append operation will trigger the garbage collection mechanism
* Bug #46808: prometheus stats reporting fails with "KeyError"
* Bug #46813: `ceph orch * --refresh` is broken
* Bug #46833: simple (ceph-disk style) OSDs adopted by cephadm must not call `ceph-volume lvm activate`
* Backport #46873: octopus: rgw: lc: fix backward-compat decode
* Backport #46874: octopus: rgw lifecycle versioned encoding mismatch
* Backport #46895: octopus: Fix API test timeout issues
* Backport #46896: octopus: The backend test fails in tasks.mgr.dashboard.test_rbd.RbdTest.test_move_image_to_trash test
* Backport #46907: octopus: mgr/dashboard: Extract documentation link to a component
* Backport #46911: octopus: testing: flake8 uses py2
* Backport #46924: octopus: mgr/dashboard: Unable to edit iSCSI logged-in client
* Backport #46929: octopus: rgw: http requests state should be set before unlink
* Backport #46931: octopus: librados: add LIBRBD_SUPPORTS_GETADDRS support
* Backport #46934: octopus: "No such file or directory" when exporting or importing a pool if locator key is specified
* Backport #46936: octopus: prometheus stats reporting fails with "KeyError"
* Backport #46938: octopus: UnboundLocalError: local variable 'ragweed_repo' referenced before assignment
* Backport #46944: octopus: mgr/dashboard: host labels not shown after adding them
* Backport #46945: octopus: Global and pool-level config overrides require image refresh to apply
* Backport #46949: octopus: OLH entries pending removal get mistakenly resharded to shard 0
* Backport #46951: octopus: nautilus client may hunt for mon very long if msg v2 is not enabled on mons
* Backport #46953: octopus: invalid principal arn in bucket policy grants access to all
* Backport #46955: octopus: multisite: RGWAsyncReadMDLogEntries crash on shutdown
* Backport #46957: octopus: pybind/mgr/nfs: add interface for adding user defined configuration
* Backport #46958: octopus: mgr/status: metadata is fetched async
* Backport #46964: octopus: Pool stats increase after PG merged (PGMap::apply_incremental doesn't subtract stats correctly)
* Backport #46966: octopus: rgw: GETing S3 website root with two slashes // crashes rgw
* Backport #46968: octopus: rgw: break up user reset-stats into multiple cls ops
* Backport #46974: octopus: mgr/dashboard: Strange iSCSI discovery auth behavior
* Backport #46993: octopus: mgr/dashboard: remove password field if login is using SSO and fix error message in confirm password
* Backport #46996: octopus: mgr/crash: invalid crash remove example
* Backport #47001: octopus: mgr/dashboard/api: reduce verbosity in API tests log output
* Backport #47022: octopus: rbd_write_zeroes()
* Backport #47114: octopus: rgw: hold reloader using unique_ptr
* Backport #47121: octopus: mgr/dashboard: replace endpoint of "This week" time range for Grafana in dashboard
* Documentation #47130: Please add basic info into the documentation
* Backport #47155: octopus: mgr/dashboard: redirect to original URL after successful login
* Tasks #47173: octopus 15.2.5
* Bug #47206: Ceph-mon crashes with zero exit code when no space left on device
* Support #47233: cephadm: orch apply mon "label:osd" crashes cluster
* Backport #47297: octopus: osdmaps aren't being cleaned up automatically on healthy cluster
* Backport #47464: octopus: rgw: lc: fix (post-parallel) non-current expiration
* Bug #47592: extract-monmap changes permission on some files
* Bug #47655: AWS put-bucket-lifecycle command fails on the latest minor Octopus release
* Bug #47868: rbd-target-api / one of two services crashes
* Bug #47870: Unable to install/setup Ceph Manager Dashboard
* Bug #47871: radosgw does not properly handle a roleArn when executing assume-role operation
* Bug #47912: Problems with Rados Gateway installation (CEPH)
* Bug #47913: Problems with Rados Gateway installation (CEPH)
* Bug #47997: mgr/dashboard: OSD disk performance statistics not working in grafana
* Bug #48060: data loss in EC pool
* Bug #48080: osd latency not showing data after applying label fix
* Bug #48139: Ceph Dashboard Object Gateway InvalidRange bucket exception
* Bug #48498: octopus: timeout when running the "ceph" command