# v15.2.4

* Documentation #44284: cephadm: provide a way to modify the initial crushmap
* Feature #44625: cephadm: test dmcrypt
* Documentation #44828: cephadm: clarify "Failed to infer CIDR network for mon ip"
* Backport #44986: octopus: mgr: prometheus Segmentation fault
* Bug #45032: cephadm: Not recovering from `OSError: cannot send (already closed?)`
* Backport #45074: octopus: SElinux denials observed on teuthology multisite run
* Bug #45129: simple (ceph-disk) style OSDs adopted by cephadm don't start after reboot
* Feature #45163: cephadm: iscsi: read and write config-key for the dashboard
* Bug #45174: cephadm: missing parameters on 'orch daemon add iscsi'
* Backport #45232: octopus: mgr/dashboard: Allow expanding/collapsing the data table
* Bug #45245: cephadm: print iscsi container's log to stdout/stderr
* Bug #45249: cephadm: fail to apply an iSCSI ServiceSpec
* Backport #45251: octopus: "ceph fs status" command outputs to stderr instead of stdout when json formatting is passed
* Bug #45284: cephadm: Access host files on "cephadm shell"
* Bug #45293: cephadm: service_id can contain a '.' char (mds, nfs, iscsi)
* Bug #45294: cephadm: rgw realm/zone could contain 'hostname'
* Backport #45315: octopus: mgr/dashboard: Replace Protractor with Cypress
* Backport #45324: octopus: mgr/dashboard: monitoring menu entry should indicate firing alerts
* Backport #45334: octopus: mgr/dashboard: Provide a better workflow to "opt in" to enabling the telemetry mgr plugin
* Backport #45346: octopus: mgr/dashboard: Async unique username validation
* Backport #45357: octopus: rados: Sharded OpWQ drops suicide_grace after waiting for work
* Backport #45362: octopus: mgr/dashboard: Filtering table throws error if data is undefined
* Backport #45364: octopus: qa: rbd-nbd unmap_device may exit earlier due to incorrect list-mapped filter
* Backport #45366: octopus: object size is sent as zero in some notifications
* Backport #45368: octopus: mgr/dashboard: table detail of Services is not displayed
* Backport #45369: octopus: Monitoring: Grafana Dashboard per rbd image
* Backport #45370: octopus: mgr/dashboard: Automatic preselection of failure domains in erasure code profile form
* Documentation #45377: mgr/dashboard: document Prometheus' security model
* Bug #45393: Containerized osd config must be updated when adding/removing mons
* Bug #45394: cephadm: fail to create/preview OSDs via drive group
* Bug #45407: cephadm: Speed up OSD deployment preview
* Documentation #45411: cephadm: add section about container images
* Bug #45417: cephadm: nfs grace remove killed before completion
* Bug #45427: cephadm: auth get failed: invalid entity_auth mon
* Backport #45429: octopus: mgr/dashboard: Add a troubleshooting section to the dashboard documentation
* Bug #45458: non-ascii chars in /etc/prometheus/ceph/ceph_default_alerts.yml
* Backport #45460: octopus: mgr/test_orchestrator: fix _get_ceph_daemons()
* Feature #45463: cephadm: allow custom images for grafana, prometheus, alertmanager and node_exporter
* Backport #45468: octopus: mgr/dashboard: monitoring: Fix "10% OSDs down" alert description
* Backport #45470: octopus: mgr/dashboard: When loading Block Mirroring page it should only do 1 request
* Backport #45471: octopus: qa/rgw: fix issue error in tests_ps.py
* Backport #45473: octopus: some obsolete "ceph mds" sub commands are suggested by bash completion
* Backport #45476: octopus: cephfs-shell: CI testing does not detect flake8 errors
* Backport #45477: octopus: fix MClientCaps::FLAG_SYNC in check_caps
* Backport #45479: octopus: Ceph v12.2.13 causes extreme high number of blocked operations
* Backport #45481: octopus: Add support DG_AFFINITY env var parsing.
* Backport #45484: octopus: RGW tries to cache and access anonymous user
* Backport #45485: octopus: infinite loop in 'radosgw-admin datalog list'
* Backport #45487: octopus: rgw: deprecate radosgw-admin orphans sub-commands
* Backport #45489: octopus: rgw: add `rgw-orphan-list` tool & `radosgw-admin bucket radoslist ...`
* Backport #45492: octopus: rgw: fix bug where bucket listing end marker not always set correctly
* Backport #45495: octopus: client: fuse mount will print call trace with incorrect options
* Backport #45498: octopus: rgw: some list buckets handle leak
* Backport #45500: octopus: RGW check object exists before auth?
* Backport #45539: octopus: mgr/dashboard: HomeTest fails if there is no real dist folder
* Backport #45541: octopus: mgr/dashboard: E2E: Timed out retrying: Expected to find content: 'rq' within the element: but never did.
* Backport #45557: octopus: mgr/dashboard: Dashboard breaks on the selection of a bad pool
* Bug #45560: cephadm: fail to create OSDs
* Backport #45578: octopus: [librbd] The 'copy' method defaults to the source image format
* Backport #45580: octopus: [python] Image create(...) method defaults to "old_format = True"
* Backport #45585: octopus: ceph manpage lists obsolete/unsupported cache tier modes
* Backport #45597: octopus: mgr/rbd_support: rename "rbd_trash_trash_purge_schedule" oid to "rbd_trash_purge_schedule"
* Backport #45598: octopus: [librbd] failure in unit test due to race in disable mirroring class destruction
* Backport #45601: octopus: mds: inode's xattr_map may reference a large memory.
* Backport #45603: octopus: mds: PurgeQueue does not handle objecter errors
* Bug #45617: mgr/orch: mds with explicit naming
* Bug #45625: cephadm: when configuring monitoring with ceph orch, ceph dashboard is only partly configured
* Bug #45627: cephadm: frequently getting `1 hosts fail cephadm check`
* Bug #45629: cephadm: Allow users to provide ssh keys during bootstrap
* Bug #45632: nfs: auth credentials for recovery database include mds
* Backport #45643: octopus: rgw/notifications: missing versionId in versioned buckets
* Backport #45673: octopus: qa: powercycle: install task runs twice with double unwind causing fatal errors
* Backport #45674: octopus: qa: TypeError: unsupported operand type(s) for +: 'range' and 'range'
* Backport #45676: octopus: rados/test_envlibrados_for_rocksdb.sh fails on Xenial (seen in nautilus)
* Backport #45678: octopus: mds: layout parser does not handle [-.] in pool names
* Backport #45680: octopus: mgr/volumes: Not able to resize cephfs subvolume with ceph fs subvolume create command
* Backport #45682: octopus: Large (>=2 GB) writes are incomplete when bluefs_buffered_io = true
* Backport #45685: octopus: mds: FAILED assert(locking == lock) in MutationImpl::finish_locking
* Backport #45688: octopus: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
* Bug #45696: cephadm: Validate bootstrap dashboard "key" and "cert" file exists
* Backport #45697: octopus: mgr/dashboard: replace hard-coded Telemetry URL
* Backport #45704: octopus: cls/queue: queue return empty markers when listing entries
* Backport #45707: octopus: mgr/dashboard: redirect page after changing the pwd
* Backport #45708: octopus: mds: wrong link count under certain circumstance
* Backport #45710: octopus: mgr/dashboard: Proposed Login Screen
* Bug #45724: check-host should not fail using fqdn or not that hard
* Backport #45727: octopus: mgr/dashboard: add grafana dashboards for rgw multisite sync info
* Backport #45738: mgr/dashboard: Proposed About modal box
* Backport #45763: octopus: [rbd-mirror] image replayer stop might race with remove and instance replayer shut down
* Backport #45773: octopus: vstart_runner: LocalFuseMount.mount should set set.mounted to True
* Backport #45775: octopus: build_incremental_map_msg missing incremental map while snaptrim or backfilling
* Backport #45777: octopus: notification: amqp with vhost and user/password is failing
* Backport #45779: octopus: rados/test_envlibrados_for_rocksdb.sh build failure (seen in nautilus)
* Backport #45782: octopus: amqp delivery guarantees are incorrect
* Backport #45783: octopus: KeyError: 'ceph.type'
* Backport #45787: octopus: Proposed About modal box
* Backport #45799: octopus: librbd: make rbd_read_from_replica_policy actually work
* Backport #45801: octopus: Blacklist leads to potential rewatch live-lock loop
* Backport #45836: octopus: Monitoring: legends of throughput panel in RBD detail dashboard are not correct
* Backport #45838: octopus: mds may start to fragment dirfrag before rollback finishes
* Backport #45840: octopus: keystone [-] Unhandled error: pkg_resources.ContextualVersionConflict: (jsonschema 3.2.0 ...
* Backport #45842: octopus: ceph-fuse: the -d option couldn't enable the debug mode in libfuse
* Backport #45844: octopus: radosgw gc issue - failed to list objs: (22) Invalid argument
* Backport #45845: octopus: ceph-fuse: building the source code failed with libfuse3.5 or higher versions
* Backport #45846: octopus: qa/fuse_mount.py: tests crash when /sys/fs/fuse/connections is absent
* Backport #45848: octopus: qa/tasks/vstart_runner.py: TypeError: mount() got an unexpected keyword argument 'mountpoint'
* Backport #45849: octopus: mgr/volumes: create fs subvolumes with isolated RADOS namespaces
* Backport #45851: octopus: mds: scrub on directory with recently created files may fail to load backtraces and report damage
* Bug #45861: data_devices: limit 3 deployed 6 osds per node
* Backport #45880: octopus: ceph-osd: add osdspec-affinity flag
* Backport #45881: octopus: [rbd-mirror] loss of lock might result in asserting failure when starting journal replay
* Backport #45882: octopus: Objecter: don't attempt to read from non-primary on EC pools
* Backport #45884: octopus: osd-scrub-repair.sh: SyntaxError: invalid syntax
* Backport #45885: octopus: [rbd-mirror] ensure ops are canceled during replayer shut down before waiting
* Backport #45886: octopus: qa: AssertionError: '1' != b'1'
* Backport #45888: octopus: client: fails to reconnect to MDS
* Backport #45895: octopus: mgr/dashboard: Reduce component's style size
* Backport #45921: octopus: mgr/dashboard: extra spaces after services' name in the Cluster/Hosts page
* Backport #45941: octopus: ceph-fuse build failure against libfuse v3.9.1
* Backport #45952: octopus: mgr/dashboard: Show labels in hosts page
* Backport #46001: octopus: pybind/mgr/volumes: add command to return metadata regarding a subvolume snapshot
* Backport #46011: octopus: qa: TestExports is failure under new Python3 runtime
* Backport #46013: octopus: qa: commit 9f6c764f10f break qa code in several places
* Backport #46061: octopus: qa/tasks/cephadm: setup site based container registry
* Backport #46111: octopus: mgr/dashboard: Fix chrome and chromium binaries verification
* Backport #46136: mgr/dashboard: Different autocomplete input backgrounds in chrome and firefox
* Bug #46267: unittest_lockdep failure
* Bug #46295: RGW returns 404 code for unauthorized instead of 401
* Bug #46304: "ceph dashboard set-grafana-api-url" not being applied when using cephadm
* Bug #46305: mgr/dashboard: `ceph dashboard set-grafana-api-url` not being applied
* Bug #46330: Accessing as an invalid user will result in an infinite loop in getting a SessionKey.
* Bug #46541: cephadm: OSD is marked as unmanaged in cephadm deployed cluster
* Bug #46574: Cephfs with kernel client mtime stuck when multiple clients append to file
* Bug #46578: Container for iscsi gateway does not have tcmu-runner running as service
* Bug #46658: Ceph-OSD nautilus/octopus memory leak?
* Bug #46687: MGR_MODULE_ERROR: Module 'cephadm' has failed: No filters applied
* Bug #46698: RadosGW - Swift + loadbalancer
* Bug #47029: rbd qos doesn't work
* Bug #47110: Ceph dashboard not working: rook-ceph-mgr-a pod: "OOM KILL" and "CrashLoopBackOff".
* Support #47177: can not remove orch service (mgr) - Failed to remove service. was not found.
* Bug #47188: mgr/dashboard: ceph dashboard does not display device health data, with error message "No SMART data available"
* Bug #47299: Assertion in pg_missing_set: p->second.need <= v || p->second.is_delete()
* Bug #47327: STS AssumeRole API get 400 response
* Documentation #47436: Cluster monitor troubleshooting documentation outdated?
* Support #47455: How to recover cluster that lost its quorum?
* Bug #47534: disabling crash warnings