# v14.2.8

* Backport #41106: nautilus: mds: add command that modifies session metadata
* Backport #41853: nautilus: mds: reject sessionless messages
* Backport #41865: nautilus: mds: ask idle client to trim more caps
* Backport #42120: nautilus: pg_autoscaler should show a warning if pg_num isn't a power of two (see the power-of-two check sketched after this list)
* Backport #42615: nautilus: mgr/volumes: add `fs subvolume extend/shrink` commands (see the subvolume CLI sketch after this list)
* Backport #42631: nautilus: client: FAILED assert(cap == in->auth_cap)
* Backport #42650: nautilus: mds: no assert on frozen dir when scrub path
* Backport #42738: nautilus: mgr/volumes: cleanup libcephfs handles on mgr shutdown
* Backport #42790: nautilus: mgr/volumes: add `fs subvolume resize infinite` command
* Backport #42800: nautilus: functional tests only assume correct number of osds if branch under test is mimic or luminous
* Backport #42886: nautilus: mgr/volumes: allow setting uid, gid of subvolume and subvolume group during creation
* Backport #42898: nautilus: get_devices has unexpected return depending on file system location
* Backport #42936: mgr/dashboard: Dashboard can't handle self-signed cert on Grafana API
* Backport #42945: nautilus: prepare: allow raw block devices for wal/db/journal partitions
* Backport #42951: nautilus: Test failure: test_subvolume_snapshot_ls (tasks.cephfs.test_volumes.TestVolumes)
* Backport #43022: nautilus: ceph-volume: minor clean-up of `simple scan` subcommand help
* Backport #43024: nautilus: mgr/volumes: improve volume deletion process
* Backport #43046: nautilus: mgr: "mds metadata" to setup new DaemonState races with fsmap
* Backport #43085: nautilus: pybind / cephfs: remove static typing in LibCephFS.chown
* Backport #43117: nautilus: unit tests import unittest.mock, which doesn't exist in py2
* Backport #43137: nautilus: pybind/mgr/volumes: idle connection drop is not working
* Backport #43138: nautilus: mds: reports unrecognized message for mgrclient messages
* Backport #43141: nautilus: tools/cephfs: linkages injected by cephfs-data-scan have first == head
* Backport #43143: nautilus: mds: tolerate no snaprealm encoded in on-disk root inode
* Backport #43159: nautilus: mgr/dashboard: KeyError on dashboard reload
* Backport #43201: nautilus: wrongly used a string type as int value for CEPH_VOLUME_SYSTEMD_TRIES and CEPH_VOLUME_SYSTEMD_INTERVAL
* Backport #43219: nautilus: mgr/volumes: ERROR: test_subvolume_create_with_desired_uid_gid (tasks.cephfs.test_volumes.TestVolumes)
* Backport #43239: nautilus: ok-to-stop incorrect for some ec pgs
* Backport #43245: nautilus: osd: increase priority in certain OSD perf counters
* Backport #43256: nautilus: monitor config store: Deleting logging config settings does not decrease log level
* Backport #43271: nautilus: qa/tasks: Fix raises that don't re-raise in test_volumes.py
* Backport #43275: nautilus: TestMixedType.test_filter_all_data_devs must patch VolumeGroups
* Backport #43281: nautilus: ceph-volume does not respect $PATH
* Backport #43321: nautilus: ceph-volume lvm batch wrong partitioning with multiple osd per device and db-devices
* Backport #43333: nautilus: mgr/dashboard: iSCSI targets not available if any iSCSI gateway is down
* Backport #43338: nautilus: qa/tasks: add remaining tests for fs volume
* Backport #43341: nautilus: add deactivate unit tests
* Backport #43343: nautilus: mds: client does not respond to cap revoke after session stale->resume cycle
* Backport #43345: nautilus: mds: metadata changes may be lost when MDS is restarted
* Backport #43346: nautilus: short pg log + cache tier: ceph_test_rados out of order reply
* Backport #43348: nautilus: mds: crash (FAILED assert(omap_num_objs <= MAX_OBJECTS))
* Backport #43354: nautilus: mgr/dashboard: Prevent deletion of iSCSI IQNs with open sessions
* Backport #43355: nautilus: mgr/dashboard: Javascript error when deleting an iSCSI target
* Backport #43462: nautilus: Clarify the message "could not find osd.%s with fsid %s"
* Backport #43473: nautilus: recursive lock of OpTracker::lock (70)
* Backport #43503: nautilus: mount.ceph: give a hint message when no mds is up or cluster is laggy
* Backport #43506: nautilus: MDSMonitor: warn if a new file system is being created with an EC default data pool
* Backport #43509: nautilus: 'ceph -s' does not show standbys if there are no filesystems
* Backport #43558: nautilus: mds: reject forward scrubs when cluster has multiple active MDS (more than one rank)
* Backport #43568: nautilus: qa: test setUp may cause spurious MDS_INSUFFICIENT_STANDBY
* Backport #43570: nautilus: ceph-volume lvm list $device doesn't work if $device is a symlink
* Backport #43573: nautilus: cephfs-journal-tool: will crash without any extra argument
* Backport #43624: nautilus: mds: note which features a client has when rejecting it due to feature incompat
* Backport #43628: nautilus: client: disallow changing fuse_default_permissions option at runtime
* Backport #43629: nautilus: mgr/volumes: provision subvolumes with config metadata storage in cephfs
* Backport #43631: nautilus: segv in collect_sys_info
* Backport #43650: nautilus: Improve upmap change reporting in logs
* Backport #43722: nautilus: common: bufferlist::last_p is not updated by operator=(const bufferlist&)
* Backport #43724: nautilus: mgr/volumes: subvolumes with snapshots can be deleted
* Backport #43726: nautilus: osd-recovery-space.sh has a race
* Backport #43727: nautilus: mgr/pg-autoscaler: Autoscaler creates too many PGs for EC pools
* Backport #43729: nautilus: client: chdir does not raise error if a file is passed
* Backport #43731: nautilus: mon crash in OSDMap::_pg_to_raw_osds from update_pending_pgs
* Backport #43733: nautilus: qa: ffsb suite causes SLOW_OPS warnings
* Backport #43770: nautilus: mount.ceph fails with ERANGE if name= option is longer than 37 characters
* Backport #43772: nautilus: qa/standalone/misc/ok-to-stop.sh occasionally fails
* Backport #43777: nautilus: qa: test_full racy check: AssertionError: 29 not greater than or equal to 30
* Backport #43780: nautilus: qa: Test failure: test_drop_cache_command_dead (tasks.cephfs.test_misc.TestCacheDrop)
* Backport #43781: nautilus: ceph config show does not display fsid correctly
* Backport #43784: nautilus: fs: OpenFileTable object shards have too many k/v pairs
* Backport #43790: nautilus: RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
* Backport #43792: nautilus: rgw lc: Object (current version) transition from Standard storage class to any other makes it non-current
* Backport #43811: nautilus: mgr/dashboard: user with no config-opt permissions getting 403 redirection
* Backport #43819: nautilus: mgr: increase default pg num for pools to 32
* Backport #43821: nautilus: OSDMonitor: SIGFPE in OSDMonitor::share_map_with_random_osd
* Backport #43822: nautilus: Ceph assimilate-conf results in config entries which cannot be removed
* Backport #43846: nautilus: rgw: unable to abort multipart upload after the bucket got resharded
* Backport #43849: nautilus: add sizing arguments to prepare
* Backport #43853: nautilus: batch silently changes OSD type in non-interactive mode
* Backport #43871: nautilus: batch --bluestore regression: unable to create OSDs
* Backport #43873: nautilus: mgr/devicehealth: fix telemetry stopping device reports after 48 hours
* Backport #43874: nautilus: rgw: possible coredump when a reload operation happens
* Backport #43877: nautilus: rgw: one part of the bulk delete (RGWDeleteMultiObj_ObjStore_S3) fails but no error messages are returned
* Backport #43879: nautilus: mon: segv in MonOpRequest::~MonOpRequest OpHistory::cleanup
* Backport #43916: nautilus: mon/PaxosService.cc: 188: FAILED ceph_assert(have_pending) during n->o upgrade
* Backport #43922: nautilus: rgw_file: avoid string::front() on empty path
* Backport #43924: nautilus: Per-pool pg states for prometheus
* Backport #43928: nautilus: mon/Elector.cc: FAILED ceph_assert(m->epoch == get_epoch())
* Backport #43944: nautilus: mgr/dashboard: Unable to remove an iSCSI gateway that is already in use
* Backport #43974: nautilus: mgr/telemetry: anonymize the smartctl report itself
* Backport #43979: nautilus: "ceph telemetry show" shows error: AttributeError: 'NoneType' object has no attribute 'items'
* Backport #43984: nautilus: has_bluestore_label() doesn't work when vg/lv is passed
* Backport #43986: nautilus: lvm list for single report is broken when passing vg/lv
* Backport #43989: nautilus: osd: Allow 64-char hostname to be added as the "host" in CRUSH
* Backport #44000: nautilus: HEALTH_OK is reported with no managers (or OSDs) in the cluster
* Backport #44020: pybind/mgr/volumes: restore from snapshot
* Backport #44032: nautilus: lvm list always reports only the first lv in a vg, no matter what was passed
* Backport #44035: nautilus: ceph-volume inventory reports ceph LVs as available
* Backport #44047: nautilus: ceph-volume fails when rerunning lvm create on already existing OSDs
* Backport #44057: nautilus: telemetry module can crash on entity name with multiple '.' separators
* Backport #44082: nautilus: expected MON_CLOCK_SKEW but got none
* Backport #44085: nautilus: rebuild-mondb doesn't populate mgr commands -> pg dump EINVAL
* Bug #44097: nautilus: "cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)"
* Bug #44101: nautilus: qa: df pool accounting incomplete
* Backport #44109: nautilus: ceph-volume lvm batch raises error on skipped devices
* Backport #44112: nautilus: batch: AttributeError: 'Device' object has no attribute 'pvs_api'
* Bug #44133: Using VIM in a file system is very slow
* Backport #44135: nautilus: zap fails with multi-device vgs
* Backport #44152: nautilus: strategy/filestore.py doesn't pass journal_size as a string
* Backport #44153: nautilus: zapping filestore journals broken
* Bug #44245: nautilus: mgr: connection halt
* Backport #44282: nautilus: mgr/volumes: deadlock when trying to purge a large number of trash entries
* Backport #44315: nautilus: pybind/mgr/volumes: incomplete async unlink
* Fix #44376: nautilus: mgr/telemetry: fix UUID and STR concat
* Bug #44572: ceph osd status crash
* Bug #44939: The mon and/or osd pod memory consumption is not even. One of them consumes about 50% more.
* Bug #44976: MDS problems: slow requests, cache pressure, damaged metadata after upgrading from 14.2.7 to 14.2.8
* Bug #45006: ceph-mgr runs on inactive node
* Bug #45099: "s3cmd info s3://bucket": An unexpected error has occurred.
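
The pg_autoscaler warning backported in #42120 boils down to a power-of-two test on a pool's `pg_num`. The snippet below is a minimal illustration of that condition and of picking a nearby power of two; it is not the module's actual code.

```python
def is_power_of_two(n: int) -> bool:
    """A positive power of two has exactly one bit set."""
    return n > 0 and (n & (n - 1)) == 0

def nearest_power_of_two(n: int) -> int:
    """Round n to the closer of the two surrounding powers of two (ties round up)."""
    lower = 1 << (n.bit_length() - 1)
    return lower if n - lower < 2 * lower - n else lower * 2

# Example: 48 and 100 would trigger the warning; 32 and 128 would not.
for pg_num in (32, 48, 100, 128):
    if not is_power_of_two(pg_num):
        print(f"pg_num {pg_num} is not a power of two; consider {nearest_power_of_two(pg_num)}")
```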
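
Several entries above (#42615, #42790, #42886) extend the `fs subvolume` CLI. Below is a minimal sketch of driving those commands from Python, assuming a reachable cluster with admin credentials, an existing volume named `cephfs`, and a hypothetical subvolume `sub0`; the flag names are as recalled for Nautilus and should be verified against `ceph fs subvolume --help` on the installed release.

```python
import subprocess

def ceph(*args: str) -> str:
    """Run a `ceph` CLI command and return its stdout (raises on a non-zero exit)."""
    return subprocess.run(["ceph", *args], check=True,
                          capture_output=True, text=True).stdout

# Create a 10 GiB subvolume owned by uid/gid 1000 (#42886 adds --uid/--gid at creation).
ceph("fs", "subvolume", "create", "cephfs", "sub0",
     "--size", str(10 * 2**30), "--uid", "1000", "--gid", "1000")

# Grow it to 20 GiB (#42615), then remove the size limit entirely (#42790).
ceph("fs", "subvolume", "resize", "cephfs", "sub0", str(20 * 2**30))
ceph("fs", "subvolume", "resize", "cephfs", "sub0", "inf")
```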