# v14.2.17

* Backport #46014: nautilus: log: the time precision of log is only milliseconds because the option log_coarse_timestamps doesn’t work well
* Backport #46194: nautilus: BlueFS replay log grows without end
* Backport #46611: nautilus: cephfs.pyx: passing empty string is fine but passing None is not to arg conffile in LibCephFS.__cinit__
* Backport #47198: nautilus: mgr/dashboard: Datatable catches select events from other datatables
* Backport #47620: nautilus: mgr/dashboard: some nfs-ganesha endpoints are not in correct security scope
* Backport #47669: nautilus: Some structs aren't bound to mempools properly
* Backport #47672: nautilus: Hybrid allocator might cause duplicate admin socket command registration.
* Backport #47803: nautilus: test/librados: endian bugs with checksum test cases
* Backport #47823: nautilus: pybind/mgr/volumes: Make number of cloner threads configurable
* Backport #47846: nautilus: add no-systemd argument to zap
* Backport #47878: nautilus: build-integration-branch merges newer PRs first
* Backport #47899: nautilus: mon stat prints plain text with -f json
* Backport #47933: nautilus: tools/rados: `rados ls` with json output can result in out of memory error
* Backport #47935: nautilus: mds FAILED ceph_assert(sessions != 0) in function 'void SessionMap::hit_session(Session*)'
* Backport #47939: nautilus: mon/MDSMonitor: divide mds identifier and mds real name with dot
* Backport #47953: nautilus: vstart.sh: failed to run with multi active mds, when setting max_mds.
* Backport #47957: nautilus: mon/MDSMonitor: stop all MDS processes in the cluster at the same time. Some MDS cannot enter the "failed" state
* Backport #47988: nautilus: cephfs client and nfs-ganesha have inconsistent reference count after release cache
* Backport #47990: nautilus: qa: "client.4606 isn't responding to mclientcaps(revoke), ino 0x200000007d5 pending pAsLsXsFscr issued pAsLsXsFsxcrwb, sent 60.889494 seconds ago"
* Backport #47995: nautilus: monitoring: Use null yaxes min for OSD read latency
* Backport #48040: nautilus: librbd qos assert m_io_throttled failed
* Backport #48083: nautilus: rbd-nbd: the asok path and log file are using the parent pid, which has exited
* Backport #48087: nautilus: ceph-volume simple activate ignores osd_mount_options_xfs for Filestore OSD
* Backport #48093: nautilus: Hybrid allocator might segfault when fallback allocator is present
* Backport #48095: nautilus: mds: fix file recovery crash after replaying delayed requests
* Backport #48097: nautilus: osdc: FAILED ceph_assert(bh->waitfor_read.empty())
* Backport #48100: nautilus: Admin API returns 200 instead of 404 for Get Bucket Info
* Backport #48110: nautilus: client: ::_read fails to advance pos at EOF checking
* Backport #48128: nautilus: Unnecessary bilogs are left in sync-disabled buckets
* Backport #48130: nautilus: some clients may return failure in the scenario where multiple clients create directories at the same time
* Backport #48133: nautilus: mgr/dashboard: user can change the cluster of a NFS-Ganesha export
* Backport #48180: nautilus: mgr/dashboard: Display users' current bucket quota usage
* Backport #48187: nautilus: the --log-level flag is not respected
* Backport #48189: nautilus: remove mention of dmcache from docs and help text
* Backport #48192: nautilus: mds: throttle workloads which acquire caps faster than the client can release
* Backport #48193: nautilus: bufferlist c_str() sometimes clears assignment to mempool
* Backport #48195: nautilus: mgr/volumes: allow/deny r/rw access of auth IDs to subvolume and subvolume groups
* Backport #48224: nautilus: [librbd] removing pool config overrides does not cause config refresh
* Backport #48227: nautilus: Log "ceph health detail" periodically in cluster log
* Backport #48238: nautilus: list object versions returned multiple 'IsLatest true' entries
* Backport #48282: nautilus: osd: fix bluestore bitmap allocator
* Backport #48284: nautilus: /etc/sudoers.d/ceph-osd-smartctl file permissions don't conform to standards
* Backport #48286: nautilus: rados/upgrade/nautilus-x-singleton fails due to cluster [WRN] evicting unresponsive client
* Backport #48342: nautilus: [rbd_support] Attempting to background remove in-use image results in apparent stuck progress
* Backport #48344: nautilus: Unable to disable SSO
* Backport #48346: nautilus: rgw: unnecessary payload is added at the end of the message
* Backport #48352: nautilus: Fails to deploy osd in rook, throws index error
* Backport #48371: nautilus: mds: dir->mark_new should together with dir->mark_dirty
* Backport #48374: nautilus: client: dump which fs is used by client for multiple-fs
* Backport #48376: nautilus: libcephfs allows calling ftruncate on a file open read-only
* Backport #48379: nautilus: invalid values of crush-failure-domain should not be allowed while creating erasure-coded profile
* Backport #48395: nautilus: mgr/dashboard: Disable TLS 1.0 and 1.1
* Backport #48400: nautilus: mgr: don't update osd stat which is already out
* Backport #48413: nautilus: lvm/create.py: typo in the help message
* Backport #48426: nautilus: Put policy should return 204 instead of 200
* Backport #48428: nautilus: rgw: expiration is triggered in advance because of an overflow problem
* Backport #48444: nautilus: octopus: setting noscrub crashed osd process
* Backport #48457: nautilus: client: fix crash when doing remount in none fuse case
* Backport #48479: nautilus: bluefs _allocate failed to allocate bdev 1 and 2, cause ceph_assert(r == 0)
* Backport #48482: nautilus: PG::_delete_some isn't optimal iterating objects
* Backport #48495: nautilus: Paxos::restart() and Paxos::shutdown() can race leading to use-after-free on 'logger' object. Seen in Nautilus.
* Backport #48512: nautilus: AttributeError: module 'lib' has no attribute 'Cryptography_HAS_TLSEXT_HOSTNAME'
* Backport #48516: nautilus: mgr/dashboard: SSL Handshake: Update the inbuilt ssl providers error
* Backport #48518: nautilus: pybind: test_readlink() fails due to missing terminating NULL char
* Backport #48520: nautilus: client: add ceph.cluster_fsid/ceph.client_id vxattr support in libcephfs
* Backport #48537: nautilus: mgr/dashboard: test_standby* (tasks.mgr.test_dashboard.TestDashboard) failed locally
* Backport #48543: nautilus: rgw_file: common_prefixes returned out of lexical order
* Backport #48558: nautilus: mgr/restful: _gather_osds() mistakenly treats a `str` as a `dict`
* Backport #48575: nautilus: Module 'crash' has failed: dictionary changed size during iteration
* Backport #48576: nautilus: RGW prefetches data for range requests
* Backport #48588: nautilus: mgr/dashboard: RGW User Form is validating disabled fields
* Backport #48593: nautilus: mgr/dashboard: Drop invalid RGW client instances, improve logging
* Backport #48595: nautilus: qa/standalone/scrub/osd-scrub-test.sh: _scrub_abort: return 1
* Backport #48608: nautilus: mgr/dashboard: enable different URL for users of browser to Grafana
* Backport #48614: nautilus: Audit log: mgr module passwords set on CLI written as plaintext in log files
* Backport #48628: nautilus: mgr/dashboard: The /rgw/status endpoint does not check for running service
* Backport #48634: nautilus: qa: tox failures
* Backport #48641: nautilus: Client: the directory's capacity will not be updated after write data into the directory
* Backport #48643: nautilus: client: ceph.dir.entries does not acquire necessary caps
* Backport #48653: nautilus: mgr/dashboard: Display a warning message in Dashboard when debug mode is enabled
* Backport #48655: nautilus: mgr/dashboard: CLI commands: read passwords from file
* Backport #48675: nautilus: update krbd_stable_pages_required.sh to use stable_writes queue attribute
* Backport #48691: nautilus: librbd::image::CreateRequest: validate_features: cannot use internally controlled features
* Backport #48724: nautilus: radosgw-admin bucket limit check percentage warnings don't work
* Backport #48733: nautilus: mgr/dashboard: minimize Back-end API Test console output/log traces
* Backport #48738: nautilus: CVE-2020-27839: mgr/dashboard: The ceph dashboard is vulnerable to XSS attacks
* Backport #48744: nautilus: S3 error: 404 (NoSuchBucket) due to distribute cache is not being invoked
* Backport #48768: nautilus: mgr/PyModule.cc: set_config with an empty value doesn't remove the config option
* Backport #48803: nautilus: Infinite loop in old reset-stats
* Backport #48814: nautilus: mds: spurious wakeups in cache upkeep
* Backport #48837: nautilus: have mount helper pick appropriate mon sockets for ms_mode value
* Backport #48857: nautilus: RGW:Multisite: Verify if the synced object is identical to source
* Backport #48859: nautilus: pybind/mgr/volumes: inherited snapshots should be filtered out of snapshot listing
* Backport #48879: nautilus: mds: fix recall defaults based on feedback from production clusters
* Backport #48887: nautilus: master FTBFS with glibc 2.32
* Bug #48892: One object with multi current version
* Backport #48897: nautilus: Mgr deadlock occurs in the process of cluster expansion and reduction
* Backport #48901: nautilus: mgr/volumes: get the list of auth IDs that have been granted access to a subvolume using mgr/volumes CLI
* Backport #48903: nautilus: Ceph-osd refuses to bind on an IP on the local loopback lo
* Backport #48927: nautilus: mgr/dashboard: can't log in when using the development server
* Backport #48957: nautilus: Logic error in default prom alert 'pool filling up'
* Backport #48961: nautilus: mgr/dashboard: incorrect validation in rgw user form for tenanted users
* Backport #48969: nautilus: ocf:ceph:rbd resource agent does not support namespaces
* Backport #48987: nautilus: ceph osd df tree reporting incorrect SIZE value for rack having an empty host node
* Backport #49002: nautilus: in-tree cram tarball broke down after 10 years of distinguished service
* Backport #49005: nautilus: mgr should update mon metadata when mon map is updated
* Backport #49012: nautilus: krbd: add support for msgr2 (kernel 5.11)
* Backport #49023: nautilus: mgr/dashboard: trigger alert if some nodes have a MTU different than the majority of them
* Backport #49026: nautilus: build failure on fedora-34/rawhide with boost 1.75
* Backport #49028: nautilus: mgr/volumes: evict clients based on auth ID and subvolume mounted
* Backport #49036: nautilus: ceph: reexpand the config meta just after the fork() is done
* Backport #49045: nautilus: tasks.rgw_multi.tests.test_multipart_object_sync fails
* Backport #49055: nautilus: pick_a_shard() always select shard 0
* Bug #49062: nautilus mgr/dashboard: fix "ceph dashboard iscsi-gateway-add"
* Backport #49067: nautilus: install-deps.sh,deb,rpm: move python-saml deps into debian/control an…
* Backport #49094: nautilus: Ceph-volume lvm batch fails with AttributeError: module 'ceph_volume.api.lvm' has no attribute 'is_lv'
* Backport #49130: nautilus: multipart object names may have null characters
* Backport #49140: nautilus: fail to create OSDs because the requested extent is too large
* Backport #49160: nautilus: qa: :ERROR: test_idempotency
* Backport #49182: nautilus: [test] ceph_test_rbd_mirror_random_write is non-functional
* Bug #49189: rgw_file: RGWLibFS::read success executed, but nodata readed
* Backport #49202: nautilus: Since the local loopback address is set to a virtual IP, OSD can't restart.
* Backport #49248: nautilus: mgr/dashboard: customize CherryPy Server Header
* Backport #49267: nautilus: qa: :ERROR: test_recover_auth_metadata_during_authorize
* Backport #49271: nautilus: mgr/dashboard: delete EOF when reading passwords from file
* Bug #49278: nautilus: mgr/dashboard: python 2: error when setting user's non-ASCII password
* Backport #49290: nautilus: fix typo in batch log message
* Backport #49314: nautilus: monitoring: add some leeway for package drops and errors (1%)
* Backport #49323: nautilus: mgr/dashboard: fix MTU Mismatch alert
* Backport #49327: nautilus: mgr/dashboard: avoid using document.write()
* Backport #49382: nautilus: multisite: etag verifier misidentifies multipart uploads with only one part
* Backport #49389: nautilus: mgr/dashboard: the tooltips for Provisioned/Total Provisioned fields of an RBD image are invisible
* Backport #49420: nautilus: mgr/dashboard: set XFrame options and Content Security Policy headers
* Backport #49430: nautilus: mgr/volumes: Bump up the AuthMetadataManager's version to 6
* Backport #49440: nautilus: radosgw-admin user create error message is confusing if user with supplied email address already exists
* Backport #49447: nautilus: pybind/ceph_volume_client: volume authorize/deauthorize crashes with 'volume' key not found
* Backport #49480: nautilus: Bluefs improperly handles huge (>4GB) writes which causes data corruption
* Backport #49540: nautilus: mgr/prometheus: add metric for SLOW_OPS healthcheck
* Backport #49596: nautilus: mgr/dashboard: ERROR: test_a_set_login_credentials (tasks.mgr.dashboard.test_auth.AuthTest)
* Feature #49953: cephfs-top: allow configurable stats refresh interval
* Bug #50031: osdc _throttle_op function param type of op_budget int is too small