# v14.2.10

* Backport #39230: nautilus: mgr/dashboard: Update existing E2E tests to match new format
* Backport #41122: nautilus: rgw: GET/HEAD and PUT operations on buckets w/lifecycle expiration configured do not return x-amz-expiration header
* Backport #41508: nautilus: add information about active scrubs to "ceph -s" (and elsewhere)
* Backport #42151: nautilus: mgr/dashboard: Improve workaround to redraw datatables
* Backport #42164: nautilus: mgr/dashboard: REST API: OpenAPI docs require internet connection
* Backport #42168: nautilus: readable.sh test fails
* Backport #42325: nautilus: missing lock release in DaemonServer::handle_report()
* Backport #42441: nautilus: mds: create a configurable snapshot limit
* Backport #42569: nautilus: mgr/dashboard: Missing service metadata is not handled correctly
* Bug #42651: mgr/dashboard: error when editing rbd image whose name contains non-ASCII chars.
* Backport #42662: nautilus: Issue a HEALTH_WARN when a Pool is configured with [min_]size == 1
* Backport #42673: nautilus: mgr/dashboard: searching table with data in Object types makes Dashboard unresponsive
* Backport #42713: nautilus: mgr: daemon state for mds not available
* Backport #42856: nautilus: test: LibCephFS.ShutdownRace segfaults (msgr v2 related part)
* Backport #43087: nautilus: bluefs: sync_metadata leaks dirty files if log_t is empty
* Backport #43098: nautilus: mgr/dashboard: MDS counter chart: We should display the total number of requests in the last seconds
* Backport #43134: nautilus: multisite: failed assert(cursor) in mdlog trimming
* Backport #43156: nautilus: kafka thread is spinning at 100% when there is no work
* Backport #43258: nautilus: mgr/dashboard: Language selection issues on Firefox
* Backport #43464: nautilus: mgr: restful socket was not closed properly.
* Backport #43469: nautilus: asynchronous recovery + backfill might spin pg undersized for a long time
* Backport #43659: nautilus: metadata is missing in bucket deletion notifications
* Backport #43773: nautilus: qa/tasks/mon_thrash: hide traceback from mon scrub failures
* Backport #43775: nautilus: msg/async: local_connection is marked down after draining the stack.
* Backport #43820: nautilus: mgr default value handling is broken
* Backport #43832: nautilus: CephxSessionHandler::_calc_signature segv
* Backport #43848: nautilus: upload part copy range able to get almost any string
* Backport #43851: nautilus: Dynamic resharding not working for empty zonegroup in period
* Backport #43852: nautilus: osd-scrub-snaps.sh fails
* Backport #43855: nautilus: rgw: SignatureDoesNotMatch when s3 client uses an IPv6 address
* Backport #43878: nautilus: rgw: when you abort a multipart upload request, the quota may not be updated
* Backport #43919: nautilus: osd stuck down
* Backport #43920: nautilus: common/bl: claim_append() corrupts memory when a bl consecutively has at least two unshareable bptrs
* Backport #43923: nautilus: multisite: incremental data sync does not enforce spawn window
* Backport #43990: nautilus: FAIL: test_health_history (tasks.mgr.test_insights.TestInsights)
* Backport #43995: nautilus: ceph orchestrator rgw rm: no valid command found
* Backport #43997: nautilus: Ceph tools utilizing "global_[pre_]init" no longer process "early" environment options
* Backport #43998: nautilus: mgr/dashboard: Manager modules is showing textboxes for boolean values
* Backport #43999: nautilus: multi-part upload will lose data
* Backport #44037: nautilus: when performing multiple object deletion, notifications are not sent
* Backport #44038: nautilus: fix rgw crash when duration is invalid in sts request
* Backport #44046: nautilus: telemetry: crash when posting
* Backport #44060: nautilus: pg_autoscaler: treat target ratios as weights
* Backport #44080: nautilus: [test] fixed CEPH_ARGS processing is causing test failures
* Backport #44081: nautilus: ceph -s does not show >32bit pg states
* Backport #44095: nautilus: mgr/dashboard: use booleanText pipe for RGW user 'system' info
* Backport #44129: nautilus: Beast frontend option to configure the maximum number of connections
* Backport #44136: nautilus: set bucket attr twice when deleting lifecycle config
* Backport #44141: nautilus: failed to set DurationSeconds in sts request
* Backport #44143: nautilus: rgw: ordered listing of bucket with many incomplete multipart uploads fails
* Backport #44145: nautilus: rgw: failed to set correct storage class for append upload
* Backport #44146: nautilus: assign bucket policy to subuser
* Backport #44163: Bump fio version for glibc-2.30 compilation
* Backport #44174: nautilus: mgr/dashboard: rgw user details > field "System" always "Yes"
* Backport #44199: nautilus: mgr/telemetry: add 'last_upload' to status
* Backport #44200: nautilus: mgr: Add get_rates_from_data from the dashboard to the mgr_util.py
* Backport #44203: nautilus: mgr/dashboard: RGW port autodetection does not support "Beast" RGW frontend
* Backport #44206: nautilus: osd segv in ceph::buffer::v14_2_0::ptr::release (PGTempMap::decode)
* Backport #44218: nautilus: Devicehealth scrape fails when smartctl return code is non-zero
* Backport #44219: nautilus: Module 'pg_autoscaler' has failed: division by zero
* Backport #44226: nautilus: c-v raw list should silence stderr
* Backport #44232: nautilus: rgw: ReplaceKeyPrefixWith and ReplaceKeyWith cannot be set at the same time, and support some HttpErrorCodeReturnedEquals and HttpRedirectCode limits.
* Backport #44259: nautilus: Slow Requests/OP's types not getting logged
* Backport #44260: nautilus: selinux setsched denials for 'fn_anonymous'
* Backport #44263: nautilus: [rbd-mirror] Mirror daemon never recovers from being blacklisted
* Backport #44267: nautilus: rgw: markers can lose namespaces during ordered and unordered bucket listings
* Backport #44289: nautilus: mon: update + monmap update triggers spawn loop
* Backport #44291: nautilus: mds: SIGSEGV in Migrator::export_sessions_flushed
* Backport #44324: nautilus: Receiving RemoteBackfillReserved in WaitLocalBackfillReserved can cause the osd to crash
* Backport #44325: nautilus: ceph-volume lvm get_device_vgs() doesn't filter by prefix
* Backport #44327: nautilus: mgr/dashboard: 'destroyed' view in CRUSH map viewer
* Backport #44328: nautilus: client: bad error handling in Client::_lseek
* Backport #44330: nautilus: qa: multimds suite using centos7
* Backport #44331: nautilus: simple scan on dmcrypt OSDs creates wrong keys in json file
* Backport #44334: nautilus: mgr/dashboard: 'Last Change' column heading
* Backport #44337: nautilus: mds: purge queue corruption from wrong backport
* Backport #44360: nautilus: Rados should use the '-o outfile' convention
* Backport #44364: nautilus: mgr/telemetry: fix and document proxy usage
* Backport #44367: nautilus: telemetry on requires undocumented license argument
* Backport #44370: nautilus: msg/async: the event center is blocked by rdma construct connection for transport ib sync msg
* Backport #44372: nautilus: mgr/dashboard: fix rbd image 'purge trash' button & modal text
* Backport #44375: nautilus: mgr/dashboard: read-only user can display RGW API keys
* Backport #44378: nautilus: mgr/dashboard: 404 on dashboard home when built for RPM
* Backport #44413: nautilus: FTBFS on s390x in openSUSE Build Service due to presence of -O2 in RPM_OPT_FLAGS
* Backport #44434: nautilus: common: vstart.sh: set prometheus port for each mgr.
* Backport #44435: nautilus: mgr/dashboard: security: some system roles allow accessing sensitive information
* Backport #44444: nautilus: rgw (luminous) making implicit_tenants backwards compatible.
* Backport #44464: nautilus: mon: fix/improve mon sync over small keys
* Backport #44466: nautilus: sse-kms: requests to barbican don't set correct http status
* Backport #44468: nautilus: mon: Get session_map_lock before remove_session
* Backport #44472: nautilus: ent_list not cleared inside each loop of bucket list
* Backport #44473: nautilus: pybind/mgr/volumes: add `mypy` support
* Backport #44475: nautilus: mgr/dashboard: Not able to restrict bucket creation for new user
* Backport #44478: nautilus: mds: assert(p != active_requests.end())
* Backport #44480: nautilus: mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2)
* Backport #44482: nautilus: Fix bug on subuser policy identity checker
* Backport #44483: nautilus: mds: assertion failure due to blacklist
* Backport #44484: nautilus: mgr/volumes: synchronize ownership (for symlinks) and inode timestamps for cloned subvolumes
* Backport #44486: nautilus: Nautilus: Random mon crashes in failed assertion at ceph::time_detail::signedspan
* Backport #44490: nautilus: lz4 compressor corrupts data when buffers are unaligned
* Backport #44516: nautilus: segv in MonClient::handle_auth_done
* Backport #44520: nautilus: qa: test_scrub_abort fails during check_task_status("idle")
* Backport #44521: nautilus: qa: ERROR: test_subvolume_snapshot_clone_different_groups (tasks.cephfs.test_volumes.TestVolumes)
* Backport #44524: nautilus: osd status reports old crush location after osd moves
* Backport #44549: nautilus: monitoring: fix RGW grafana chart 'Average GET/PUT Latencies'
* Backport #44574: nautilus: mgr/dashboard: Dashboard does not allow you to set norebalance OSD flag
* Backport #44614: nautilus: notification: topic action fails with "MethodNotAllowed"
* Backport #44639: nautilus: mgr/dashboard: Pool read/write OPS shows too many decimal places
* Backport #44648: nautilus: [test] NBD workunit does not wait for unmap disconnect delay
* Backport #44651: nautilus: devstack-tempest-gate.yaml fails
* Backport #44653: nautilus: Log Level too High during Realm Pull
* Backport #44655: nautilus: qa: SyntaxError: invalid token
* Backport #44668: nautilus: mgr/dashboard: backend API test failure "test_access_permissions"
* Backport #44670: mgr/volumes: support canceling in-progress/pending clone operations.
* Backport #44674: nautilus: mgr/balancer: KeyError messages in balancer module
* Backport #44686: nautilus: osd/osd-backfill-stats.sh TEST_backfill_out2: wait_for_clean timeout
* Backport #44688: nautilus: prepare: the *-slots arguments have no effect
* Backport #44689: nautilus: osd/osd-scrub-repair.sh fails: scrub/osd-scrub-repair.sh:698: TEST_repair_stats_ec: test 11 = 13
* Backport #44690: nautilus: mgr/dashboard: list configured prometheus alerts
* Backport #44711: nautilus: pgs entering premerge state that still need backfill
* Backport #44735: nautilus: prometheus metrics wrongly report scrubbing pgs
* Backport #44818: nautilus: perf regression due to bluefs_buffered_io=true
* Backport #44838: nautilus: [python] ensure image is open before permitting operations
* Backport #44841: nautilus: FAILED ceph_assert(head.version == 0 || e.version.version > head.version) in PGLog::IndexedLog::add()
* Backport #44846: nautilus: install-deps.sh tries to use yum and install yum-utils on centos/rhel 8
* Backport #44868: nautilus: mgr: prometheus Segmentation fault
* Backport #44896: nautilus: librbd: No lockers are obtained, ImageNotFound exception will be output.
* Backport #44899: nautilus: Improve internal python to c++ interface
* Backport #44900: nautilus: mgr progress module: checks the pg_ready key, but it isn't part of the pg_dump interface
* Backport #44917: nautilus: monitoring: alert for prediction of disk and pool fill up broken
* Backport #44920: nautilus: mgr/dashboard: Add more debug information to Dashboard RGW backend
* Backport #44952: nautilus: mgr/dashboard: Some Grafana panels in Host overview, Host details, OSD details etc. are displaying N/A or no data
* Backport #44954: nautilus: monitoring: root volume full alert fires false positives
* Backport #44974: nautilus: simple/scan.py: syntax problem in log statement
* Backport #44980: nautilus: monitoring: Fix incorrect pool capacity
* Backport #44995: nautilus: mgr/dashboard: Unit test is failing because of the timezone
* Backport #44997: nautilus: mgr/dashboard: define SSO/SAML dependencies to packaging
* Backport #44998: nautilus: batch: error on filtered devices in interactive only if usable data devices are present
* Backport #45002: nautilus: batch filter_devices tries to access lvs when there are none
* Backport #45019: nautilus: mgr/dashboard: standby mgr redirects to an IP address instead of a FQDN URL
* Backport #45027: nautilus: mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
* Backport #45035: nautilus: RPM 4.15.1 has some issues with ceph.spec
* Backport #45040: nautilus: mon: reset min_size when changing pool size
* Backport #45043: nautilus: mgr: exception in module serve thread does not log traceback
* Backport #45045: nautilus: ceph-bluestore-tool --command bluefs-bdev-new-wal may damage bluefs
* Backport #45050: nautilus: stale scrub status entry from a failed mds shows up in `ceph status`
* Backport #45054: nautilus: nautilus upgrade should recommend ceph-osd restarts after enabling msgr2
* Backport #45056: nautilus: multisite checkpoint failures in three-zone-plus-pubsub.yaml
* Backport #45060: nautilus: qa/workunits/rest/test-restful.sh fails
* Backport #45064: nautilus: bluestore: unused calculation is broken
* Backport #45070: nautilus: Trying to enable the Ceph Telegraf module errors 'No such file or directory'
* Backport #45073: nautilus: SElinux denials observed on teuthology multisite run
* Backport #45082: nautilus: mgr/dashboard: iSCSI CHAP max length validation
* Backport #45085: nautilus: mgr/dashboard: Editing iSCSI target advanced setting causes a target recreation
* Backport #45123: nautilus: OSD might fail to recover after ENOSPC crash
* Backport #45126: nautilus: Extent leak after main device expand
* Backport #45157: nautilus: mgr/dashboard: Refactor Python unittests and controller
* Backport #45181: nautilus: pybind/mgr/volumes: add command to return metadata regarding a subvolume
* Backport #45208: nautilus: monitoring: alert for pool fill up broken
* Backport #45210: nautilus: enable 'big_writes' fuse option if ceph-fuse is linked to libfuse < 3.0
* Backport #45212: nautilus: client: write stuck at waiting for larger max_size
* Backport #45217: nautilus: qa: FAIL: test_barrier (tasks.cephfs.test_full.TestClusterFull)
* Backport #45221: nautilus: cephfs-journal-tool: cannot set --dry_run arg
* Backport #45224: nautilus: LibRadosWatchNotify.WatchNotify failure
* Backport #45225: nautilus: mon/MDSMonitor: "ceph fs authorize cephfs client.test /test rw" does not give the necessary rights anymore
* Backport #45229: nautilus: ceph fs add_data_pool doesn't set pool metadata properly
* Backport #45231: nautilus: pg_autoscaler throws HEALTH_WARN with auto_scale on for all pools
* Backport #45260: nautilus: rgw: dynamic resharding may process a manually resharded bucket with lower count
* Backport #45273: nautilus: mgr/dashboard: ceph-api-nightly-master-backend and ceph-api-nightly-octopus-backend RuntimeError "test_purge_trash (tasks.mgr.dashboard.test_rbd.RbdTest)"
* Backport #45287: nautilus: ceph-fuse: ceph::__ceph_abort(): ceph-fuse killed by SIGABRT in Client::_do_remount
* Backport #45316: nautilus: Add support for --bucket-id in radosgw-admin bucket stats command
* Backport #45323: nautilus: mgr/dashboard: monitoring menu entry should indicate firing alerts
* Backport #45329: nautilus: monitoring: fix grafana percentage precision
* Backport #45330: nautilus: check-generated.sh finds error in ceph-dencoder
* Backport #45359: nautilus: rados: Sharded OpWQ drops suicide_grace after waiting for work
* Backport #45361: nautilus: rgw_bucket_parse_bucket_key function holds the old tenant value when executed in a loop
* Backport #45365: nautilus: qa: rbd-nbd unmap_device may exit earlier due to incorrect list-mapped filter
* Backport #45391: nautilus: follower monitors can grow beyond memory target
* Backport #45402: nautilus: mon/OSDMonitor: maps not trimmed if osds are down
* Backport #45436: nautilus: rgw: dmclock: the rgw dmclock function does not work correctly under test
* Documentation #45447: mgr/dashboard: document tested and minimum recommended versions of applications in the monitoring stack
* Backport #45469: nautilus: mgr/dashboard: monitoring: Fix "10% OSDs down" alert description
* Backport #45474: nautilus: some obsolete "ceph mds" sub commands are suggested by bash completion
* Backport #45478: nautilus: fix MClientCaps::FLAG_SYNC in check_caps
* Backport #45483: nautilus: Add support for DG_AFFINITY env var parsing.
* Backport #45486: nautilus: infinite loop in 'radosgw-admin datalog list'
* Backport #45496: nautilus: client: fuse mount will print call trace with incorrect options
* Backport #45497: nautilus: mds: MDCache.cc: 2335: FAILED ceph_assert(!"unmatched rstat rbytes" == g_conf()->mds_verify_scatter)
* Backport #45499: nautilus: rgw: some list buckets handle leak
* Backport #45501: nautilus: RGW checks object exists before auth?
* Backport #45502: nautilus: rgw lc does not delete objects that do not have the same tags
* Bug #45515: mgr/dashboard: ceph-api-nautilus-backend "ImportError: Failed to import _strptime because the import lock is held by another thread."
* Backport #45517: nautilus: when "Uploads a part in a multipart upload", if the specified multipart upload does not exist, it should respond with "NoSuchUpload"
* Backport #45540: nautilus: mgr/dashboard: HomeTest fails if there is no real dist folder
* Backport #45577: nautilus: [librbd] The 'copy' method defaults to the source image format
* Backport #45579: nautilus: [python] Image create(...) method defaults to "old_format = True"
* Backport #45582: nautilus: Monitoring: Grafana Dashboard per rbd image
* Backport #45600: nautilus: mds: inode's xattr_map may reference a large amount of memory.
* Backport #45602: nautilus: mds: PurgeQueue does not handle objecter errors
* Backport #45637: nautilus: python3: run-backend-api-tests.sh fails
* Backport #45642: nautilus: src/test/compressor: Add missing gtest #33731
* Backport #45644: nautilus: rgw/notifications: missing versionId in versioned buckets
* Backport #45675: nautilus: qa: TypeError: unsupported operand type(s) for +: 'range' and 'range'
* Backport #45679: nautilus: mds: layout parser does not handle [-.] in pool names
* Backport #45681: nautilus: mgr/volumes: Not able to resize cephfs subvolume with ceph fs subvolume create command
* Backport #45686: nautilus: mds: FAILED assert(locking == lock) in MutationImpl::finish_locking
* Backport #45689: nautilus: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
* Documentation #45730: MDS config reference lists mds log max expiring
* Backport #45780: nautilus: rados/test_envlibrados_for_rocksdb.sh build failure (seen in nautilus)
* Backport #45784: nautilus: KeyError: 'ceph.type'
* Backport #45804: nautilus: qa: verify sub-suite does not define os_version
* Backport #45811: nautilus: keystone [-] Unhandled error: pkg_resources.ContextualVersionConflict: (jsonschema 3.2.0 ...
* Backport #45827: nautilus: MDS config reference lists mds log max expiring
* Backport #45850: nautilus: mgr/volumes: create fs subvolumes with isolated RADOS namespaces
* Bug #45875: mds: add config to require forward to auth MDS
* Backport #45967: nautilus: qa: TestExports failure under new Python3 runtime
* Backport #45974: nautilus: qa: AssertionError: '1' != b'1'
* Bug #46490: osds crashing during deep-scrub
* Bug #46555: Missing dashboard rpms for 14.2.0 and el7 in download.ceph.com
* Bug #46626: The bandwidth of bluestore was throttled
* Bug #46927: Nautilus: RBD map hangs or fails with kernel: [98051.490691] rbd: rbd_dev_v2_snap_context: rbd_obj_method_sync returned -512
* Bug #47103: Ceph is not going into read-only mode when it is 85% full.
* Bug #47104: CopyPartRequest in rook ceph is generating a huge amount of data similar to its usage.
* Support #47150: ceph df - big difference between per-class and per-pool usage
* Bug #47208: ceph-osd Failed to create bluestore
* Bug #47235: rgw/rgw_file: incorrect lru object eviction in lookup_fh
* Bug #47271: ceph version 14.2.10-OSD fails
* Bug #47371: librbd qos assert m_io_throttled failed
* Bug #47418: Ceph changes user metadata with `_` to `-`
* Bug #47527: Ceph returns s3 incompatible xml response for listMultipartUploads
* Bug #47590: osd do not respect scrub schedule
* Bug #47673: cephfs 4k randwrite + EC pool(2+1) + single node all OSDs OOM