# v12.2.11

* Backport #24358: luminous: SSL support for beast frontend
* Backport #24759: luminous: test gets ENOSPC from bluestore block device
* Backport #24826: luminous: run-make-check.sh ccache tweaks
* Backport #24929: luminous: qa: test_recovery_pool tries asok on wrong node
* Backport #25201: luminous: ceph-mgr: Module 'influx' has failed
* Backport #26919: luminous: common: (mon) command sanitization accepts floats when Int type is defined, resulting in an exception fault in ceph-mon
* Backport #26943: luminous: os/bluestore/BlueStore.cc: 1025: FAILED assert(buffer_bytes >= b->length) from ObjectStore/StoreTest.ColSplitTest2/2
* Backport #32091: luminous: mds: migrate strays part by part when shutting down mds
* Backport #36145: luminous: fsck: cid is improperly matched to oid
* Backport #36200: luminous: mds: fix mds damaged due to unexpected journal length
* Backport #36206: luminous: nfs-ganesha: ceph_fsal_setattr2 returned Operation not permitted
* Backport #36217: luminous: Some cephfs tool commands silently operate on only rank 0, even if multiple ranks exist
* Backport #36222: luminous: rgw: default quota not set in radosgw for OpenStack users
* Backport #36279: luminous: qa: RuntimeError: FSCID 10 has no rank 1
* Backport #36281: luminous: mds: add drop_cache command
* Backport #36309: luminous: doc: Typo error on cephfs/fuse/
* Backport #36312: luminous: doc: fix broken fstab url in cephfs/fuse
* Backport #36321: luminous: Add support for osd_delete_sleep configuration value
* Backport #36391: luminous: No link-time hardening in ceph rpm builds
* Backport #36407: luminous: [pybind/rbd] Flag RBD_FLAG_FAST_DIFF_INVALID is not exposed in Python bindings
* Backport #36414: luminous: librgw: crashes in multisite configuration
* Backport #36429: luminous: [qa] move OpenStack devstack test to rocky release
* Backport #36436: luminous: rados rm --force-full is blocked when cluster is in full status
* Backport #36456: luminous: client: explicitly show blacklisted state via asok status command
* Backport #36460: luminous: mds: rctime not set on system inode (root) at startup
* Backport #36464: luminous: mgr crash on scrub of unconnected osd
* Backport #36502: luminous: qa: increase rm timeout for workunit cleanup
* Backport #36504: luminous: qa: infinite timeout on asok command causes job to die
* Backport #36506: luminous: mon osdmap cache too small during upgrade to mimic
* Backport #36554: luminous: [rbd-mirror] periodic mirror status timer might fail to be scheduled
* Backport #36556: luminous: RBD client IOPS pool stats are incorrect (2x higher; includes IO hints as an op)
* Backport #36568: luminous: [test] workunit teuthology tasks race with "git clone"
* Backport #36575: luminous: mgr/status: fix fs status subcommand not showing standby-replay MDS' perf info
* Backport #36577: luminous: qa: teuthology may hang on diagnostic commands for fuse mount
* Backport #36630: luminous: potential deadlock in PG::_scan_snaps when repairing snap mapper
* Backport #36636: luminous: osd: race condition opening heartbeat connection
* Backport #36638: luminous: rename does not old ref to replacement onode at old name
* Backport #36642: luminous: Internal fragmentation of ObjectCacher
* Backport #36644: luminous: SSE encryption does not detect ssl termination in proxy
* Backport #36646: luminous: librados api aio tests race condition
* Backport #36657: luminous: Cache-tier forward mode hang in luminous (again)
* Backport #36688: luminous: lock in resharding may expire before the dynamic resharding completes
* Backport #36691: luminous: client: request next osdmap for blacklisted client
* Backport #36695: luminous: mds: cache drop command requires timeout argument when it is supposed to be optional
* Backport #36750: luminous: [restful] deep_scrub is not a valid OSD command
* Backport #36757: luminous: rgw-admin: reshard add can add a non-existent bucket
* Backport #37092: luminous: mds: "src/mds/MDLog.cc: 281: FAILED ceph_assert(!capped)" during max_mds thrashing
* Backport #37154: luminous: tests: ceph-admin-commands.sh workunit does not log what it's doing
* Backport #37272: luminous: ceph-mgr: blocking requests sent to restful api server hang sometimes
* Backport #37274: luminous: debian: packaging needs to reflect move of /etc/bash_completion.d/radosgw-admin from radosgw to ceph-common
* Backport #37284: luminous: rgw: radosgw-admin: reshard status prints status codes as enum value (e.g., "0" rather than something human-readable)
* Backport #37341: luminous: doc: Add bluestore memory autotuning docs
* Backport #37343: luminous: Prioritize user-specified scrubs
* Backport #37349: luminous: when using nfs-ganesha to upload a file, the rgw es sync module fails
* Backport #37362: luminous: mgr: prometheus: it is not possible to determine wal/db devices
* Backport #37363: luminous: Resize state machine missing unblock_writes if shrink is not allowed
* Backport #37365: luminous: doc: edit on github
* Backport #37383: luminous: test: Start using GNU awk and fix archiving directory
* Backport #37397: luminous: "/usr/bin/ld: cannot find -lradospp" in rados mimic
* Backport #37413: luminous: mgr/balancer: add crush_compat_metrics param to change optimization keys
* Backport #37416: luminous: mgr: various python3 fixes
* Backport #37420: luminous: mgr/balancer: add cmd to list all plans
* Backport #37423: luminous: qa: wrong setting for msgr failures
* Backport #37425: luminous: ceph-volume-client: cannot set mode for cephfs volumes as required by OpenShift
* Backport #37427: luminous: msg/async: crashes when authenticator provided by verify_authorizer not implemented
* Backport #37429: luminous: common: WeightedPriorityQueue leaks memory
* Backport #37438: luminous: crushtool: add --reclassify operation to convert legacy crush maps to use device classes
* Backport #37446: luminous: add a command to trim old bucket instances after resharding completes
* Backport #37466: luminous: rgw: master zone deletion without a zonegroup rm would break rgw rados init
* Backport #37475: luminous: multisite: bilog trimming crashes when pgnls fails with EINVAL
* Backport #37478: luminous: src/mgr/DaemonServer.cc: 912: FAILED ceph_assert(daemon_state.exists(key))
* Backport #37482: luminous: Bucket policy and colons in filename
* Backport #37495: luminous: bluefs-bdev-expand aborts
* Backport #37519: luminous: rgw: fix max-size in radosgw-admin and REST Admin API
* Feature #37522: Keystone type user creation
* Backport #37535: luminous: rbd_snap_list_end() segfaults if rbd_snap_list() fails
* Backport #37537: luminous: Incorrect upmap remove
* Bug #37540: luminous: MDSMap session timeout cannot be modified
* Backport #37549: luminous: librgw does not sync s3 user info since startup
* Backport #37551: luminous: multisite: sync gets stuck retrying deletes that fail with ERR_PRECONDITION_FAILED
* Backport #37553: luminous: linger op gets lost during ceph osd pause and ceph osd unpause
* Backport #37555: luminous: rgw: resharding leaves old bucket info objs and index shards behind
* Backport #37563: luminous: rgw: version bucket stats not correct
* Bug #37582: luminous: ceph -s client gets all mgrmaps
* Backport #37600: luminous: doc: broken link on troubleshooting-mon page
* Backport #37602: luminous: mds: severe internal fragmentation when decoding xattr_map from log event
* Backport #37604: luminous: mds: PurgeQueue write error handler does not handle EBLACKLISTED
* Backport #37606: luminous: mds: pinned directories keep being replicated back and forth between exporting mds and importing mds
* Backport #37608: luminous: MDS admin socket command `dump cache` with a very large cache will hang/kill the MDS
* Backport #37610: luminous: qa: pjd test appears to require more than 3h timeout for some configurations
* Bug #37616: SignatureDoesNotMatch with multipart upload from minio-py
* Backport #37623: luminous: qa: client socket inaccessible without sudo
* Backport #37625: luminous: fs status command broken in py3-only environments
* Backport #37627: luminous: mds: fix incorrect l_pq_executing_ops statistics when meeting an invalid item in the purge queue
* Backport #37629: luminous: mds: do not call Journaler::_trim twice
* Backport #37631: luminous: client: do not move f->pos until the write succeeds
* Backport #37633: luminous: mds: remove duplicated l_mdc_num_strays perfcounter set
* Backport #37635: luminous: race when updating wanted caps
* Backport #37643: luminous: ceph-create-keys: fix octal notation for Python 3 without losing compatibility with Python 2
* Bug #37668: AbortMultipartUpload causes data loss (NoSuchKey) when a CompleteMultipartUpload request times out
* Backport #37685: luminous: Remove capability reset command
* Backport #37694: luminous: CephFS remove snapshot results in slow ops
* Backport #37697: luminous: osd_memory_target: failed assert when options mismatch
* Backport #37700: luminous: fuse client can't read file because it can't acquire Fr
* Backport #37737: luminous: MDSMonitor: ignores stopping MDS that was formerly laggy
* Backport #37739: luminous: extend reconnect period when mds is busy
* Backport #37743: luminous: Mgr: OSDMap.cc: 4140: FAILED assert(osd_weight.count(i.first))
* Bug #37754: bucket metadata not deleted after placement and bucket deleted
* Backport #37758: luminous: standby-replay MDS spews messages to log every second
* Backport #37762: luminous: mds: deadlock when setting config value via admin socket
* Bug #37769: __ceph_remove_cap caused kernel crash
* Backport #37806: luminous: OSD logs are not logging slow requests
* Backport #37811: luminous: Empty pg_temps are added to incremental map even if there are no changes in new epoch
* Backport #37813: luminous: mon: segmentation fault during shutdown
* Backport #37820: luminous: mds: create separate config for heartbeat timeout
* Backport #37827: luminous: mgr crash when handle_report updates existing DaemonState for rgw
* Backport #37829: luminous: ceph-fuse: hangs because it misses the reconnect phase when a hot-standby mds switch occurs
* Backport #37831: luminous: Configurable ListBucket max-keys limit
* Bug #37855: only the first subuser can be exported to nfs
* Bug #37879: rgw: fix prefix handling in LCFilter
* Backport #37899: luminous: mds: purge queue recovery hangs during boot if PQ journal is damaged
* Backport #37903: luminous: osd: pg log hard limit can cause crash during upgrade
* Support #37918: apt-get upgrade 12.2.8 to 12.2.10 failed
* Backport #37922: luminous: qa: test_damage expectations wrong for Truncate on some objects
* Backport #37924: luminous: qa: test_damage performs truncate test on same object repeatedly
* Bug #37946: ceph-volume simple scan: AttributeError:
* Backport #37949: luminous: debug logging for v4 auth does not sanitize encryption keys
* Backport #37953: luminous: qa: test_damage needs to silence MDS_READ_ONLY
* Backport #37977: luminous: infinite loop in OpTracker::check_ops_in_flight
* Backport #37985: luminous: cli: dump osd-fsid as part of osd find
* Bug #38005: _scan_snaps no head for
* Bug #38119: rgw can't create bucket because it can't find the zonegroup; location constraint (default) can't be found
* Bug #38226: rgw: data sync: ERROR: failed to read remote data log info: ret=-2