# v12.2.1

* Bug #20594: mds: cache limits should be expressed in memory usage, not inode count
* Backport #20964: luminous: [config] switch to new config option getter methods
* Backport #20968: luminous: rgw: bytes_send and bytes_recv returned by 'usage show' are 0 in master branch
* Backport #21045: luminous: TestMirroringWatcher.ModeUpdated: periodic failure due to injected message failures
* Backport #21046: luminous: "ceph daemon osd.0 get_command_descriptions" needs to have a line break at the end of result
* Backport #21052: luminous: RHEL 7.3 Selinux denials at OSD start
* Backport #21097: luminous: multisite: FAILED assert(prev_iter != pos_to_prev.end()) in RGWMetaSyncShardCR::collect_children()
* Backport #21103: luminous: client: missing space in some client debug log messages
* Backport #21104: luminous: ceph-fuse RPM should require fusermount
* Backport #21107: luminous: fs: client/mds has wrong check to clear S_ISGID on chown
* Backport #21108: luminous: mon/OSDMonitor: "osd pool application get" support
* Backport #21110: luminous: rgw: send data-log list infinitely
* Backport #21112: luminous: get_quota_root sends lookupname op for every buffered write
* Backport #21114: luminous: qa: FS_DEGRADED spurious health warnings in some sub-suites
* Backport #21115: luminous: rgw multisite: objects encrypted with SSE-KMS are stored unencrypted in target zone
* Backport #21116: luminous: rgw multisite: cannot sync objects encrypted with SSE-C
* Backport #21118: luminous: rgw: need to stream metadata full sync init
* Backport #21133: luminous: osd/PrimaryLogPG: sparse read won't trigger repair correctly
* Backport #21135: luminous: rgw: bucket index sporadically reshards to 65521 shards
* Backport #21137: luminous: mgr: 500 error when attempting to view filesystem data
* Backport #21138: luminous: RGW: ACL of object copied from remote source becomes full-control
* Backport #21139: luminous: rgw: put lifecycle configuration fails if Prefix is not set
* Backport #21182: luminous: 'osd crush rule rename' not idempotent
* Backport #21183: luminous: Crash in MonCommandCompletion
* Backport #21184: luminous: NameError: global name 'name' is not defined
* Backport #21185: luminous: rgw_file: incorrect lane lock behavior in evict_block()
* Backport #21187: luminous: fix performance regression
* Backport #21188: luminous: dashboard: usage graph is getting bigger and bigger
* Bug #21222: MDS: standby-replay mds should avoid initiating subtree export
* Bug #21230: the standbys are not updated via "ceph tell mds.*" command
* Backport #21231: luminous: core: interval_set: optimize intersect_of insert operations
* Backport #21233: luminous: memory leak in MetadataHandlers
* Backport #21234: luminous: bluestore: async deferred_try_submit deadlock
* Backport #21235: luminous: thrashosds read error injection doesn't take live_osds into account
* Backport #21236: luminous: build_initial_pg_history doesn't update up/acting/etc
* Backport #21237: luminous: osd crash when changing option "bluestore_csum_type" from "none" to "CRC32"
* Backport #21238: luminous: test_health_warnings.sh can fail
* Backport #21240: luminous: "Health check update" log spam
* Backport #21241: luminous: usage of --inconsistent-index should require user confirmation and print a warning
* Backport #21242: luminous: OSD crash: PrimaryLogPG.cc: 8396: FAILED assert(repop_queue.front() == repop)
* Bug #21243: incorrect erasure-code space in command ceph df
* Bug #21252: mds: asok command error merged with partial Formatter output
* Backport #21265: luminous: [cli] rename of non-existent image results in seg fault
* Backport #21267: luminous: Incorrect grammar in FS message "1 filesystem is have a failed mds daemon"
* Backport #21269: luminous: some generic options can not be passed by rbd-nbd
* Backport #21270: luminous: rgw: shadow objects are sometimes not removed
* Backport #21276: luminous: os/bluestore/BlueFS.cc: 1255: FAILED assert(!log_file->fnode.extents.empty())
* Backport #21277: luminous: [cls] metadata_list API function does not honor `max_return` parameter
* Backport #21278: luminous: the standbys are not updated via "ceph tell mds.*" command
* Backport #21283: luminous: spurious MON_DOWN, apparently slow/laggy mon
* Backport #21288: luminous: [test] various teuthology errors
* Backport #21289: luminous: [rbd] image-meta list does not return all entries
* Feature #21301: expose --sync-stats via admin api
* Bug #21303: rocksdb gets an error: "Compaction error: Corruption: block checksum mismatch"
* Feature #21315: Add option to view IP addresses of clients in output of 'ceph features'
* Backport #21321: luminous: mds: asok command error merged with partial Formatter output
* Backport #21322: luminous: MDS: standby-replay mds should avoid initiating subtree export
* Backport #21323: luminous: MDCache::try_subtree_merge() may print N^2 lines of debug message
* Backport #21325: luminous: bluestore: aio submission deadlock
* Bug #21328: Performance: Slow OSD startup, heavy LevelDB activity
* Backport #21341: luminous: mon/OSDMonitor: deleting pool while pgs are being created leads to assert(p != pools.end) in update_creating_pgs()
* Backport #21342: luminous: ceph mgr versions shows active mgr as "Unknown"
* Backport #21345: luminous: 'rbd/import_export.sh' broken in upgrade:luminous-x:parallel-master
* Backport #21350: luminous: rgw: data encryption sometimes fails to follow AWS settings
* Backport #21357: luminous: mds: segfault during `rm -rf` of large directory
* Backport #21374: luminous: incorrect erasure-code space in command ceph df
* Bug #21378: mds: up:stopping MDS cannot export directories
* Backport #21384: luminous: mds: cache limits should be expressed in memory usage, not inode count
* Backport #21385: luminous: mds: up:stopping MDS cannot export directories
* Backport #21396: Illegal instruction in RocksDB
* Backport #21436: luminous: client: Variable "onsafe" going out of scope leaks the storage it points to
* Backport #21437: luminous: test_filtered_df: assert 0.9 < ratio < 1.1
* Backport #21449: luminous: qa: test_misc creates metadata pool with dummy object resulting in WRN: POOL_APP_NOT_ENABLED
* Backport #21464: luminous: qa: ignorable MDS_READ_ONLY warning
* Backport #21472: luminous: qa: ignorable "MDS cache too large" warning
* Backport #21473: luminous: test hang after mds evicts kclient
* Backport #21484: luminous: qa: fs.get_config on stopped MDS
* Backport #21486: luminous: qa: test_client_pin times out waiting for dentry release from kernel
* Backport #21487: luminous: MDS rank add/remove log messages say wrong number of ranks
* Backport #21488: luminous: qa: failures from pjd fstest
* Backport #21490: luminous: test_rebuild_simple_altpool triggers MDS assertion
* Backport #21513: luminous: mds: src/mds/MDLog.cc: 276: FAILED assert(!capped)
* Backport #21515: luminous: qa: kcephfs: missing whitelist for evicted client
* Backport #21516: luminous: qa: kcephfs: ignore warning on expected mds failover
* Backport #21517: luminous: qa: kcephfs: client-limits: whitelist "MDS cache too large"
* Backport #21540: luminous: whitelist additions
* Bug #21588: upgrade from jewel to luminous on Ubuntu needs firewall restart
* Bug #21619: RGW Reshard error add failed to drop lock on
* Bug #21734: mount client shows total capacity of cluster but not of a pool
* Bug #21736: Cannot create bluestore OSD
* Bug #21751: OSD crashes during recovery on arm64 due to assert in Throttle::put
* Bug #21770: ceph mon core dump when using 'ceph osd perf' cmd
* Bug #21809: Raw Used space is 70x higher than actually used space (maybe orphaned objects from pool deletion)
* Bug #21820: Ceph OSD crash with Segfault
* Bug #21827: OSD crashed while repairing inconsistent PG
* Bug #22005: result -108 xferred 2000, blk_update_request: I/O error
* Bug #22015: Civetweb reports bad response code
* Bug #22055: ghost rbd snapshot
* Bug #22066: bluestore osd asserts repeatedly with ceph-12.2.1/src/include/buffer.h: 882: FAILED assert(_buffers.size() <= 1024)
* Bug #22088: Docs: describe missing steps in Luminous upgrade docs
* Bug #22094: Lots of reads on default.rgw.usage pool
* Bug #22102: BlueStore crashed on rocksdb checksum mismatch
* Bug #22122: rgw: bucket index object not deleted after radosgw-admin bucket rm --purge-objects --bypass-gc
* Bug #22125: systemctl commands use literal '*'
* Bug #22149: Cannot set custom port for prometheus exporter
* Feature #22168: The RGW Admin Ops API is missing the ability to filter for e.g. buckets and users
* Bug #22218: multisite: rgw crashed during meta sync
* Support #22224: memory leak
* Support #22243: Luminous: EC pool using more space than it should
* Bug #22271: vdbench I/O drops to 0 when resizing the image at the same time
* Bug #22274: Luminous's radosgw can't decide if it's 32- or 64-bit
* Bug #22303: mon crash: Failed to load mgr commands
* Bug #22306: Python RBD metadata_get does not work
* Bug #22343: Using restful module of Ceph Manager to retrieve cluster status returns invalid XML