# v12.2.2

* Feature #3586: CRUSH: separate library
* Backport #21299: luminous: [rbd-mirror] asok hook names not updated when image is renamed
* Backport #21307: luminous: Client client.admin marked osd.2 out, after it was down for 1504627577 seconds
* Backport #21320: luminous: Quieten scary RuntimeError from restful module on startup
* Backport #21324: luminous: ceph: tell mds.* results in warning
* Bug #21337: luminous: MDS is not getting past up:replay on Luminous cluster
* Backport #21343: luminous: DNS SRV default service name not used anymore
* Backport #21438: luminous: Daemons (OSD, Mon...) exit abnormally at injectargs command
* Backport #21439: luminous: Performance: Slow OSD startup, heavy LevelDB activity
* Backport #21441: luminous: [cli] mirror "getter" commands will fail if mirroring has never been enabled
* Backport #21443: luminous: Prometheus crash when update
* Backport #21444: luminous: rgw: setxattrs call leads to different mtimes for bucket index and object
* Backport #21445: luminous: rgw: reversed account listing of Swift API should be supported
* Backport #21446: luminous: rgw: multisite: Get bucket location for a bucket located in another zonegroup will return "301 Moved Permanently"
* Backport #21448: luminous: rgw: string_view instance points to expired memory in PrefixableSignatureHelper
* Backport #21451: luminous: rgw: lc process only schedules the first item of lc objects
* Backport #21452: luminous: prometheus module generates invalid output when counter names contain non-alphanum characters
* Backport #21453: luminous: rgw: end_marker parameter doesn't work on Swift container's listing
* Backport #21456: luminous: rgw: wrong error message is returned when putting container with a name that is too long
* Backport #21457: luminous: rgw: /info lacks swift.max_meta_value_length
* Backport #21458: luminous: rgw: /info lacks swift.max_meta_count
* Backport #21459: luminous: rgw: wrong error code is returned when putting container metadata with a name that is too long
* Backport #21460: luminous: rgw: missing support for per storage-policy usage statistics
* Backport #21465: luminous: OSD metadata 'backend_filestore_dev_node' is "unknown" even for simple deployment
* Backport #21504: ceph-disk omits "--runtime" when enabling ceph-osd@$ID.service units for device-backed OSDs
* Backport #21514: luminous: ceph_volume_client: snapshot dir name hardcoded
* Backport #21523: luminous: selinux denies getattr on lnk sysfs files
* Backport #21524: luminous: DaemonState members accessed outside of locks
* Backport #21543: luminous: bluestore fsck took 224.778802 seconds to complete, which caused "timed out waiting for admin_socket to appear after osd.1 restart"
* Backport #21544: luminous: mon osd feature checks for osdmap flags and require-osd-release fail if 0 up osds
* Backport #21545: luminous: rgw file write error
* Backport #21547: luminous: ceph-mgr gets process called "exe" after respawn
* Backport #21548: luminous: ceph_manager: bad AssertionError: failed to recover before timeout expired
* Backport #21549: luminous: the dashboard uses absolute links for filesystems and clients
* Bug #21590: fix a bug about inconsistent unit of comparison
* Backport #21600: luminous: mds: client caps can go below hard-coded default (100)
* Backport #21602: luminous: ceph_volume_client: add get, put, and delete object interfaces
* Backport #21627: luminous: ceph_volume_client: sets invalid caps for existing IDs with no caps
* Backport #21633: luminous: s3:GetBucketWebsite/PutBucketWebsite fails with 403
* Backport #21634: luminous: s3:GetBucketLocation bucket policy fails with 403
* Backport #21635: luminous: s3:GetBucketCORS/s3:PutBucketCORS policy fails with 403
* Backport #21637: luminous: encryption: PutObj response does not include sse-kms headers
* Backport #21638: luminous: dashboard OSD list has servers and osds in arbitrary order
* Backport #21639: luminous: rbd does not delete snaps in (ec) data pool
* Backport #21640: luminous: [rbd-mirror] resync isn't properly deleting non-primary image
* Backport #21643: luminous: upmap does not respect osd reweights
* Backport #21645: luminous: Incomplete/missing get_store_prefixes implementations in OSDMonitor/MDSMonitor
* Backport #21647: luminous: "mgr_command_descs" not synced in the newly joined Mon
* Backport #21648: luminous: mgr[zabbix] float division by zero
* Backport #21649: luminous: multisite: sync of bucket entrypoints fails with ENOENT
* Backport #21650: luminous: buffer_anon leak during deep scrub (on otherwise idle osd)
* Backport #21651: luminous: rgw: avoid logging keystone revocation failures when no keystone is configured
* Backport #21652: luminous: policy checks missing from Get/SetRequestPayment operations
* Backport #21655: luminous: expose --sync-stats via admin api
* Backport #21656: luminous: crash on DaemonPerfCounters::update
* Backport #21658: luminous: purge queue and standby replay mds
* Backport #21659: luminous: Crash in get_metadata_python during MDS restart
* Backport #21684: luminous: keystone: Thread::join assert when joining uninitialized revoke thread from TokenCache dtor
* Backport #21692: luminous: Kraken client crash after upgrading cluster from Kraken to Luminous
* Backport #21693: luminous: interval_map.h: 161: FAILED assert(len > 0)
* Backport #21696: luminous: fix a bug about inconsistent unit of comparison
* Backport #21698: luminous: radosgw-admin usage show loops indefinitely
* Backport #21699: luminous: mgr status module uses base 10 units
* Backport #21701: luminous: ceph-kvstore-tool does not call bluestore's umount on exit
* Backport #21702: luminous: BlueStore::umount will crash when the BlueStore is opened by start_kv_only()
* Backport #21719: luminous: doc fails to build with latest breathe
* Backport #21729: luminous: ceph-disk: retry on OSError
* Backport #21732: luminous: "cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS)" in rest-luminous-distro
* Documentation #21733: OSD-Config-ref (osd max object size) section malformed
* Bug #21752: ceph.in: ceph fs status returns error
* Backport #21755: RADOS: Get "valid pg state" error at ceph pg ls commands
* Backport #21783: luminous: cli/crushtools/build.t sometimes fails in jenkins' "make check" run
* Backport #21787: luminous: "Error EINVAL: invalid command" in upgrade:jewel-x-luminous-distro-basic-smithi
* Backport #21789: luminous: user creation can overwrite existing user even if different uid is given
* Backport #21790: luminous: RGW: Multipart upload may double the quota
* Backport #21792: luminous: encryption: reject requests that don't provide all expected headers
* Backport #21795: luminous: Ubuntu amd64 client cannot discover the ubuntu arm64 ceph cluster
* Backport #21804: luminous: limit internal memory usage of object cacher
* Backport #21805: luminous: client_metadata can be missing
* Backport #21806: luminous: FAILED assert(in->is_dir()) in MDBalancer::handle_export_pins()
* Backport #21810: luminous: mds: trims all unpinned dentries when memory limit is reached
* Backport #21816: luminous: multisite: multipart uploads fail to sync
* Backport #21817: zone compression type is not validated
* Backport #21854: luminous: rgw_file: explicit NFSv3 open() emulation
* Backport #21855: luminous: [object map] removing a large image (~100TB) with an object map may result in loss of OSD
* Backport #21856: luminous: rgw: disable dynamic resharding in multisite environment
* Backport #21857: luminous: rgw: We can't get torrents if objects are encrypted using SSE-C
* Backport #21899: luminous: [upgrade] buffer::list ABI broken in luminous release
* Backport #21914: luminous: [rbd-mirror] peer cluster connections should filter out command line optionals
* Backport #21917: luminous: some kernels don't understand crush compat weight-set
* Backport #21918: luminous: Disable messenger logging (debug ms = 0/0) for clients unless overridden
* Backport #21919: luminous: rgw_file: full enumeration needs new readdir whence strategy, plus recursive rm side-effect of FLAG_EXACT_MATCH found
* Backport #21937: luminous: source data in 'default.rgw.buckets.data' may not be deleted after inter-bucket copy
* Backport #21938: luminous: multisite: data sync status advances despite failure in RGWListBucketIndexesCR
* Backport #21939: luminous: list bucket with versioning enabled gets wrong result when using marker
* Backport #21955: luminous: qa: add EC data pool to testing
* Backport #21972: luminous: rgw_file: objects have wrong accounted_size
* Backport #21974: luminous: ceph-disk dmcrypt does not unlock block device for bluestore
* Bug #21983: rgw: modify s3 type subuser access permission fails
* Backport #22019: luminous: "ceph osd create" is not idempotent
* Backport #22020: luminous: multisite: race between sync of bucket and bucket instance metadata
* Backport #22032: luminous: dashboard barfs on nulls where it expects numbers
* Backport #22034: luminous: key mismatch for mgr after upgrade from jewel to luminous (dev)
* Backport #22035: luminous: Spurious ceph-mgr failovers during mon elections
* Backport #22079: luminous: "ceph osd df" crashes ceph-mon if mgr is offline
* Support #22132: OSDs stuck in "booting" state after catastrophic data loss
* Backport #22150: luminous: bluestore: segv during unmount
* Bug #22154: ceph-disk: add deprecation warnings
* Backport #22166: luminous: bluestore fsck took 224.778802 seconds to complete, which caused "timed out waiting for admin_socket to appear after osd.1 restart"
* Bug #22202: rgw_statfs should report the correct stats
* Backport #22212: luminous: ceph-disk: add deprecation warnings
* Backport #22235: luminous: ceph-disk flake8 test fails on very old, and very new, versions of flake8
* Backport #22251: luminous: macros expanding in spec file comment
* Backport #22262: luminous: ceph-disk-test.py:test_activate_multipath fails because osd.2 is nearfull
* Backport #22270: ceph df reports wrong pool usage pct
* Bug #22272: S3 API: incorrect error code on GET website bucket
* Backport #22275: luminous: mgr/PyModuleRegistry.cc: 139: FAILED assert(map.epoch > 0)
* Bug #22280: ceph-volume - ceph.conf parsing error due to whitespace
* Bug #22297: ceph-volume should handle inline comments in the ceph.conf file
* Bug #22298: ceph-volume parsing errors on spaces / tabs
* Bug #22311: Luminous 12.2.2: some of the rpm packages are not signed
* Bug #22312: ERROR: keystone revocation processing returned error r=-22 on keystone v3 openstack ocata
* Bug #22321: ceph 12.2.x Luminous: Build fails with --without-radosgw
* Bug #22327: MGR dashboard doesn't update OSD's ceph version after updating from 12.2.1 to 12.2.2
* Bug #22331: Luminous 12.2.2 multisite sync failed when objects are bigger than 4MB
* Bug #22362: cluster resource agent ocf:ceph:rbd - wrong permissions
* Backport #22391: luminous: thrashosds defaults to min_in 3, some ec tests are (2,2)
* Support #22422: Block fsid does not match our fsid
* Bug #22435: fail to create bluestore osd with ceph-volume command on ubuntu 14.04 with ceph 12.2.2
* Bug #22441: mgr prometheus module failed on 12.2.2
* Bug #22472: restful module: Unresponsive restful API after marking MGR as failed
* Bug #22474: prometheus plugin breaks if an osd is out
* Feature #22495: mgr: dashboard: show per-pool io
* Support #22520: nearfull threshold is not cleared when osd really is not nearfull
* Support #22531: OSD flapping under repair/scrub after receiving inconsistent PG: LFNIndex.cc: 439: FAILED assert(long_name == short_name)
* Documentation #22532: mgr: balancer: missing documentation
* Bug #22543: OSDs cannot start after shutdown, killed by OOM killer during PG load
* Support #22566: Some osds remain at 100% CPU after upgrade jewel => luminous (v12.2.2) and some work
* Backport #22571: luminous: Stale bucket index entry remains after object deletion
* Bug #22616: bluestore_cache_data uses too much memory
* Bug #22632: radosgw - s3 keystone integration doesn't work while using civetweb as frontend
* Bug #22669: KeyError: 'pg_deep' from prometheus plugin
* Bug #22678: block checksum mismatch from rocksdb
* Bug #22720: Added an osd by ceph-volume; it got an error in systemctl enable ceph-volume@.service
* Bug #22735: about mon_max_pg_per_osd
* Bug #22746: osd/common: ceph-osd process is terminated by the logrotate task
* Support #22749: dmClock OP classification
* Bug #22757: 色方式 (color method)
* Backport #22780: luminous: backfill cancelation makes target crash; now triggered by recovery preemption
* Bug #22788: ceph-fuse performance issues with rsync
* Bug #22804: multisite synchronization failed when reading, writing and deleting at the same time
* Support #22822: RGW multi-site issue - data sync: ERROR: failed to fetch datalog info
* Bug #22826: "x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD" is not supported by V4 auth through LDAPEngine
* Bug #22838: slave zone, `bucket stats`: the result of 'num_objects' is incorrect
* Bug #22895: radosgw does not work after all rgw RGWAsyncRadosProcessor had timed out
* Bug #22908: [Multisite] Synchronization works only one way (zone2->zone1)
* Support #22917: mon keeps on crashing (12.2.2)
* Bug #22943: c++: internal compiler error: Killed (program cc1plus)
* Bug #22952: Monitor stopped responding after a while
* Support #23005: Implement rados for Python library with some problems
* Backport #23009: luminous: Filestore rocksdb compaction readahead option not set by default
* Bug #23085: Assertion failure in Bluestore _kv_sync_thread()
* Bug #23091: rgw + OpenLDAP = Failed the auth strategy, reason=-13
* Bug #23149: Aws::S3::Errors::SignatureDoesNotMatch (rgw::auth::s3::LocalEngine denied with reason=-2027)
* Bug #23188: Cannot list a file in a bucket or delete it: illegal character code U+0001
* Bug #23206: ceph-osd daemon crashes - *** Caught signal (Aborted) **
* Bug #23209: Strange RGW behaviour after running Cosbench tests (heavy read/write on cluster, then deleting objects and disposing of buckets)
* Bug #23235: The randomness of the hash function causes objects to map inhomogeneously to PGs, with the result that OSD utilization ratios are uneven
* Bug #23283: os/bluestore: cache causes a segmentation fault
* Bug #23296: Cache-tier forward mode hangs in luminous
* Bug #23365: Ceph device class not honored for erasure coding
* Support #23396: erasure-code pool cannot be configured as gnocchi data storage
* Support #23401: rbd mirror leads to a potential risk that the primary image can be removed from a remote cluster
* Support #23456: where is the log of src/journal/JournalPlayer.cc
* Support #23457: rbd mirror: entries_behind_master will not be zero after mirroring is over
* Support #23461: which rpm package is src/journal/ in?
* Support #23719: Three nodes shut down, only two of the nodes booted, many pgs down (host failure-domain, ec 2+1)
* Bug #23939: etag/hash and content_type fields were concatenated with \u0000