# v12.2.0

* Bug #19511: bluestore overwhelms aio queue
* Bug #20706: ceph-disk can't activate-block Error: /dev/sdb2 is not a block device
* Bug #20889: qa: MDS_DAMAGED not whitelisted properly
* Bug #20890: qa: ceph_test_case wait_for_health is parsing health json wrong
* Bug #20891: mon: mysterious "application not enabled on pool(s)"
* Bug #20892: qa: FS_DEGRADED spurious health warnings in some sub-suites
* Bug #20923: ceph-12.1.1/src/os/bluestore/BlueStore.cc: 2630: FAILED assert(last >= start)
* Bug #20933: All mon nodes down when using ceph-disk prepare on a new OSD
* Bug #20934: rgw: bucket index sporadically reshards to 65521 shards
* Bug #20943: rbd: list of watchers does not correspond to list of clients
* Bug #20945: get_quota_root sends lookupname op for every buffered write
* Backport #20961: luminous: OSD and mon scrub cluster log messages are too verbose
* Backport #20962: luminous: rgw not responding after receiving SIGHUP signal
* Backport #20963: luminous: pg dump fails during point-to-point upgrade
* Backport #20965: luminous: src/common/LogClient.cc: 310: FAILED assert(num_unsent <= log_queue.size())
* Backport #20967: luminous: RGW lifecycle not expiring objects due to permissions on lc pool
* Backport #20979: luminous: Race seen between pool creation and wait_for_clean(): seen in test-erasure-eio.sh
* Bug #20990: mds,mgr: add 'is_valid=false' when failed to parse caps
* Bug #20998: RHEL74 GA kernel panicked on client node running smallfile tests with 3 active MDS
* Bug #20999: rados python library does not document omap API
* Bug #21002: setting options "mon_pool_quota_warn_threshold && mon_pool_quota_crit_threshold" fails
* Bug #21004: fs: client/mds has wrong check to clear S_ISGID on chown
* Bug #21008: clone flatten is pending at 4% when it uses an EC pool
* Documentation #21018: Manual deployment does not work in luminous
* Bug #21023: BlueStore-OSDs marked as destroyed in OSD-map after v12.1.1 to v12.1.4 upgrade
* Backport #21043: S3 v4 auth fails when query string contains " "
* Backport #21047: luminous: mds,mgr: add 'is_valid=false' when failed to parse caps
* Backport #21048: luminous: Include front/back interface names in OSD metadata
* Backport #21049: luminous: mgr ignores public_network setting
* Backport #21051: luminous: Improve size scrub error handling and ignore system attrs in xattr checking
* Backport #21054: luminous: rgw: GetObject Tagging needs to exit earlier if the object has no attributes
* Backport #21056: RGW: Get Bucket ACL does not honor the s3:GetBucketACL action
* Bug #21064: FSCommands: missing wait for osdmap writeable + propose
* Bug #21065: client: UserPerm delete with supp. groups allocated by malloc generates valgrind error
* Cleanup #21069: client: missing space in some client debug log messages
* Bug #21070: MDS: MDS is laggy or crashed when deleting a large number of files
* Bug #21071: qa: test_misc creates metadata pool with dummy object resulting in WRN: POOL_APP_NOT_ENABLED
* Backport #21076: luminous: osd/osd_types.cc: 3574: FAILED assert(lastmap->get_pools().count(pgid.pool()))
* Backport #21077: luminous: osd: osd_scrub_during_recovery only considers primary, not replicas
* Backport #21079: bug in function reweight_by_utilization
* Bug #21082: client: the client_lock is not taken for Client::getcwd
* Bug #21087: BlueFS becomes totally inaccessible when it fails to allocate
* Backport #21090: osd/osd_types.cc: 3574: FAILED assert(lastmap->get_pools().count(pgid.pool()))
* Bug #21092: OSD sporadically starts reading at 100% of ssd bandwidth
* Backport #21098: luminous: client: the client_lock is not taken for Client::getcwd
* Backport #21099: luminous: client: df hangs in ceph-fuse
* Backport #21100: luminous: client: UserPerm delete with supp. groups allocated by malloc generates valgrind error
* Backport #21101: luminous: FSCommands: missing wait for osdmap writeable + propose
* Backport #21102: luminous: mon: "0 stuck requests are blocked > 4096 sec" warning instead of error
* Backport #21105: luminous: vstart's -X option no longer works
* Backport #21106: luminous: CRUSH crash on bad memory handling
* Backport #21132: luminous: qa/standalone/scrub/osd-scrub-repair.sh timeout
* Bug #21173: OSD crash trying to decode erasure coded data from corrupted shards
* Bug #21174: OSD crash: 903: FAILED assert(objiter->second->version > last_divergent_update)
* Bug #21191: ceph: tell mds.* results in warning
* Bug #21192: ceph.in: ceph tell gives EINVAL errors
* Bug #21193: ceph.in: `ceph tell mds.* injectargs` does not update standbys
* Support #21208: Negative Runway in BlueFS?
* Bug #21211: 12.2.0, cephfs (meta replica 2, data ec 2+1), ceph-osd coredump
* Bug #21221: MDCache::try_subtree_merge() may print N^2 lines of debug message
* Bug #21225: ceph-mgr: dashboard and zabbix plugin report wrong values
* Bug #21245: target_max_bytes doesn't limit a tiering pool
* Bug #21253: Prometheus crash on update
* Bug #21260: ceph mgr versions shows active mgr as "Unknown"
* Bug #21262: cephfs ec data pool, many osds marked down
* Bug #21286: rgw: Can't get default.rgw.meta pool info
* Bug #21295: OSD Seg Fault on Bluestore OSD
* Bug #21310: src/osd/PG.h: 467: FAILED assert(i->second.need == j->second.need) (bluestore+ec+rbd)
* Bug #21314: Ceph OSDs crashing in BlueStore::queue_transactions() using EC
* Bug #21318: segv in rocksdb::BlockBasedTable::NewIndexIterator
* Bug #21367: 'ZabbixSender' object has no attribute 'hostname'
* Bug #21368: rados gateway failed to sync with OpenStack Keystone
* Bug #21397: permission denied: rados gateway multi-site meta search integration with Elasticsearch
* Support #21418: failed in cmake
* Bug #21430: ceph-fuse blocked OSD op threads => OSD restart loop
* Bug #21461: SELinux file_context update causes OSDs to restart when upgrading to Luminous from Kraken or Jewel
* Bug #21469: mgr prometheus plugin name sanitisation is buggy
* Bug #21966: class rbd.Image discard: OSError: [errno 2147483648] error discarding region
* Bug #22916: OSD crashing in peering
* Bug #22984: RGWs crash when I try to set a policy