# v12.2.12

* Backport #23670: luminous: auth: ceph auth add does not sanity-check caps
* Backport #26913: luminous: "balancer execute" only requires read permissions
* Backport #36434: luminous: monstore tool rebuild does not generate creating_pgs
* Backport #36692: luminous: [rbd-mirror] forced promotion after killing remote cluster results in stuck state
* Backport #37481: luminous: mds: MDCache.cc: 11673: abort()
* Backport #37557: luminous: multisite: es sync null versioned object failed because of olh info
* Backport #37559: luminous: presigned URL for PUT with metadata fails: SignatureDoesNotMatch
* Backport #37561: luminous: radosgw coredump RGWGC::process
* Backport #37648: luminous: rgw: resharding events are not logged
* Backport #37690: luminous: ceph-objectstore-tool: Add HashInfo to object dump output
* Backport #37756: luminous: osd/PrimaryLogPG: fix the extent length error of the sync read
* Backport #37760: luminous: mds: mds state change race
* Backport #37815: luminous: workunits/rados/test_health_warnings.sh fails with <9 osds down
* Backport #37823: luminous: mds: output client IP of blacklisted/evicted clients to cluster log
* Backport #37825: luminous: BlueStore: ENODATA not fully handled
* Backport #37833: luminous: FAILED assert(is_up(osd)) in OSDMap::get_inst(int)
* Backport #37889: luminous: osd/OSDMap: calc_pg_upmaps - potential access violation
* Backport #37897: luminous: msg/async: mark_down vs accept race leaves connection registered
* Backport #37901: luminous: [journal] max journal order is incorrectly set at 64
* Backport #37905: luminous: FAILED ceph_assert(can_write == WriteStatus::NOWRITE) in ProtocolV1::replace()
* Backport #37908: luminous: mds: wait shorter intervals to send beacon if laggy
* Backport #37926: luminous: Broken parameter parsing in /etc/ceph/rbdmap
* Backport #37937: luminous: librgw: export multitenancy support
* Backport #37972: luminous: FreeBSD/Linux integration - monitor map with wrong sa_family
* Backport #37987: luminous: Throttle.cc: 194: FAILED assert(c >= 0) due to invalid ceph_osd_op union
* Backport #37989: luminous: MDSMonitor: missing osdmon writeable check
* Backport #37991: luminous: Compression not working, and when applied OSD disks are failing randomly
* Backport #37993: luminous: ec pool lost data due to snap clone
* Backport #38037: luminous: upmap balancer won't refill underfull osds if zero overfull found
* Backport #38038: luminous: RGW fails to start on Fedora 28 from default configuration
* Backport #38046: luminous: qa/overrides/short_pg_log.yaml: reduce osd_{min,max}_pg_log_entries
* Backport #38073: luminous: build/ops: Allow multiple instances of "make tests" on the same machine
* Backport #38079: luminous: multisite: bucket full sync does not handle delete markers
* Backport #38081: luminous: multisite: overwrites in versioning-suspended buckets fail to sync
* Backport #38084: luminous: mds: log new client sessions with various metadata
* Backport #38095: luminous: doc/rados/configuration: refresh osdmap section
* Backport #38098: luminous: mds: optimize revoking stale caps
* Backport #38102: luminous: mds: cache drop should trim cache before flushing journal
* Backport #38104: luminous: client: session flush does not cause cap release message flush
* Backport #38105: luminous: osd/ECBackend.cc: 1547: FAILED ceph_assert(!(*m).is_missing(hoid))
* Backport #38108: luminous: Adding back the IOPS line for client and recovery IO in cluster logs
* Backport #38130: luminous: mds: provide a limit for the maximum number of caps a client may have
* Backport #38132: luminous: mds: stopping MDS with a large cache (40+GB) causes it to miss heartbeats
* Backport #38140: luminous: Add hashinfo testing for dump command of ceph-objectstore-tool
* Backport #38142: luminous: os/bluestore: fix access to a destroyed cond causing deadlock or undefined behavior
* Backport #38148: luminous: rgw: `radosgw-admin bucket rm ... --purge-objects` can hang...
* Backport #38162: luminous: maybe_remove_pg_upmaps incorrectly cancels valid pending upmaps
* Backport #38186: luminous: krbd discard no longer guarantees zeroing
* Backport #38188: luminous: deep fsck fails on inspecting very large onodes
* Backport #38190: luminous: mds: broadcast quota message to clients when quota is disabled
* Backport #38193: luminous: Object can still be deleted even if s3:DeleteObject policy is set
* Backport #38207: luminous: A PG repairing doesn't mean PG is damaged
* Backport #38232: luminous: rgw: Recent commit to master broke the s390x build
* Backport #38240: luminous: radosbench tests hit ENOSPC
* Backport #38244: luminous: scrub warning check incorrectly uses mon scrub interval
* Backport #38257: luminous: restful: py got exception when getting osd info
* Bug #38271: luminous: 12.2.11 link errors on Fedora 28 and 29 on s390x
* Backport #38274: luminous: Fix recovery and backfill priority handling
* Backport #38316: luminous: filestore: fsync(2) return value not checked
* Backport #38318: luminous: mgr deadlock: _check_auth_rotating possible clock skew, rotating keys expired way too early
* Backport #38336: luminous: mds: fix potential re-evaluation of stray dentry in _unlink_local_finish
* Backport #38338: luminous: New Bionic install fails qa/standalone/ceph-helpers.sh
* Backport #38354: luminous: rgw: GetBucketAcl on non-existing bucket doesn't throw NoSuchBucket
* Bug #38365: luminous: ceph-volume: add osd_ids argument
* Backport #38400: luminous: rados_shutdown hangs forever in ~objecter()
* Backport #38410: luminous: rgw: fix cls_bucket_head result order consistency
* Backport #38412: luminous: multisite: rgw_data_sync_status json decode failure breaks automated datalog trimming
* Backport #38414: luminous: rgw: versioning: fix versioning concurrency bug, supplement use of olh.ver
* Backport #38423: luminous: osd/TestPGLog.cc: Verify that dup_index is being trimmed
* Backport #38446: luminous: multisite: datalog trim may not trim to completion
* Backport #38449: luminous: src/osdc/Journaler.cc: 420: FAILED ceph_assert(!r)
* Backport #38460: luminous: deadlock in standby ceph-mgr daemons
* Bug #38488: luminous: mds: message invalid access
* Backport #38501: luminous: only first subuser can be exported to nfs
* Backport #38506: luminous: ENOENT on setattrs (obj was recently deleted)
* Backport #38510: luminous: ceph CLI ability to change file ownership
* Backport #38529: luminous: multisite: memory growth from RGWCoroutinesStacks on lease errors
* Backport #38541: luminous: qa: fsstress with valgrind may timeout
* Backport #38543: luminous: qa: tasks.cephfs.test_misc.TestMisc.test_fs_new hangs because clients cannot unmount
* Backport #38545: luminous: qa: "Loading libcephfs-jni: Failure!"
* Backport #38562: luminous: mgr deadlock
* Backport #38576: luminous: cherrypy binding anyaddr without ipv6
* Backport #38586: luminous: OSD crashes in get_str_map while creating with ceph-volume
* Feature #38603: mon: osdmap prune
* Backport #38608: luminous: reduce log shard config for multisite tests
* Backport #38647: luminous: Non-existent config option osd_deep_mon_scrub_interval
* Backport #38663: luminous: mimic: Unable to recover from ENOSPC in BlueFS
* Backport #38665: luminous: qa: powercycle suite reports MDS_SLOW_METADATA_IO
* Backport #38667: luminous: I can delete a public-read-write bucket which belongs to another user, is this right?
* Backport #38669: luminous: "log [WRN] : Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)"
* Backport #38671: luminous: rgw: sync module: avoid verbose attr logging for objects
* Bug #38683: OSDMapRef access by multiple threads is unsafe
* Backport #38690: luminous: rgw: es: add support for ES endpoints with password
* Backport #38692: luminous: rgw: elastic plugin doesn't seem to work with ES 6
* Backport #38694: luminous: Radosgw elastic search sync module not working properly (all results the same)
* Backport #38695: luminous: rgw: es: some meta attrs might be trimmed
* Backport #38696: luminous: rgw: es: cannot query by content_type
* Feature #38708: Rados import is potentially dangerous and should need confirmation
* Backport #38727: luminous: radosgw-admin bucket limit check stuck generating high read ops with > 999 buckets per user
* Backport #38735: luminous: qa: tolerate longer heartbeat timeouts when using valgrind
* Backport #38755: luminous: rgw: ldap: fix early return in LDAPAuthEngine::init w/uri not empty()
* Backport #38771: luminous: rgw: nfs: process asserts on empty path name segment (e.g., s3://myfiles//data/file.pdf)
* Backport #38778: luminous: ceph_test_objecstore: bluefs mount fail with overlapping op_alloc_add
* Backport #38796: luminous: How to configure user- or bucket-specific data placement
* Backport #38854: luminous: .mgrstat failed to decode mgrstat state; luminous dev version?
* Backport #38857: luminous: should set EPOLLET flag on del_event()
* Backport #38859: luminous: upmap breaks the crush rule
* Backport #38911: luminous: Bitmap allocator might fail to return contiguous chunk despite having enough space
* Backport #38965: luminous: src/osd/OSDMap.cc: 4405: FAILED assert(osd_weight.count(i.first))
* Bug #39055: OSDs crash when a specific PG is trying to backfill
* Backport #39070: luminous: silent corruption using SSE-C on multi-part upload to S3 with non-default part size
* Backport #39073: luminous: multisite: data sync loops back to the start of the datalog after reaching the end
* Bug #39136: ceph balancer upmap unable to optimize mixed-sized osds
* Bug #39392: [mgr][balancer] Error EAGAIN: Too many objects (0.000000 > 0.000000) are misplaced; try again later
* Bug #39938: Issues with CephFS kernel driver
* Bug #39945: RBD I/O error leads to ghost-mapped RBD
* Bug #39970: radosgw-admin reshard process reports invalid argument
* Support #40205: [librgw] Change the administrative metadata settings of an uploaded object
* Bug #40425: [librgw] Can't get the correct Unix file attribute of the root node
* Documentation #40458: Object Gateway multisite document read-only argument error
* Bug #40582: cephfs-journal-tool: Error 22 ((22) Invalid argument)
* Bug #40622: PG stuck in active+clean+remapped
* Bug #40794: [RGW] Active bucket marker in stale instances list
* Bug #43805: bucket lifecycle breaks down when the master zone is changed or the period gets updated
* Bug #44072: Adding new BlueStore OSDs to a FileStore cluster leads to scrub errors (union_shard_errors=missing)
* Bug #44525: LibCephFS::RecalledGetattr test failed
* Bug #44528: remove spurious whitespace from test_snapshot.py
* Bug #44707: Maximum limit of lifecycle rule length
* Bug #44902: ceph-fuse reads cached file data when recovering from the blacklist
* Bug #45030: Empty NextVersionIdMarker when it must be "null"
* Bug #45456: libceph read and write operations block waiting for IO to complete, causing a deadlock
* Bug #45562: soft lockup stuck for 22s! in ceph.ko; code stack is 'destroy_inode->ceph_destroy_inode->__ceph_remove_cap->_raw_spin_lock'
* Bug #45563: __list_add_valid kernel NULL pointer in __ceph_remove_cap
* Bug #45809: When an OSD is marked out, the `MAX AVAIL` doesn't change
* Bug #45903: BlueFS replay log grows without end