# v18.2.1

* Backport #59693: reef: metadata in bucket notification includes attributes other than x-amz-meta-*
* Backport #59731: reef: S3 CompleteMultipartUploadResult has empty ETag element
* Backport #61352: reef: Object Ownership Inconsistent
* Backport #61439: reef: rgw: GET Bucket fails on renamed bucket on archive zone
* Backport #61722: reef: BlueStore::_collection_list latency perf counter error
* Backport #61756: reef: avoid ballooning client_mount_timeout by 10x
* Backport #61871: reef: rgw: add support for http_date if http_x_amz_date is missing for sigv4
* Backport #61873: reef: scan for orphaned rados objects and index entries in rgw suite
* Backport #61893: reef: [rbd-mirror] snapshot replayer is shut down with "local image linked to unknown peer" error on force promote
* Backport #62029: reef: ceph config set using osd/host mask not working
* Backport #62093: reef: active mgr crashes with segfault when running 'ceph osd purge'
* Backport #62101: reef: Quincy and Pacific fail to compile after Boost upgrade of main
* Backport #62113: reef: rbd-mirror: non-primary images not deleted when the primary images are deleted
* Backport #62137: reef: rgw_object_lock.cc: the maximum time of bucket object lock is 24855 days
* Backport #62306: reef: rgw/syncpolicy: sync status doesn't reflect the sync policy set
* Backport #62315: reef: stale info for s3 Unsupported Header Fields
* Documentation #62354: docs: lack of Reef in Platforms ABC tests
* Backport #62405: reef: pybind/mgr/volumes: pending_subvolume_deletions count is always zero in fs volume info output
* Backport #62419: reef: mds: adjust cap acquisition throttle defaults
* Backport #62506: reef: HTTP OPTIONS CORS request on a presigned URL does not work on multi-tenant Keystone buckets
* Bug #62545: cephfs-shell: getxattr fails when the xattr's length is > 256
* Backport #62553: reef: libcephsqlite: short reads fill 0s at the beginning of the buffer
* Backport #62570: reef: ceph_fs.h: add separate owner_{u,g}id fields
* Backport #62609: reef: mgr: DaemonServer::ms_handle_authentication acquires daemon locks
* Backport #62687: reef: hang due to exclusive lock acquisition (STATE_WAITING_FOR_LOCK) racing with blocklisting
* Backport #62692: reef: [rbd-mirror] demote snapshot does not get removed
* Backport #62733: reef: mds: add TrackedOp event for batching getattr/lookup
* Bug #62746: rgw: java_s3tests fails on ObjectTest.testObjectCreateBadMd5InvalidShort
* Bug #62747: rgw: crash during test_encryption_sse_c_method_head
* Backport #62772: reef: [test] enable default image features (61) in the upgrade suite
* Backport #62792: reef: determining SSL port for RGW dashboard by splitting frontend config
* Backport #62825: reef: RadosGW API: incorrect bucket quota in response to HEAD /{bucket}/?usage
* Bug #62833: [Reads Balancer] osdmaptool with --read option creates suggestions for primary OSD change even when it's already primary for that PG
* Backport #62852: reef: qa: "cluster [ERR] MDS abort because newly corrupt dentry to be committed: [dentry #0x1/a [fffffffffffffff6,head] auth (dversion lock) v=13..."
* Backport #62901: reef: mds: log a message when exiting due to asok "exit" command
* Backport #62924: reef: policy array empty on rgw swift /info in reef
* Backport #62942: reef: [test] bogus POOL_APP_NOT_ENABLED health alert throughout the suite
* Backport #62944: reef: new radosgw-admin commands to clean up leftover OLH index entries and unlinked instance objects
* Backport #62985: reef: pg_autoscaler warns that a pool has too many pgs when it has exactly the right amount
* Backport #63042: reef: CVE-2023-43040 - improperly verified POST keys
* Backport #63045: reef: s3test test_list_buckets_bad_auth fails with Keystone EC2
* Backport #63051: reef: RGW s3website API prefetches data for range requests
* Backport #63054: reef: SignatureDoesNotMatch when extra headers start with 'x-amzn'
* Backport #63057: reef: high virtual memory consumption when dealing with Chunked Upload
* Backport #63060: reef: crash: RGWSI_Notify::unwatch(RGWSI_RADOS::Obj&, unsigned long)
* Backport #63061: reef: [test] reproducer for a deadlock which can occur when a watch error is hit while krbd is recovering from a previous watch error
* Backport #63082: reef: mon: no mdsmap broadcast after "fs set joinable" is set to true
* Backport #63125: reef: osd: is it necessary to unconditionally increase osd_bandwidth_cost_per_io in mClockScheduler::calc_scaled_cost?
* Backport #63143: reef: qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* Bug #63150: Reef: mgr/cephadm: container image name contains tag but should not
* Backport #63156: reef: another hang due to exclusive lock acquisition (STATE_WAITING_FOR_LOCK) racing with blocklisting
* Backport #63165: reef: pybind/mgr/volumes: log missing mutexes to help debugging
* Backport #63168: reef: "AssertionError: assert 'client' in role" in upgrade stress-split tests
* Backport #63189: reef: [test] drop cache tiering from the test matrix
* Backport #63228: reef: ceph-mgr segfaults when testing rbd_support module recovery on repeated blocklisting of its client
* Backport #63252: reef: add bucket versioning info to radosgw-admin bucket stats output
* Backport #63350: reef: "rbd feature disable" remote request hangs when proxied to rbd-nbd
* Backport #63371: reef: use-after-move in OSDService::build_incremental_map_msg()
* Backport #63384: reef: mgr/rbd_support: recovery from client blocklisting halts after MirrorSnapshotScheduleHandler tries to terminate its run thread
* Backport #63387: reef: [test][rbd] test recovery of rbd_support module from repeated blocklisting of its client
* Backport #63413: reef: mon/MDSMonitor: metadata not loaded from PAXOS on update
* Backport #63418: reef: mds: client request may complete without queueing next replay request
* Backport #63452: reef: multisite: objects replicated with compress-encrypted store wrong (compressed) size in the bucket index
* Backport #63470: reef: mgr/dashboard: fix rgw multi-site import form helper
* Backport #63476: reef: MClientRequest: properly handle ceph_mds_request_head_legacy for ext_num_retry, ext_num_fwd, owner_uid, owner_gid
* Backport #63568: reef: mgr/dashboard: graphs in the Grafana dashboard are not consistent line graphs after upgrading from RHCS 4 to 5
* Backport #63661: reef: typo in reshard example
* Backport #63757: reef: Allocator configured with 64K alloc unit might get 4K requests
* Backport #63760: reef: hybrid/avl allocators might be very ineffective when serving bluefs allocations