# v16.2.15

* Backport #50697: pacific: common: the dump of thread IDs is in dec instead of hex
* Backport #50831: pacific: pacific ceph-mon: mon initial failed on aarch64
* Backport #51195: pacific: [rfe] increase osd_max_write_op_reply_len default value to 64 bytes
* Backport #51653: pacific: vstart_runner: use FileNotFoundError instead of OSError
* Backport #51654: pacific: vstart_runner: log level gets set to INFO when --debug and --clear-old-log are passed
* Backport #51790: pacific: mgr/nfs: move nfs doc from cephfs to mgr
* Backport #52286: pacific: aws-s3 incompatibility related metadata
* Backport #52307: pacific: doc: clarify use of `rados rm` command
* Backport #52557: pacific: pybind: rados.RadosStateError raised when closed watch object goes out of scope after cluster shutdown
* Backport #52728: pacific: Federated user can modify policies in other tenants
* Backport #52778: pacific: make fetching of certs while validating tokens more generic
* Backport #52784: pacific: Session policy evaluation incorrect for CreateBucket
* Backport #52785: pacific: rgw/sts: chunked upload fails using STS temp credentials generated by GetSessionToken for a user authenticated by LDAP/Keystone
* Backport #52839: pacific: rados: build minimally when "WITH_MGR" is off
* Backport #52841: pacific: shard-threads cannot wake up
* Backport #53152: pacific: 'radosgw-admin bi purge' unable to delete index if bucket entrypoint doesn't exist
* Backport #53165: pacific: qa/vstart_runner: tests crash due to incompatibility
* Backport #53648: pacific: assumed-role: s3api head-object returns 403 Forbidden for a non-existent object even if the role has ListBucket; patch in https://tracker.ceph.com/issues/49780 inconsistent with AWS
* Backport #53658: pacific: rgw: wrong UploadPartCopy error code when src object and src bucket do not exist
* Backport #53866: pacific: notification tests failing: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
* Backport #54097: pacific: mgr_util: buggy to_pretty_timedelta
* Backport #54281: pacific: mgr/stats: ZeroDivisionError
* Backport #54497: pacific: bucket index completions may not retry after reshard
* Backport #55063: pacific: Attempting to modify bucket sync pipe results in segfault
* Backport #55149: pacific: rgw: Update "CEPH_RGW_DIR_SUGGEST_LOG_OP" for remove entries
* Backport #55500: pacific: Segfault when Open Policy Agent authorization is enabled
* Backport #55541: pacific: should use TCMalloc for better performance
* Backport #55613: pacific: GetBucketTagging returns wrong NoSuchTagSetError instead of NoSuchTagSet
* Backport #55701: pacific: `radosgw-admin user modify --placement-id` crashes without `--storage-class`
* Backport #55702: pacific: Metadata synchronization failed; "metadata is behind on 1 shards" appears
* Backport #55830: pacific: RGW: with ops log enabled, once the max backlog is reached no data can be read from rgw_ops_log_socket_path
* Backport #55960: pacific: Exception when running 'rook' task
* Backport #56635: pacific: log_max_recent setting broken as of Nautilus
* Backport #56649: pacific: [Progress] Do not show NEW PG_NUM value for pool if autoscaler is set to off
* Backport #56678: pacific: cls_rgw: nonexistent objects should not be accounted for in check_index
* Backport #56734: pacific: unnecessarily long laggy PG state
* Backport #57110: pacific: mds: handle deferred client request core when the MDS reboots
* Backport #57157: pacific: doc: update snap-schedule notes regarding 'start' time
* Backport #57199: pacific: rgw: 'bucket check' deletes index of multipart meta when its pending_map is nonempty
* Backport #57208: pacific: lazy_omap_stats_test: "ceph osd deep-scrub all" hangs
* Backport #57238: pacific: crash: RGWCoroutinesStack::wakeup()
* Backport #57260: pacific: mgr(snap-schedule): possible TypeError in rm_schedule
* Backport #57315: pacific: add an asok command for pg log investigations
* Backport #57474: pacific: mgr: FAILED ceph_assert(daemon != nullptr)
* Backport #57476: pacific: mgr/nfs: fix output message of `nfs cluster create/rm` command
* Backport #57624: pacific: mgr/dashboard: expose num repaired objects metric per pool
* Backport #57635: pacific: RGW crash due to PerfCounters::inc assert_condition during multisite syncing
* Backport #57776: pacific: Clarify security implications of path-restricted cephx capabilities
* Backport #57794: pacific: intrusive_lru leaking memory when
* Backport #57839: pacific: mgr/dashboard: prometheus: change name of pg_repaired_objects
* Backport #57887: pacific: mgr/prometheus: avoid duplicates and deleted entries for rbd_stats_pool
* Backport #58036: pacific: pubsub test failures
* Backport #58211: pacific: Improve performance of multi-object delete by handling individual object deletes concurrently
* Backport #58234: pacific: s3:ListBuckets response limited to 1000 buckets (by default) since Octopus
* Backport #58238: pacific: beast frontend crashes on exception from socket.local_endpoint()
* Backport #58260: pacific: rados: fix extra tabs on warning for pool copy
* Backport #58333: pacific: mon/monclient: update "unable to obtain rotating service keys when osd init" to suggest clock sync
* Backport #58337: pacific: mon-stretched_cluster: degraded stretched mode leads to Monitor crash
* Backport #58478: pacific: RGW service crashes regularly with floating point exception
* Backport #58495: pacific: rgw: remove guard_reshard in bucket_index_read_olh_log
* Backport #58508: pacific: rgw-orphan-list tool can list all rados objects as orphans
* Backport #58584: pacific: Keys returned by Admin API during user creation on secondary zone not valid
* Backport #58787: pacific: rgw: lc: lc for a single large bucket can run too long
* Backport #58805: pacific: ceph-mgr fails after upgrade to pacific
* Backport #58817: pacific: rgw: some operations may not have a valid bucket object
* Backport #58829: pacific: mgr/dashboard: update bcrypt dep in requirements.txt
* Backport #58902: pacific: PostObj may incorrectly return 400 EntityTooSmall
* Backport #58992: pacific: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
* Backport #59001: pacific: cephfs_mirror: local and remote dir root modes are not the same
* Backport #59026: pacific: relying on boost flatmap emplace behavior is risky
* Backport #59035: pacific: MDS allows a (kernel) client to exceed the xattrs key/value limits
* Backport #59066: pacific: RadosGW Multipart Cleanup Failure
* Backport #59131: pacific: DeleteObjects response does not include DeleteMarker/DeleteMarkerVersionId
* Backport #59178: pacific: BLK/Kernel: Improve protection against running one OSD twice
* Backport #59361: pacific: metadata cache: if a watcher is disconnected and reinit() fails, it won't be retried again
* Backport #59417: pacific: pybind/mgr/volumes: investigate moving calls which may block on libcephfs into another thread
* Backport #59492: pacific: test_librgw_file.sh crashes: src/tcmalloc.cc:332] Attempt to free invalid pointer 0x55e8173eebd0
* Backport #59568: pacific: Multipart re-uploads cause orphan data
* Backport #59579: pacific: On version 17.2.5-8.el9cp, "Segmentation fault" while uploading object (regular/multipart) on a FIPS-enabled cluster
* Backport #59610: pacific: sts: every AssumeRole writes to the RGWUserInfo
* Backport #59692: pacific: metadata in bucket notification includes attributes other than x-amz-meta-*
* Backport #59700: pacific: mon: FAILED ceph_assert(osdmon()->is_writeable())
* Backport #59729: pacific: S3 CompleteMultipartUploadResult has empty ETag element
* Backport #61166: pacific: [WRN] : client.408214273 isn't responding to mclientcaps(revoke), ino 0x10000000003 pending pAsLsXsFs issued pAsLsXsFs, sent 62.303702 seconds ago
* Backport #61175: pacific: Swift static large objects are not deleted when segment object path set in manifest file does not start with '/'
* Backport #61340: pacific: mgr/prometheus: fix pool_objects_repaired and daemon_health_metrics format
* Backport #61351: pacific: Object Ownership Inconsistent
* Backport #61433: pacific: rgw: multisite data log flag not used
* Backport #61602: pacific: cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
* Backport #61723: pacific: BlueStore::_collection_list latency perf counter error
* Backport #61728: pacific: beast: add max_header_size option
* Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
* Backport #61755: pacific: avoid ballooning client_mount_timeout by 10x
* Backport #61803: pacific: Better help message for cephfs-journal-tool -help command for --rank option
* Backport #61812: pacific: mon-stretched_cluster: Site weights are not monitored post stretch mode deployment
* Backport #61822: pacific: osdmaptool crush
* Backport #61829: pacific: qa: test_join_fs_vanilla is racy
* Backport #61872: pacific: rgw: add support for http_date if http_x_amz_date is missing for sigv4
* Backport #62028: pacific: mds/MDSAuthCaps: "fsname", path, root_squash can't be in same cap with uid and/or gids
* Backport #62038: pacific: add --osd-id parameter support to ceph-volume raw prepare
* Backport #62062: pacific: ceph-volume lvm new-db requires 'bluestore-block-db-size' parameter
* Backport #62091: pacific: active mgr crashes with segfault when running 'ceph osd purge'
* Backport #62138: pacific: rgw_object_lock.cc: the maximum time of bucket object lock is 24855 days
* Backport #62154: pacific: the 'work around phantom atari partitions' code is broken
* Backport #62268: pacific: qa: _test_stale_caps does not wait for file flush before stat
* Backport #62300: pacific: retry metadata cache notifications with INVALIDATE_OBJ
* Backport #62308: pacific: rgw/syncpolicy: sync status doesn't reflect the sync policy set
* Backport #62337: pacific: MDSAuthCaps: use g_ceph_context directly
* Backport #62406: pacific: pybind/mgr/volumes: pending_subvolume_deletions count is always zero in fs volume info output
* Backport #62413: pacific: qa/sts: test_list_buckets_invalid_auth and test_list_buckets_bad_auth fail
* Backport #62427: pacific: nofail option in fstab not supported
* Backport #62505: pacific: http options cors request on a presigned url does not work on multi-tenant keystone buckets
* Backport #62517: pacific: mds: inode snaplock only acquired for open in create codepath
* Backport #62520: pacific: client: FAILED ceph_assert(_size == 0)
* Backport #62555: pacific: libcephsqlite: short reads fill 0s at beginning of buffer
* Backport #62572: pacific: mds: add cap acquisition throttled event to MDR
* Backport #62584: pacific: mds: enforce a limit on the size of a session in the sessionmap
* Bug #62586: TestClsRbd.mirror_snapshot failure in pacific p2p
* Backport #62608: pacific: mgr: DaemonServer::ms_handle_authentication acquires daemon locks
* Backport #62621: pacific: Add omit_usage query param to dashboard API endpoint for getting RBD info
* Backport #62662: pacific: mds: deadlock when getattr changes inode lockset
* Backport #62686: pacific: hang due to exclusive lock acquisition (STATE_WAITING_FOR_LOCK) racing with blocklisting
* Backport #62691: pacific: [rbd-mirror] demote snapshot does not get removed
* Backport #62731: pacific: mds: add TrackedOp event for batching getattr/lookup
* Backport #62751: pacific: Object with null version when using versioning and transition
* Backport #62779: pacific: btree allocator doesn't pass allocator's UTs
* Backport #62807: pacific: doc: write cephfs commands in full
* Backport #62818: pacific: osd: choose_async_recovery_ec may select an acting set < min_size
* Backport #62823: pacific: RadosGW API: incorrect bucket quota in response to HEAD /{bucket}/?usage
* Backport #62843: pacific: Lack of consistency in time format
* Backport #62854: pacific: qa: "cluster [ERR] MDS abort because newly corrupt dentry to be committed: [dentry #0x1/a [fffffffffffffff6,head] auth (dversion lock) v=13..."
* Backport #62865: pacific: cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* Backport #62879: pacific: cephfs-shell: update path to cephfs-shell since its location has changed
* Backport #62890: pacific: pg_autoscaler counting pools' uncompressed bytes as total_bytes, triggering false POOL_TARGET_SIZE_BYTES_OVERCOMMITTED warnings
* Backport #62897: pacific: client: evicted warning because client completes unmount before thrashed MDS comes back
* Backport #62902: pacific: mds: log a message when exiting due to asok "exit" command
* Backport #62906: pacific: mds,qa: some balancer debug messages (<=5) not printed when debug_mds is >=5
* Backport #62916: pacific: client: syncfs flush is only fast with a single MDS
* Backport #62928: pacific: BlueStore.h: 4148: FAILED ceph_assert(cur >= p.length)
* Backport #62945: pacific: New radosgw-admin commands to clean up leftover OLH index entries and unlinked instance objects
* Backport #62949: pacific: cephfs-mirror: do not run concurrent C_RestartMirroring context
* Backport #62996: pacific: Add detailed description for delayed ops in the OSD log file
* Backport #63023: pacific: AsyncMessenger::wait() isn't checking for spurious condition wakeup
* Backport #63035: pacific: The throttle parameter of osd does not take effect for mgr
* Backport #63040: pacific: CVE-2023-43040 - Improperly verified POST keys
* Backport #63043: pacific: s3test test_list_buckets_bad_auth fails with Keystone EC2
* Backport #63049: pacific: RGW s3website API prefetches data for range requests
* Backport #63052: pacific: SignatureDoesNotMatch when extra headers start with 'x-amzn'
* Backport #63055: pacific: high virtual memory consumption when dealing with Chunked Upload
* Backport #63058: pacific: crash: RGWSI_Notify::unwatch(RGWSI_RADOS::Obj&, unsigned long)
* Backport #63062: pacific: [test] reproducer for a deadlock which can occur when a watch error is hit while krbd is recovering from a previous watch error
* Backport #63144: pacific: qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* Backport #63157: pacific: another hang due to exclusive lock acquisition (STATE_WAITING_FOR_LOCK) racing with blocklisting
* Backport #63164: pacific: pybind/mgr/volumes: Log missing mutexes to help debug
* Backport #63173: pacific: crash: void MDLog::trim(int): assert(segments.size() >= pre_segments_size)
* Bug #63177: RGW user quotas are not honored when the bucket owner is different from the uploader
* Backport #63179: pacific: Share mon's purged snapshots with OSD
* Backport #63226: pacific: ceph-mgr seg faults when testing for rbd_support module recovery on repeated blocklisting of its client
* Backport #63254: pacific: Add bucket versioning info to radosgw-admin bucket stats output
* Backport #63283: pacific: client: crash during upgrade from octopus to quincy (or from pacific to reef)
* Backport #63311: pacific: report "Insufficient space (<5GB)" even when disk size is sufficient
* Bug #63327: Cython compiler error
* Bug #63345: install_dep.sh error
* Backport #63351: pacific: "rbd feature disable" remote request hangs when proxied to rbd-nbd
* Backport #63366: pacific: mgr: remove out&down osd from mgr daemons to avoid warnings
* Backport #63382: pacific: mgr/rbd_support: recovery from client blocklisting halts after MirrorSnapshotScheduleHandler tries to terminate its run thread
* Backport #63385: pacific: [test][rbd] test recovery of rbd_support module from repeated blocklisting of its client
* Backport #63401: pacific: pybind: ioctx.get_omap_keys asserts if start_after parameter is non-empty
* Backport #63406: pacific: cephfs: print better error message when MDS caps perms are not right
* Backport #63414: pacific: mon/MDSMonitor: metadata not loaded from PAXOS on update
* Backport #63419: pacific: mds: client request may complete without queueing next replay request
* Backport #63441: pacific: resharding RocksDB after upgrade to Pacific breaks OSDs
* Backport #63478: pacific: MClientRequest: properly handle ceph_mds_request_head_legacy for ext_num_retry, ext_num_fwd, owner_uid, owner_gid
* Bug #63493: Problem with PG deep-scrubbing in Ceph
* Backport #63512: pacific: client: queue a delayed cap flushing if there are dirty caps/snapcaps
* Backport #63513: pacific: MDS slow requests for the internal 'rename' requests
* Backport #63551: pacific: MDS_CLIENT_OLDEST_TID: 15 clients failing to advance oldest client/flush tid
* Backport #63570: pacific: mgr/dashboard: Graphs in Grafana Dashboard are not showing consistent line graphs after upgrading from RHCS 4 to 5
* Backport #63588: pacific: qa: fs:mixed-clients kernel_untar_build failure
* Backport #63600: pacific: RBD cloned image is slow in 4k write with "waiting for rw locks"
* Bug #63606: pacific: ObjectStore/StoreTestSpecificAUSize.BluestoreBrokenNoSharedBlobRepairTest/2 triggers FAILED ceph_assert(_kv_only || mounted)
* Backport #63624: pacific: SignatureDoesNotMatch for certain RGW Admin Ops endpoints when using v4 auth
* Backport #63649: pacific: Ceph-object-store to skip getting attrs of pgmeta objects
* Backport #63660: pacific: Typo in reshard example
* Backport #63677: pacific: ceph-volume prepare doesn't use partitions as-is anymore
* Backport #63714: pacific: qa/workunits/rbd/cli_generic.sh: rbd support module command not failing as expected after module's client is blocklisted
* Backport #63736: pacific: [diff-iterate] ObjectListSnapsRequest's LIST_SNAPS_FLAG_WHOLE_OBJECT behavior is broken
* Backport #63745: pacific: librbd crash in journal discard wait_event
* Backport #63759: pacific: Allocator configured with 64K alloc unit might get 4K requests
* Backport #63762: pacific: hybrid/avl allocators might be very ineffective when serving bluefs allocations
* Backport #63787: pacific: [rgw][lc] using custom lc schedule (work time) may cause lc processing to stall
* Backport #63832: pacific: kernel/fuse client using ceph ID with uid restricted MDS caps cannot update caps
* Backport #63833: pacific: mds: incorrectly issued the Fc caps in LOCK_EXCL_XSYN state for filelock
* Backport #63846: pacific: diff-iterate can report holes when diffing against the beginning of time (fromsnapname == NULL)
* Support #63852: How to diagnose imbalance in PG filling
* Backport #63878: pacific: tools/ceph_objectstore_tool: Support get/set/superblock
* Backport #63898: pacific: get_pool_is_selfmanaged_snaps_mode() API is broken by design
* Backport #63923: pacific: mgr/volumes: fix `subvolume group rm` command error message
* Backport #63963: pacific: sha256sum mismatch for boost_1_82_0.tar.bz2 in Shaman builds
* Backport #63975: pacific: Observing client.admin crash in thread_name 'rados' on executing 'rados clearomap..'
* Backport #63978: pacific: memory leak (RESTful module, maybe others?)
* Backport #63980: pacific: crash: virtual Monitor::~Monitor(): assert(session_map.sessions.empty())
* Backport #64006: pacific: weighted_shuffle() can provide std::discrete_distribution with all-zero weights
* Backport #64108: pacific: improve rbd_diff_iterate2() performance in fast-diff mode
* Bug #64203: RGW S3: list bucket results in a 500 Error when object-lock is enabled
* Bug #64256: "Cannot download repodata/repomd.xml: All mirrors were tried' rc: 1 results: []" in pacific-x-quincy
* Bug #64279: "Error ENOTSUP: Warning: due to ceph-mgr restart" in octopus-x/pacific suite
* Bug #64311: pacific: reinforce spawn_worker of msg/async
* Backport #64338: pacific: ceph-volume fails to zap encrypted journal device on partitions
* Backport #64362: pacific: BuildRocksDB.cmake doesn't pass optimization flags
* Backport #64395: pacific: mon: health store size growing infinitely
* Backport #64404: pacific: CORS Preflight Failure After Upgrading to 17.2.7
* Backport #64427: pacific: rgw: rados objects wrongly deleted
* Backport #64469: pacific: tasks.cephadm: ceph.log No such file or directory
* Bug #65179: rgw incorrectly uses `Range` header in `X-Amz-Cache`