# v18.0.0 Reef dev

* Bug #16745: mon: prevent allocating snapids allocated for CephFS
* Bug #16767: RadosGW Multipart Cleanup Failure
* Bug #24894: client: allow overwrites to files with size greater than the max_file_size config
* Bug #44092: mon: config commands do not accept whitespace style config name
* Bug #46438: mds: add vxattr for querying inherited layout
* Feature #48619: client: track (and forward to MDS) average read/write/metadata latency
* Bug #48812: qa: test_scrub_pause_and_resume_with_abort failure
* Bug #49823: rgw gc object leak when gc omap set entry failed with a large omap value
* Feature #50150: qa: begin grepping kernel logs for kclient warnings/failures to fail a test
* Bug #50974: rgw: storage class: GLACIER lifecycle doesn't work when STANDARD pool and GLACIER pool are equal
* Documentation #51459: doc: document what kinds of damage forward scrub can repair
* Feature #51537: use git `Prepare Commit Message` hook to add component in commit title
* Bug #52260: 1 MDSs are read only | pacific 16.2.5
* Bug #52982: client: Inode::hold_caps_until should be a time from a monotonic clock
* Bug #53504: client: infinite loop "got ESTALE" after mds recovery
* Bug #53573: qa: test new clients against older Ceph clusters
* Bug #53597: mds: FAILED ceph_assert(dir->get_projected_version() == dir->get_version())
* Cleanup #53682: common: use fmt::print for stderr logging
* Bug #53729: ceph-osd takes all memory before oom on boot
* Bug #53811: standby-replay mds is removed from MDSMap unexpectedly
* Bug #53950: mgr/dashboard: Health check failed: Module 'feedback' has failed: Not found or unloadable (MGR_MODULE_ERROR)" in cluster log
* Bug #53951: cluster [ERR] Health check failed: Module 'feedback' has failed: Not found or unloadable (MGR_MODULE_ERROR)" in cluster log
* Bug #53986: mgr/prometheus: The size of the export is not tracked as a metric returned to Prometheus
* Bug #53996: qa: update fs:upgrade tasks to upgrade from pacific instead of octopus, or quincy instead of pacific
* Bug #54017: Problem with ceph fs snapshot mirror and read-only folders
* Bug #54026: the sort sequence used by 'orch ps' is not in a natural sequence
* Bug #54028: alertmanager clustering is not configured consistently
* Bug #54046: inaccessible dentries after fsstress run with namespace-restricted caps
* Bug #54049: ceph-fuse: if a nonroot user runs ceph-fuse mount, the path is not expected to be added to /proc/self/mounts and the command should return failure
* Bug #54052: mgr/snap-schedule: scheduled snapshots are not created after ceph-mgr restart
* Bug #54066: mgr/volumes: uid/gid of the clone is incorrect
* Bug #54067: fs/maxentries.sh test fails with "2022-01-21T12:47:05.490 DEBUG:teuthology.orchestra.run:got remote process result: 124"
* Bug #54081: mon/MDSMonitor: sanity assert when inline data turned on in MDSMap from v16.2.4 -> v16.2.[567]
* Bug #54106: kclient: hang during workunit cleanup
* Bug #54107: kclient: hang during umount
* Bug #54108: qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* Bug #54111: data pool attached to a file system can be attached to another file system
* Fix #54174: rgw dbstore test env init wrong
* Bug #54237: pybind/cephfs: Add mapping for Errno 13: Permission Denied and adding path (in msg) while raising exception from opendir() in cephfs.pyx
* Feature #54308: monitoring/prometheus: mgr/cephadm should support a data retention spec for prometheus data
* Feature #54309: cephadm/monitoring: Update cephadm web endpoint to provide scrape configuration information to Prometheus
* Feature #54310: cephadm: allow services to have dependencies on rbd
* Bug #54311: cephadm/monitoring: monitoring stack versions are too old
* Fix #54317: qa: add testing in fs:workload for different kinds of subvolumes
* Bug #54325: lua: elasticsearch example script does not check for null object/bucket
* Bug #54345: mds: try to reset heartbeat when fetching or committing.
* Cleanup #54362: client: do not release the global snaprealm until unmounting
* Bug #54374: mgr/snap_schedule: include timezone information in scheduled snapshots
* Bug #54384: mds: crash due to seemingly unrecoverable metadata error
* Feature #54391: orch/cephadm: upgrade status output could be improved to make progress more transparent
* Feature #54392: orch/cephadm: Add a 'history' subcommand to the orch upgrade command
* Bug #54459: fs:upgrade fails with "hit max job timeout"
* Bug #54460: snaptest-multiple-capsnaps.sh test failure
* Bug #54461: ffsb.sh test failure
* Bug #54463: mds: flush mdlog if locked and still has wanted caps not satisfied
* Feature #54472: mgr/volumes: allow users to add metadata (key-value pairs) to subvolumes
* Feature #54476: rgw: allow S3 delete-marker behavior to be restored via config
* Bug #54499: rgw: Update "CEPH_RGW_DIR_SUGGEST_LOG_OP" for remove entries
* Bug #54501: libcephfs: client needs to update the mtime and change attr when snaps are created and deleted
* Documentation #54551: docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-mds cannot work
* Bug #54560: snap_schedule: avoid throwing traceback for bad or missing arguments
* Feature #54580: common/options: add FLAG_SECURE to Ceph options
* Bug #54625: Issue removing subvolume with retained snapshots - Possible quincy regression?
* Bug #54701: crash: void Server::set_trace_dist(ceph::ref_t&, CInode*, CDentry*, MDRequestRef&): assert(dnl->get_inode() == in)
* Bug #54760: crash: void CDir::try_remove_dentries_for_stray(): assert(dn->get_linkage()->is_null())
* Bug #54971: Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
* Bug #54976: mds: Test failure: test_filelock_eviction (tasks.cephfs.test_client_recovery.TestClientRecovery)
* Feature #54978: cephfs-top: addition of filesystem menu (improving GUI)
* Cleanup #54991: mgr/dashboard: don't log HTTP 3xx as errors
* Bug #55029: mgr/prometheus: ceph_mon_metadata is not consistently populating the ceph_version
* Bug #55041: mgr/volumes: display in-progress clones for a snapshot
* Bug #55107: Getting "Could NOT find utf8proc (missing: utf8proc_LIB)" error while building from master branch
* Bug #55110: mount.ceph: mount helper incorrectly passes `ms_mode' mount option to older kernel
* Bug #55112: cephfs-shell: saving files doesn't work as expected
* Feature #55121: cephfs-top: new options to limit and order-by
* Bug #55133: mgr/dashboard: Error message of /api/grafana/validation is not helpful
* Bug #55134: ceph pacific fails to perform fs/mirror test
* Bug #55148: snap_schedule: remove subvolume(-group) interfaces
* Bug #55170: mds: crash during rejoin (CDir::fetch_keys)
* Bug #55173: qa: missing dbench binary?
* Feature #55197: cephfs-top: make cephfs-top display scrollable like top
* Backport #55201: cephadm/monitoring: monitoring stack versions are too old
* Feature #55215: mds: fragment directory snapshots
* Bug #55216: cephfs-shell: creates directories in local file system even if file not found
* Bug #55217: pybind/mgr/volumes: Clone operation hangs
* Bug #55234: snap_schedule: replace .snap with the client configured snap dir name
* Bug #55240: mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
* Bug #55242: cephfs-shell: put command should accept both path mandatorily and validate local_path
* Bug #55258: lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
* Bug #55304: libcephsqlite: crash when compiled with gcc12 because of regex treating '-' as a range operator
* Bug #55313: Unexpected file access behavior using ceph-fuse
* Bug #55331: pjd failure (caused by xattr's value not consistent between auth MDS and replicate MDSes)
* Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
* Bug #55351: ceph-mon crash in handle_forward when add new message type
* Bug #55354: cephfs: xfstests-dev can't be run against fuse mounted cephfs
* Bug #55355: osd thread deadlock
* Backport #55366: cephadm/monitoring: monitoring stack versions are too old
* Bug #55377: kclient: mds revoke Fwb caps stuck after the kclient tries writeback once
* Feature #55401: mgr/volumes: allow users to add metadata (key-value pairs) for subvolume snapshot
* Feature #55463: cephfs-top: allow users to choose sorting order
* Feature #55470: qa: postgresql test suite workunit
* Bug #55476: rgw: remove entries from bucket index shards directly in limited cases
* Bug #55477: Global Ratelimit is overriding the per-user ratelimit
* Feature #55489: cephadm: Improve gather facts to tolerate mpath device configurations
* Bug #55516: qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
* Feature #55520: mgr/dashboard: Add `location` field to [ POST /api/host ]
* Documentation #55530: teuthology-suite -k option doesn't always override kernel
* Bug #55538: Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* Bug #55546: rgw: trigger dynamic reshard on index entry count rather than object count
* Bug #55547: rgw: figure out what to do with "--check-objects" option to radosgw-admin
* Feature #55551: device ls-lights should include the host where the devices are
* Feature #55576: [RFE] Add a rescan subcommand to the orch device command
* Bug #55578: mgr/dashboard: Creating and editing Prometheus AlertManager silences is buggy
* Bug #55583: Intermittent ParsingError failure in mgr/volumes module during "clone cancel"
* Bug #55595: cephadm: prometheus: The generatorURL in alerts is only using hostname
* Bug #55604: mgr/dashboard: form field validation icons overlap with other icons
* Bug #55618: RGWRados::check_disk_state not checking object's storage_class
* Bug #55619: rgw: input args poolid and epoch of function RGWRados::Bucket::UpdateIndex::complete_del should belong to index_pool
* Bug #55620: ceph pacific fails to perform fs/multifs test
* Bug #55638: alertmanager webhook urls may lead to 404
* Bug #55655: rgw: clean up linking targets to radosgw-admin
* Bug #55670: osdmaptool is not mapping child pgs to the target OSDs
* Bug #55673: mgr/cephadm: Deploying a cluster with the Vagrantfile fails
* Bug #55710: cephfs-shell: exit code unset when command has missing argument
* Feature #55715: pybind/mgr/cephadm/upgrade: allow upgrades without reducing max_mds
* Bug #55759: mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
* Bug #55762: mgr/volumes: Handle internal metadata directories under '/volumes' properly.
* Feature #55769: rgw: allow `radosgw-admin bucket stats` to report more accurately
* Feature #55777: Add server serial number information to cephadm gather-facts subcommand
* Bug #55778: client: choose auth MDS for getxattr with the Xs caps
* Bug #55807: qa failure: workload iogen failed
* Feature #55821: pybind/mgr/volumes: interface to check the presence of subvolumegroups/subvolumes.
* Bug #55822: mgr/volumes: Remove incorrect 'size' in the output of 'snapshot info' command
* Bug #55824: ceph-fuse[88614]: ceph mount failed with (65536) Unknown error 65536
* Bug #55837: mgr/dashboard: After several days of not being used, Dashboard HTTPS website hangs during loading, with no errors
* Bug #55851: Assert in Ceph messenger
* Bug #55861: Test failure: test_client_metrics_and_metadata (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
* Bug #55904: RGWRados::check_disk_state not checking object's appendable attr
* Bug #55905: Failed to build rados.cpython-310-x86_64-linux-gnu.so
* Bug #55971: LibRadosMiscConnectFailure.ConnectFailure test failure
* Bug #55976: mgr/volumes: Clone operations are failing with Assertion Error
* Bug #55980: mds,qa: some balancer debug messages (<=5) not printed when debug_mds is >=5
* Bug #56000: task/test_nfs: ERROR: Daemon not found: mds.a.smithi060.ujwxef. See `cephadm ls`
* Bug #56003: client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
* Bug #56010: xfstests-dev generic/444 test failed
* Bug #56012: mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
* Bug #56024: cephadm: removes ceph.conf during qa run causing command failure
* Feature #56058: mds/MDBalancer: add an arg to limit depth when dump loads for dirfrags
* Bug #56063: Snapshot retention config lost after mgr restart
* Bug #56116: mds: handle deferred client request core when mds reboot
* Feature #56140: cephfs: tooling to identify inode (metadata) corruption
* Bug #56162: mgr/stats: add fs_name as field in perf stats command output
* Bug #56169: mgr/stats: 'perf stats' command shows incorrect output with non-existing mds_rank filter.
* Feature #56178: [RFE] add a --force or --yes-i-really-mean-it to ceph orch upgrade
* Feature #56179: [RFE] Our prometheus instance should scrape itself
* Bug #56249: crash: int Client::_do_remount(bool): abort
* Bug #56269: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient(self)
* Bug #56270: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient(self)
* Bug #56274: crash: pthread_mutex_lock()
* Bug #56282: crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state() == LOCK_PRE_SCAN)
* Bug #56382: ONode ref counting is broken
* Bug #56384: ceph/test.sh: check_response erasure-code didn't find erasure-code in output
* Bug #56446: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* Bug #56480: std::shared_mutex deadlocks on Windows
* Bug #56483: mgr/stats: missing clients in perf stats command output.
* Bug #56488: BlueStore doesn't defer small writes for pre-pacific hdd osds
* Bug #56529: ceph-fs crashes on getfattr
* Bug #56536: cls_rgw: nonexistent object should not be accounted when check_index
* Bug #56537: cephfs-top: wrong/infinitely changing wsp values
* Bug #56626: "ceph fs volume create" fails with error ERANGE
* Bug #56632: qa: test_subvolume_snapshot_clone_quota_exceeded fails CommandFailedError
* Bug #56666: mds: standby-replay daemon always removed in MDSMonitor::prepare_beacon
* Bug #56667: cephadm install fails: apt:stderr E: Unable to locate package cephadm
* Bug #56671: zabbix module does not process some config options correctly
* Bug #56672: 'ceph zabbix send' can block (mon) ceph commands and messages
* Bug #56673: rgw: 'bucket check' deletes index of multipart meta when its pending_map is nonempty
* Bug #56694: qa: avoid blocking forever on hung umount
* Bug #56696: admin keyring disappears during qa run
* Bug #56698: client: FAILED ceph_assert(_size == 0)
* Documentation #56730: doc: update snap-schedule notes regarding 'start' time
* Bug #56830: crash: cephfs::mirror::PeerReplayer::pick_directory()
* Bug #56945: python: upgrade to 3.8 and/or 3.9
* Bug #56992: rgw_op.cc: Deleting a non-existent object also generates a delete marker
* Bug #57005: mgr/dashboard: Cross site scripting in Angular <11.0.5 (CVE-2021-4231)
* Bug #57014: cephfs-top: add an option to dump the computed values to stdout
* Bug #57044: mds: add some debug logs for "crash during construction of internal request"
* Bug #57072: Quincy 17.2.3 pybind/mgr/status: assert metadata failed
* Bug #57084: Permissions of the .snap directory do not inherit ACLs
* Feature #57090: MDSMonitor,mds: add MDSMap flag to prevent clients from connecting
* Feature #57091: mds: modify scrub to catch dentry corruption
* Bug #57094: x-amz-date protocol change breaks aws v4 signature logic: was rfc 2616. Should now be iso 8601.
* Bug #57126: client: abort the client daemons when we couldn't invalidate the dentry caches from kernel
* Documentation #57127: doc: add debugging documentation
* Bug #57138: mgr(snap-schedule): may raise TypeError in rm_schedule
* Bug #57152: segfault in librados via libcephsqlite
* Tasks #57172: Yield Context Threading
* Bug #57204: MDLog.h: 99: FAILED ceph_assert(!segments.empty())
* Bug #57205: Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* Bug #57210: NFS client unable to see newly created files when listing directory contents in a FS subvolume clone
* Bug #57249: mds: damage table only stores one dentry per dirfrag
* Bug #57280: qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* Fix #57295: qa: remove RHEL from job matrix
* Bug #57299: qa: test_dump_loads fails with JSONDecodeError
* Bug #57335: cephadm gather-facts reports disk size incorrectly for native 4k sectors
* Bug #57449: qa: removal of host during QA
* Feature #57459: mgr/dashboard: add support for creating realm/zonegroup/zone
* Bug #57580: Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
* Bug #57586: first-damage.sh does not handle dentries with spaces
* Bug #57589: cephfs-data-scan: scan_links is not verbose enough
* Bug #57597: qa: data-scan/journal-tool do not output debugging in upstream testing
* Bug #57598: qa: test_recovery_pool uses wrong recovery procedure
* Bug #57620: mgr/volumes: addition of human-readable flag to volume info command
* Bug #57657: mds: scrub locates mismatch between child accounted_rstats and self rstats
* Documentation #57673: doc: document the relevance of mds_namespace mount option
* Bug #57674: fuse mount crashes the standby MDSes
* Bug #57677: qa: "1 MDSs behind on trimming (MDS_TRIM)"
* Documentation #57737: Clarify security implications of path-restricted cephx capabilities
* Bug #57764: Thread md_log_replay is hung forever.
* Bug #57851: pybind/mgr/snap_schedule: use temp_store for db
* Bug #57854: mds: make num_fwd and num_retry __u32
* Bug #57881: LDAP invalid password resource leak fix
* Bug #57912: mgr/dashboard: Dashboard creation of NFS exports with RGW backend fails: "selected realm is not the default"
* Bug #57923: log: writes to stderr (pipe) may not be atomic
* Bug #58000: mds: switch submit_mutex to fair mutex for MDLog
* Bug #58008: mds/PurgeQueue: don't consider filer_max_purge_ops when _calculate_ops
* Bug #58028: cephfs-top: Sorting doesn't work when the filesystems are removed and created
* Bug #58029: cephfs-data-scan: multiple data pools are not supported
* Bug #58030: mds: avoid ~mdsdir's scrubbing and reporting damage health status
* Bug #58034: RGW misplaces index entries after dynamically resharding bucket
* Bug #58041: mds: src/mds/Server.cc: 3231: FAILED ceph_assert(straydn->get_name() == straydname)
* Bug #58082: cephfs: filesystem became read-only after Quincy upgrade
* Bug #58095: snap-schedule: handle non-existent path gracefully during snapshot creation
* Bug #58109: ceph-fuse: doesn't work properly when the version of libfuse is 3.1 or later
* Bug #58128: FTBFS with fmtlib 9.1.0
* Feature #58133: qa: add test cases for fscrypt feature in kernel CephFS client
* Feature #58150: Add high-level host-related information to the orch host ls command
* Bug #58156: Monitors do not permit OSD to join after upgrading to Quincy
* Bug #58219: Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration) [Command crashed: 'ceph-dencoder type JournalPointer import - decode dump_json']
* Bug #58220: Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* Bug #58221: pacific: Test failure: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)
* Feature #58227: Expose additional OSD/PG related information to monitoring
* Bug #58269: ceph mgr fail after upgrade to pacific
* Bug #58286: Subsequent request fails after PutObject to non-existing bucket
* Bug #58294: MDS: scan_stray_dir doesn't walk through all stray inode fragments
* Bug #58330: RGW service crashes regularly with floating point exception
* Bug #58353: cephadm/ingress: default haproxy image not using 'LTS' release.
* Bug #58379: no active mgr after ~1 hour
* Bug #58442: rgw-orphan-list tool can list all rados objects as orphans
* Bug #58453: rgw-gap-list has insufficient error checking
* Backport #58470: pacific: It is not possible to set empty tags on buckets and objects.
* Bug #58489: mds stuck in 'up:replay' and crashed.
* Feature #58565: rgw: add replication status header to s3 GetObj response
* Bug #58572: Rook: Recover device inventory
* Bug #58651: mgr/volumes: avoid returning ESHUTDOWN for cli commands
* Bug #58670: segfault due to race condition between timeout handler calling close() and the call of async_shutdown()
* Bug #58671: Frontend socket leak that leads to OOM when connections are reset
* Bug #58678: cephfs_mirror: local and remote dir root modes are not the same
* Bug #58691: store names of modules that register RADOS clients in the MgrMap
* Bug #58717: client: fix CEPH_CAP_FILE_WR caps reference leakage in _write()
* Bug #58744: qa: intermittent nfs test failures at nfs cluster creation
* Fix #58758: qa: fix testcase 'test_cluster_set_user_config_with_non_existing_clusterid'
* Bug #58762: rgw/dbstore: teuthology reports a set of user policy failures/errors on main
* Bug #58813: cephfs-top: Sort menu doesn't show 'No filesystem available' screen when all fs are removed
* Bug #58814: cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
* Bug #58838: mgr/dashboard: POD CPU usage is incorrect
* Bug #58839: mgr/dashboard: rbd-mirror images status could be misleading
* Bug #58840: mgr/dashboard: Misleading pagination in rbd-mirror images table
* Bug #58841: mgr/dashboard: PG status in pools view difficult to read
* Bug #58842: mgr/dashboard: Pools stats showing NAN
* Bug #58843: mgr/dashboard: Daemon Memory Usage is incorrect
* Bug #58853: mds: Jenkins fails with skipping unrecognized type MClientRequest::Release
* Bug #58884: ceph: osd blocklist does not accept v2/v1: prefix for addr
* Bug #58934: snaptest-git-ceph.sh failure with ceph-fuse
* Backport #58978: reef: bucket mtime is zero
* Backport #58981: reef: multisite reshard: old buckets with num_shards=0 get resharded to a new empty shard
* Backport #59004: reef: test_rgw_datacache.py test is failing with Permission denied
* Backport #59013: reef: rgwlc/notifications: initialize object state before accessing object size for use in notify
* Backport #59028: reef: relying on boost flatmap emplace behavior is risky
* Feature #59053: rgw: experimental support for restoring a lost bucket index
* Backport #59055: reef: rgw: experimental support for restoring a lost bucket index
* Backport #59072: reef: mgr/dashboard: fix prometheus endpoint issues in dashboard v3
* Cleanup #59078: rgw: install rgw scripts with common files rather than radosgw files
* Backport #59094: reef: mgr/dashboard: Add the option to toggle the different views of the landing page mgr/dashboard: short_description
* Bug #59120: qa: use parallel gzip for compressing logs
* Feature #59122: rgw: add an unordered listing to the script to force stats update
* Backport #59133: reef: DeleteObjects response does not include DeleteMarker/DeleteMarkerVersionId
* Backport #59145: reef: rgw: request QUERY_STRING is duplicated into ops-log uri element
* Backport #59151: reef: rgw: install rgw scripts with common files rather than radosgw files
* Backport #59220: reef: rgw/verify suite should not pin centos
* Backport #59222: reef: mds: catch damage to CDentry's first member before persisting
* Backport #59232: reef: Support bucket notification with bucket policy
* Backport #59273: reef: 'radosgw-admin data sync status' doesn't parse error repo entries
* Backport #59275: reef: STS AssumeRoleWithWebIdentity improper url concatenation of ISS and well-known configuration path
* Backport #59278: reef: Copying an object to itself crashes the RGW if executed as admin user.
* Backport #59280: reef: multisite: after upgrade, bucket sync always restarts from full sync
* Backport #59292: reef: rgw/upgrade: 'Failed to fetch package' for quincy in ubuntu 22.04
* Backport #59295: reef: MgrMonitor: batch commit OSDMap and MgrMap mutations
* Backport #59323: reef: mgr/dashboard: mirror image replay progress empty
* Backport #59351: reef: centos9: libcls_rgw.so has undefined libfmt symbol
* Backport #59356: reef: sse: multipart uploads aren't using default encryption policy
* Backport #59358: reef: Keystone EC2 auth does not support STREAMING-AWS4-HMAC-SHA256-PAYLOAD
* Backport #59360: reef: metadata cache: if a watcher is disconnected and reinit() fails, it won't be retried again
* Backport #59377: reef: rgw/s3 transfer encoding problems.
* Backport #59402: reef: mgr/dashboard: In expand cluster create osd default selected as recommended not working
* Bug #59427: rgw: `radosgw-admin bi list ...` fails with certain buckets
* Backport #59493: reef: test_librgw_file.sh crashes: src/tcmalloc.cc:332] Attempt to free invalid pointer 0x55e8173eebd0
* Backport #59617: reef: RGW matching guest's stats against bucket owner's when checking quotas
* Backport #59643: reef: rgw/cloud-transition: cloud-tiered objects are not synced in multisite environment
* Bug #59656: pg_upmap_primary timeout
* Backport #59673: reef: rgw/archive: Sync logs are not trimmed
* Bug #59689: mgr/dashboard: SSO error: AttributeError: 'str' object has no attribute 'decode'
* Backport #61156: reef: s3select fixes for trino interop
* Bug #61198: rgw: multisite data log flag not used
* Backport #61219: reef: valgrind: UninitCondition error in RGWHandler_REST::allocate_formatter()
* Backport #61298: reef: cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
* Bug #61333: kernel/fuse client using ceph ID with uid restricted MDS caps cannot update caps
* Backport #61373: reef: GetObj crashes when reading an object without a manifest
* Backport #61376: reef: RGW crashes when replication rules are set using PutBucketReplication S3 API
* Backport #61387: reef: bucket sync markers command returns error
* Backport #61392: reef: rgw: Support disabling bucket replication using sync-policy
* Feature #61405: rgw: allow rgw-restore-bucket-index to handle versioned buckets
* Backport #61406: reef: rgw/multisite: Writing on a bucket with num_shards 0 causes sync issues
* Backport #61454: reef: remove sal handles from RGWRados::Object ops
* Bug #61588: radosgw-admin: System Attributes on Objects can cause object stat to dump invalid JSON