# v19.0.0 03/01/2024 Squid dev

* Bug #24403: mon failed to return metadata for mds
* Bug #43221: rgw: GET Bucket fails on renamed bucket on archive zone
* Bug #43393: qa: add support/qa for cephfs-shell on CentOS 9 / RHEL9
* Bug #44565: src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
* Bug #44660: Multipart re-uploads cause orphan data
* Bug #44916: client: syncfs flush is only fast with a single MDS
* Bug #45736: rgw: lack of headers in 304 response
* Bug #48673: High memory usage on standby replay MDS
* Bug #48678: client: spins on tick interval
* Bug #48750: ceph config set using osd/host mask not working
* Fix #51177: pybind/mgr/volumes: investigate moving calls which may block on libcephfs into another thread
* Bug #51824: pacific scrub ~mds_dir causes stray related ceph_assert, abort and OOM
* Bug #52280: Mds crash and fails with assert on prepare_new_inode
* Documentation #52656: mgr/prometheus: wrong unit in RBD latency metric description
* Bug #53192: High cephfs MDS latency and CPU load with snapshots and unlink operations
* Bug #53240: full-object read crc mismatch because truncate modifies oi.size and forgets to clear data_digest
* Bug #53724: mds: stray directories are not purged when all past parents are clear
* Bug #54182: OSD_TOO_MANY_REPAIRS cannot be cleared in >=Octopus
* Bug #54557: scrub repair does not clear earlier damage health status
* Bug #54833: crash: void Locker::handle_file_lock(ScatterLock*, ceph::cref_t&): assert(lock->get_state() == LOCK_LOCK || lock->get_state() == LOCK_MIX || lock->get_state() == LOCK_MIX_SYNC2)
* Bug #55165: client: validate pool against pool ids as well as pool names
* Bug #55606: [ERR] Unhandled exception from module 'devicehealth' while running on mgr.y: unknown
* Feature #55940: quota: accept values in human readable format as well
* Bug #56011: fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
* Bug #56034: qa/standalone/osd/divergent-priors.sh fails in test TEST_divergent_3()
* Bug #56192: crash: virtual Monitor::~Monitor(): assert(session_map.sessions.empty())
* Bug #56239: crash: File "mgr/devicehealth/module.py", in get_recent_device_metrics: return self._get_device_metrics(devid, min_sample=min_sample)
* Bug #56397: client: `df` will show incorrect disk size if the quota size is not aligned to 4MB
* Bug #56577: mds: client request may complete without queueing next replay request
* Bug #56695: [RHEL stock] pjd test failures (a bug that needs to wait for the unlink to finish)
* Bug #57048: osdc/Journaler: better handle ENOENT during replay as up:standby-replay
* Bug #57071: mds: consider mds_cap_revoke_eviction_timeout for get_late_revoking_clients()
* Bug #57087: qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* Bug #57154: kernel/fuse client using ceph ID with uid restricted MDS caps cannot update caps
* Bug #57206: ceph_test_libcephfs_reclaim crashes during test
* Bug #57244: [WRN] : client.408214273 isn't responding to mclientcaps(revoke), ino 0x10000000003 pending pAsLsXsFs issued pAsLsXsFs, sent 62.303702 seconds ago
* Bug #57641: Ceph FS fscrypt clones missing fscrypt metadata
* Bug #57655: qa: fs:mixed-clients kernel_untar_build failure
* Bug #57985: mds: warning `clients failing to advance oldest client/flush tid` seen with some workloads
* Fix #58023: mds: do not evict clients if OSDs are laggy
* Feature #58057: cephfs-top: enhance fstop tests to cover testing displayed data
* Feature #58129: mon/FSCommands: support swapping file systems by name
* Feature #58154: mds: add minor segment boundaries
* Bug #58195: mgr/snap_schedule: catch all exceptions to avoid crashing module
* Bug #58228: mgr/nfs: disallow non-existent paths when creating export
* Bug #58303: active mgr crashes with segfault when running 'ceph osd purge'
* Bug #58340: mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegrate straydn(rename))
* Bug #58394: nofail option in fstab not supported
* Bug #58411: mds: a few simple operations crash mds
* Bug #58482: mds: catch damage to CDentry's first member before persisting
* Feature #58550: mds: add perf counter to track (relatively) larger log events
* Bug #58569: Add the ability to configure options for ceph-volume to pass to cryptsetup
* Bug #58591: report "Insufficient space (<5GB)" even when disk size is sufficient
* Bug #58619: mds: client evict [-h|--help] evicts ALL clients
* Bug #58645: Unclear error when creating new subvolume when subvolumegroup has ceph.dir.subvolume attribute set to 1
* Bug #58677: cephfs-top: test the current python version is supported
* Feature #58680: libcephfs: clear the suid/sgid for fallocate
* Bug #58812: ceph-volume prepare doesn't use partitions as-is anymore
* Bug #58832: ceph-mgr package installation fails on centos 9
* Feature #58877: mgr/volumes: regenerate subvolume metadata for possibly broken subvolumes
* Bug #58924: mgr: block register_client on new MgrMap
* Bug #58943: ceph-volume's deactivate doesn't close encrypted volumes
* Cleanup #58961: mgr/dashboard: remove old dashboard (dashboard v3)
* Bug #58971: mon/MDSMonitor: do not trigger propose on error from prepare_update
* Bug #58972: mon/OSDMonitor: do not propose on error in prepare_update
* Cleanup #58973: mgr/dashboard: RGW 404 shouldn't trigger log exceptions
* Bug #58974: mon/MonmapMonitor: do not propose on error in prepare_update
* Bug #58975: mon: do not erroneously propose on error in ::prepare_update
* Bug #59042: mon/AuthMonitor: do not erroneously propose on error in ::prepare_update
* Bug #59043: mon/ConfigMonitor: do not erroneously propose on error in ::prepare_update
* Bug #59044: mon/HealthMonitor: do not erroneously propose on error in ::prepare_update
* Bug #59045: mon/KVMonitor: do not erroneously propose on error in ::prepare_update
* Bug #59046: mon/LogMonitor: do not erroneously propose on error in ::prepare_update
* Bug #59047: mon/MgrStatMonitor: do not erroneously propose on error in ::prepare_update
* Bug #59067: mds: add cap acquisition throttled event to MDR
* Bug #59107: MDS imported_inodes metric is not updated.
* Bug #59134: mds: deadlock during unlink with multimds (postgres)
* Bug #59183: cephfs-data-scan: does not scan_links for lost+found
* Bug #59185: MDSMonitor: should batch propose osdmap/mdsmap changes via some fs commands
* Bug #59188: cephfs-top: cephfs-top -d not working as expected
* Bug #59192: cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
* Bug #59230: Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
* Bug #59297: qa: test_join_fs_unset failure
* Bug #59314: mon/MDSMonitor: plug PAXOS when evicting an MDS
* Bug #59318: mon/MDSMonitor: daemon booting may get failed if mon handles up:boot beacon twice
* Feature #59328: mgr/dashboard: add support for editing RGW zone
* Bug #59332: qa: test_rebuild_simple checks status on wrong file system
* Bug #59343: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
* Bug #59344: qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* Bug #59350: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
* Feature #59388: mds/MDSAuthCaps: "fsname", path, root_squash can't be in same cap with uid and/or gids
* Bug #59413: cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* Bug #59425: qa: RuntimeError: more than one file system available
* Bug #59463: mgr/nfs: Setting NFS export config using -i option is not working
* Bug #59514: client: read wild pointer when reconnect to mds
* Bug #59527: qa: run scrub post disaster recovery procedure
* Bug #59529: cluster upgrade stuck with OSDs and MDSs not upgraded.
* Bug #59551: mgr/stats: exception ValueError: invalid literal for int() with base 16: '0x'
* Bug #59552: mon: block osd pool mksnap for fs pools
* Bug #59553: cephfs-top: fix help text for delay
* Bug #59569: mds: allow entries to be removed from lost+found directory
* Bug #59580: memory leak (RESTful module, maybe others?)
* Bug #59582: snap-schedule: allow retention spec to specify max number of snaps to retain
* Bug #59624: pybind/ceph_argparse: Error message is not descriptive for ceph tell command
* Bug #59657: qa: test with postgres failed (deadlock between link and migrate straydn(rename))
* Fix #59667: qa: ignore cluster warning encountered in test_refuse_client_session_on_reconnect
* Bug #59682: CephFS: Debian cephfs-mirror package in the Ceph repo doesn't install the unit file or man page
* Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
* Bug #59684: Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* Bug #59691: mon/MDSMonitor: may lookup non-existent fs in current MDSMap
* Bug #59705: client: only wait for write MDS OPs when unmounting
* Feature #59714: mgr/volumes: Support to reject CephFS clones if cloner threads are not available
* Bug #59716: tools/cephfs/first-damage: unicode decode errors break iteration
* Feature #59727: The libradosstriper interface provides an optional parameter to avoid shared lock when reading data
* Bug #59735: fs/ceph: cross check passed in fsid during mount with cluster fsid
* Bug #59813: crash: void PaxosService::propose_pending(): assert(have_pending)
* Bug #61009: crash: void interval_set::erase(T, T, std::function) [with T = inodeno_t; C = std::map]: assert(p->first <= start)
* Bug #61148: dbench test results in call trace in dmesg
* Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finishes times out.
* Bug #61186: mgr/nfs: hitting incomplete command returns same suggestion twice
* Bug #61201: qa: test_rebuild_moved_file (tasks/data-scan) fails because mds crashes in pacific
* Bug #61332: dbench test results in call trace in dmesg
* Bug #61369: [reef] RGW crashes when replication rules are set using PutBucketReplication S3 API
* Fix #61378: mds: turn off MDS balancer by default
* Bug #61399: qa: build failure for ior
* Bug #61400: valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* Bug #61409: qa: _test_stale_caps does not wait for file flush before stat
* Feature #61417: Zoned Block Devices (ZNS) support
* Bug #61444: mds: session ls command appears twice in command listing
* Bug #61459: mds: session in the importing state cannot be cleared if an export subtree task is interrupted while the state of importer is acking
* Bug #61466: Add bluefs write op count metrics
* Bug #61523: client: do not send metrics until the MDS rank is ready
* Bug #61572: mgr: remove invalid zero performance counter
* Bug #61574: qa: build failure for mdtest project
* Feature #61595: Consider setting "bulk" autoscale pool flag when automatically creating a data pool for CephFS
* Feature #61599: mon/MDSMonitor: optionally forbid to use standby for another fs as last resort
* Bug #61627: Mds crash and fails with assert on prepare_new_inode
* Bug #61629: rgw: add support http_date if http_x_amz_date is missing for sigv4
* Bug #61666: cephfs: print better error message when MDS caps perms are not right
* Bug #61690: mgr/dashboard: install_deps.sh fails on vanilla CentOS 8 Stream
* Bug #61749: mds/MDSRank: op_tracker of mds always has slow ops
* Bug #61753: Better help message for cephfs-journal-tool -help command for --rank option.
* Bug #61764: qa: test_join_fs_vanilla is racy
* Bug #61775: cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
* Bug #61782: mds: cap revoke and cap update's seqs mismatched
* Bug #61837: qa: test_progress fails with osds full (?)
* Bug #61844: mgr/dashboard: dashboard thread abort
* Bug #61864: mds: replay thread does not update some essential perf counters
* Documentation #61865: add doc on how to expedite MDS recovery with a lot of log segments
* Feature #61866: MDSMonitor: require --yes-i-really-mean-it when failing an MDS with MDS_HEALTH_TRIM or MDS_HEALTH_CACHE_OVERSIZED health warnings
* Bug #61867: mgr/volumes: async threads should periodically check for work
* Bug #61869: pybind/cephfs: holds GIL during rmdir
* Bug #61874: mgr: DaemonServer::ms_handle_authentication acquires daemon locks
* Bug #61879: mds: linkmerge assert check is incorrect in rename codepath
* Bug #61897: qa: rados:mgr fails with MDS_CLIENTS_LAGGY
* Feature #61908: mds: provide configuration for trim rate of the journal
* Bug #61912: mgr hang when purging OSDs
* Bug #61942: The throttle parameter of osd does not take effect for mgr
* Bug #61947: mds: enforce a limit on the size of a session in the sessionmap
* Bug #61950: mds/OpenFileTable: match MAX_ITEMS_PER_OBJ does not honor osd_deep_scrub_large_omap_object_key_threshold
* Bug #61957: test_client_limits.TestClientLimits.test_client_release_bug fails
* Bug #61958: mds: add debug logs for handling setxattr for ceph.dir.subvolume
* Bug #61967: mds: "SimpleLock.h: 417: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())"
* Bug #61972: cephfs/tools: cephfs-data-scan "cleanup" operation is not parallelised
* Bug #62002: raw list shouldn't list lvm OSDs
* Bug #62013: Object with null version when using versioning and transition
* Bug #62021: mds: unnecessary second lock on snaplock
* Bug #62036: src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
* Bug #62047: delete image fails on dashboard while it has some protected snapshots
* Bug #62052: mds: deadlock when getattr changes inode lockset
* Bug #62057: mds: add TrackedOp event for batching getattr/lookup
* Bug #62058: mds: inode snaplock only acquired for open in create codepath
* Bug #62075: New radosgw-admin commands to cleanup leftover OLH index entries and unlinked instance objects
* Bug #62076: reef: Test failure: test_grow_shrink (tasks.cephfs.test_failover.TestMultiFilesystems)
* Bug #62077: mgr/nfs: validate path when modifying cephfs export
* Bug #62081: tasks/fscrypt-common does not finish, times out
* Bug #62084: task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd'
* Feature #62086: mds: print locks when dumping ops
* Bug #62096: mds: infinite rename recursion on itself
* Bug #62114: mds: adjust cap acquisition throttle defaults
* Bug #62146: qa: adjust fs:upgrade to use centos_8 yaml
* Bug #62160: mds: MDS abort because newly corrupt dentry to be committed
* Bug #62164: qa: "cluster [ERR] MDS abort because newly corrupt dentry to be committed: [dentry #0x1/a [fffffffffffffff6,head] auth (dversion lock) v=13..."
* Bug #62187: iozone: command not found
* Bug #62203: gcc-+12: FTBFS random_shuffle is deprecated: use 'std::shuffle' instead
* Bug #62208: mds: use MDSRank::abort to ceph_abort so necessary sync is done
* Bug #62217: ceph_fs.h: add separate owner_{u,g}id fields
* Bug #62236: qa: run nfs related tests with fs suite
* Bug #62250: retry metadata cache notifications with INVALIDATE_OBJ
* Bug #62265: cephfs-mirror: use monotonic clocks in cephfs mirror daemon
* Bug #62277: Error: Unable to find a match: python2 with fscrypt tests
* Bug #62278: pybind/mgr/volumes: pending_subvolume_deletions count is always zero in fs volume info output
* Bug #62320: lvm list should filter also on vg name
* Bug #62326: pybind/mgr/cephadm: stop disabling fsmap sanity checks during upgrade
* Bug #62355: cephfs-mirror: do not run concurrent C_RestartMirroring context
* Bug #62357: tools/cephfs_mirror: only perform actions if init succeeds
* Bug #62382: mon/MonClient: ms_handle_fast_authentication return value ignored
* Bug #62385: mgr/status: wrong kb_used and kb_avail units in command `ceph osd status` results
* Bug #62401: ObjectStore/StoreTestSpecificAUSize.SpilloverFixedTest/2 fails from l_bluefs_slow_used_bytes not matching the expected value
* Bug #62428: cmake: rebuild picks up newer python when originally built with older one
* Feature #62474: rgw: add versioning status during `radosgw-admin bucket stats`
* Bug #62482: qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)"
* Bug #62492: libcephsqlite: short reads fill 0s at beginning of buffer
* Bug #62494: Lack of consistency in time format
* Bug #62501: pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snapshot_rm.sh)
* Bug #62508: qa: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
* Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
* Bug #62537: cephfs scrub command will crash the standby-replay MDSs
* Bug #62567: postgres workunit times out - MDS_SLOW_REQUEST in logs
* Bug #62577: mds: log a message when exiting due to asok "exit" command
* Bug #62578: mon: osd pg-upmap-items command causes PG_DEGRADED warnings
* Bug #62579: client: evicted warning because client completes unmount before thrashed MDS comes back
* Bug #62580: testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* Bug #62626: mgr/nfs: include pseudo in JSON output when nfs export apply -i fails
* Bug #62627: global: core fatal signal handler uses may signal-unsafe functions
* Bug #62641: mgr/(object_format && nfs/export): enhance nfs export update failure response
* Bug #62658: error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* Bug #62659: mgr/nfs: report actual errno instead of EIO for single export update failure
* Bug #62663: MDS: inode nlink value is -1 causing MDS to continuously crash
* Bug #62681: high virtual memory consumption when dealing with Chunked Upload
* Bug #62682: mon: no mdsmap broadcast after "fs set joinable" is set to true
* Bug #62700: postgres workunit failed with "PQputline failed"
* Bug #62702: MDS slow requests for the internal 'rename' requests
* Bug #62737: RadosGW API: incorrect bucket quota in response to HEAD /{bucket}/?usage
* Bug #62739: cephfs-shell: remove distutils Version classes because they're deprecated
* Bug #62763: qa: use stdin-killer for ceph-fuse mounts
* Bug #62778: cmake: BuildFIO.cmake should not introduce -std=gnu++17
* Bug #62793: client: setfattr -x ceph.dir.pin: No such attribute
* Bug #62832: common: config_proxy deadlock during shutdown (and possibly other times)
* Bug #62848: qa: fail_fs upgrade scenario hanging
* Bug #62861: mds: _submit_entry ELid(0) crashed the MDS
* Bug #62863: Slowness or deadlock in ceph-fuse causes teuthology job to hang and fail
* Bug #62870: test_nfs task fails due to no orch backend set
* Bug #62873: qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* Bug #62875: SignatureDoesNotMatch when extra headers start with 'x-amzn'
* Feature #62882: mds: create an admin socket command for raising a signal
* Feature #62884: audit: create audit module which persists in RADOS important operations performed on the cluster
* Feature #62892: mgr/snap_schedule: restore scheduling for subvols and groups
* Bug #62925: cephfs-journal-tool: Add preventive measures in the tool to avoid corrupting a ceph file system
* Bug #62936: Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* Bug #62938: RGW s3website API prefetches data for range requests
* Bug #62953: qa: fs:upgrade needs to be updated to upgrade only from N-2, N-1 releases (i.e. reef/quincy)
* Bug #62962: mds: standby-replay daemon crashes on replay
* Bug #62968: mgr/volumes: fix `subvolume group rm` command error message
* Bug #62979: client: queue a delayed cap flushing if there are dirty caps/snapcaps
* Bug #63004: CVE-2023-43040 - Improperly verified POST keys.
* Bug #63088: mgr/dashboard: Graphs in Grafana Dashboard are not showing consistent line graphs after upgrading from RHCS 4 to 5.
* Bug #63093: mds: `dump dir` command should indicate that a dir is not cached
* Bug #63103: mds: disable delegating inode ranges to clients
* Bug #63132: qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FULL warning
* Bug #63141: qa/cephfs: test_idem_unaffected_root_squash fails
* Bug #63154: fs rename must require FS to be offline and refuse_client_session to be set
* Bug #63166: mon/MDSMonitor: metadata not loaded from PAXOS on update
* Bug #63170: rgw: indexless buckets can crash radosgw during sync threads
* Bug #63176: qa: make rank_asok() capable of handling errors from asok commands
* Bug #63188: client: crash during upgrade from octopus to quincy (or from pacific to reef)
* Bug #63195: mgr: remove out&down osd from mgr daemons to avoid warnings
* Bug #63196: compilation fails from git main on ppc64le with missing symbols
* Bug #63218: cmake: dependency ordering error for liburing and librocksdb
* Feature #63224: [RFE] Add an alert for swap space usage
* Bug #63245: rgw/s3select: crashes in test_progress_expressions in run_s3select_on_csv
* Bug #63259: mds: failed to store backtrace and force file system read-only
* Bug #63281: src/mds/MDLog.h: 100: FAILED ceph_assert(!segments.empty())
* Bug #63287: mgr/dashboard: Unable to set max objects under user quota for a user
* Bug #63288: MClientRequest: properly handle ceph_mds_request_head_legacy for ext_num_retry, ext_num_fwd, owner_uid, owner_gid
* Cleanup #63294: mgr: enable per-subinterpreter GIL (Python >= 3.12)
* Bug #63301: cephfs-data-scan: may remove newest primary link of inode
* Feature #63344: Set and manage nvmeof gw - controller ids ranges
* Bug #63353: resharding RocksDB after upgrade to Pacific breaks OSDs
* Documentation #63354: Add all mon_cluster_log_* configs to docs
* Bug #63360: rgw: rgw-restore-bucket-index does not restore some versioned instance entries
* Bug #63364: MDS_CLIENT_OLDEST_TID: 15 clients failing to advance oldest client/flush tid
* Bug #63388: mgr: discovery service (port 8765) fails if ms_bind ipv6 only
* Bug #63411: qa: flush journal may cause timeouts of `scrub status`
* Bug #63425: tasks.cephadm: ceph.log No such file or directory
* Bug #63433: devicehealth: sqlite3.IntegrityError: UNIQUE constraint failed: DeviceHealthMetrics.time, DeviceHealthMetrics.devid
* Bug #63436: Typo in reshard example
* Feature #63468: mds/purgequeue: add l_pq_executed_ops counter
* Bug #63469: mgr/dashboard: fix rgw multi-site import form helper
* Bug #63482: qa: fs/nfs suite needs debug mds/client
* Bug #63488: smoke test fails from "NameError: name 'DEBUGFS_META_DIR' is not defined"
* Bug #63514: mds: avoid sending inode/stray counters as part of health warning for standby-replay
* Bug #63516: mds may try new batch head that is killed
* Bug #63520: the usage of osd_pg_stat_report_interval_max is not uniform
* Bug #63538: mds: src/mds/Locker.cc: 2357: FAILED ceph_assert(!cap->is_new())
* Bug #63541: Observing client.admin crash in thread_name 'rados' on executing 'rados clearomap..'
* Bug #63557: NVMe-oF gateway prometheus endpoints
* Bug #63561: cephadm: build time install of dependencies fails on build systems that disable network
* Feature #63574: support setting quota in the format of {K|M}iB along with the K|M, {K|M}i
* Bug #63587: Test failure: test_filesystem_sync_stuck_for_around_5s (tasks.cephfs.test_misc.TestMisc)
* Bug #63608: mgr/dashboard: cephfs rename only works when fs is offline
* Bug #63609: osd acquire map_cache_lock high latency
* Bug #63614: cephfs-mirror: the peer list/snapshot mirror status always displays only one mon host instead of all
* Bug #63615: mgr: consider raising priority of MMgrBeacon
* Bug #63619: client: check for negative value of iovcnt before passing it to internal functions during async I/O
* Bug #63629: client: handle context completion during async I/O call when the client is not mounting
* Bug #63632: client: fh obtained using O_PATH can stall the caller during async I/O
* Bug #63633: client: fix copying bufferlist to iovec structures in Client::_read
* Bug #63646: mds: incorrectly issued the Fc caps in LOCK_EXCL_XSYN state for filelock
* Bug #63648: client: ensure callback is finished if write fails during async I/O
* Bug #63658: OSD trim_maps - possibly too slow, leading to using too much storage space
* Feature #63667: client,libcephfs,cephfs.pyx: add quiesce API
* Bug #63672: qa: nothing provides lua-devel needed by ceph-2:18.0.0-7530.g67eb1cb4.el8.x86_64
* Bug #63679: client: handle zero byte sync/async write cases
* Bug #63680: qa/cephfs: improvements for name generators in test_volumes.py
* Bug #63685: mds: FAILED ceph_assert(_head.empty())
* Bug #63699: qa: failed cephfs-shell test_reading_conf
* Bug #63710: client.5394 isn't responding to mclientcaps(revoke), ino 0x10000000001 pending pAsLsXs issued pAsLsXsFs, sent 30723.964282 seconds ago
* Bug #63713: mds: encode `bal_rank_mask` with a higher version
* Bug #63722: cephfs/fuse: renameat2 with flags has wrong semantics
* Bug #63727: LogClient: do not output meaningless logs by default
* Bug #63734: client: handle callback when async io fails
* Bug #63740: rgwlc: lock_lambda overwrites ret val
* Bug #63765: change priority of mds rss perf counter to useful
* Bug #63836: OSD:oldest_map/newest_map should be displayed directly in the ceph daemon osd.x status command
* Bug #63882: pybind/mgr/devicehealth: "rados.ObjectNotFound: [errno 2] RADOS object not found (Failed to operate read op for oid $dev"
* Bug #63907: cephfs-mirror: Mirror::update_fs_mirrors crashes while taking lock
* Feature #63945: cephfs_mirror: add perf counters (w/ label) support
* Bug #63952: Boost download URL broken, affecting Windows build
* Documentation #63991: cephfs-top: document all metric fields
* Bug #64012: qa: Command failed qa/workunits/fs/full/subvolume_clone.sh
* Bug #64020: cephadm is not accounting for the memory required when nvme gateways are used
* Bug #64042: mgr/snap_schedule: Adding retention which already exists gives improper error message
* Bug #64058: qa: Command failed (workunit test fs/snaps/untar_snap_rm.sh)
* Bug #64061: mds: check the layout in Server::handle_client_mknod
* Bug #64085: qa: stop testing on rhel8
* Bug #64095: ceph-exporter is not included in the deb packages
* Bug #64113: ceph fails to build with Python 3.13: error: there are no arguments to ‘PyEval_CallMethod’
* Bug #64124: different users in the same tenant creating a topic with the same name causes the rgw topic to be overwritten
* Bug #64127: mds: passing multiple caps to "fs authorize" cmd causes MON to crash
* Bug #64172: Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
* Bug #64174: fs/cephadm/renamevolume: volume rename failure
* Bug #64179: client: check for caps issued before incrementing cap ref
* Bug #64191: neorados: compiler warnings in main
* Bug #64209: snaptest-multiple-capsnaps.sh fails with "got remote process result: 1"
* Bug #64236: mon: health store size growing infinitely
* Bug #64290: mds: erroneous "MDS abort because newly corrupt dentry to be committed" because snapclient is not yet synced with snapserver
* Bug #64313: client: do not proceed with I/O if filehandle is invalid
* Bug #64319: OSD does not move itself to crush_location on start, root=default is not applied
* Feature #64322: rgw: enhance radoslist to allow object & object version to be specified
* Feature #64334: The nvmeof gateway has an embedded prometheus exporter that should be scraped by Ceph's Prometheus instance
* Feature #64335: Add alerts to ceph monitoring stack for the nvmeof gateways
* Feature #64387: mds: add per-client perf counters (w/ label) support
* Bug #64440: mds: reversed encoding of MDSMap max_xattr_size/bal_rank_mask v18.2.1 <-> main
* Bug #64456: Missing entries for hardware alerts from the MIB file
* Bug #64479: Memory leak detected when accessing a CephFS volume from Samba using libcephfs
* Bug #64482: ceph: stderr Error: OCI runtime error: crun: bpf create ``: Function not implemented
* Backport #64501: squid: multisite: Deadlock in RGWDeleteMultiObj with default rgw_multi_obj_del_max_aio > 1
* Bug #64548: ceph-base: /var/lib/ceph/crash/posted not chowned to ceph:ceph causing ceph-crash to fail
* Feature #64578: Add a top tool to the nvmeof CLI to support troubleshooting
* Backport #64601: squid: unittest_rgw_dmclock_scheduler fails for arm64
* Backport #64663: squid: crimson: unittest-seatar-socket failing intermittently
* Bug #64719: SSL session id reuse speedup mechanism of the SSL_CTX_set_session_id_context is not working