# v17.0.0 Quincy

* Feature #1276: client: expose mds partition via virtual xattrs
* Bug #19242: Ownership of /var/run/ceph not set with sysv-init under Jewel
* Bug #24744: rgw: index wrongly deleted when put raced with list
* Bug #25070: lvm activate --all uses systemctl although --no-systemd option is set
* Bug #36453: mgr/dashboard: Some REST endpoints grow linearly with OSD count
* Feature #39478: mgr/dashboard: new RGW workflows & RGW enhancements
* Feature #40609: libcephsqlite: library for sqlite interface to ceph
* Bug #41327: mds: dirty rstat lost during scatter-gather process
* Bug #42516: mds: some mutations have initiated (TrackedOp) set to 0
* Bug #43216: MDSMonitor: removes MDS coming out of quorum election
* Bug #43748: client: improve wanted handling so we don't request unused caps (active-standby exclusive file lock case)
* Bug #44384: qa: FAIL: test_evicted_caps (tasks.cephfs.test_client_recovery.TestClientRecovery)
* Bug #44988: client: track dirty inodes in a per-session list for effective cap flushing
* Bug #45145: qa/test_full: failed to open 'large_file_a': No space left on device
* Bug #45320: client: Other UID don't write permission when the file is marked with SUID or SGID
* Bug #45434: qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* Bug #46075: ceph-fuse: mount -a on already mounted folder should be ignored
* Feature #46166: mds: store symlink target as xattr in data pool inode for disaster recovery
* Feature #46280: mgr/dashboard: Enable animations on navigation.component.html
* Feature #46493: mgr/dashboard: integrate Dashboard with mgr/nfs module interface
* Bug #46558: cephadm: paths attribute ignored for db_devices/wal_devices via OSD spec
* Feature #46865: client: add metric for number of pinned capabilities
* Feature #46866: kceph: add metric for number of pinned capabilities
* Bug #46902: mds: CInode::maybe_export_pin is broken
* Bug #47172: mgr/nfs: Add support for RGW export
* Bug #47276: MDSMonitor: add command to rename file systems
* Cleanup #47355: mgr/dashboard: create directive for AuthStorage service
* Feature #47490: Integration of dashboard with volume/nfs module
* Bug #47537: Prometheus rbd metrics absent by default
* Feature #47711: mgr/cephadm: add a feature to examine the host facts to look for configuration/compliance problems
* Feature #47718: introduce means to detect/work around spurious read errors in bluefs
* Bug #47843: mds: stuck in resolve when restarting MDS and reducing max_mds
* Fix #47931: Directory quota optimization
* Bug #47979: qa: test_ephemeral_pin_distribution failure
* Cleanup #48005: mgr/dashboard: fix frontend deps' vulnerabilities
* Documentation #48017: snap-schedule doc
* Fix #48027: qa: add cephadm tests for CephFS in QA
* Bug #48125: qa: test_subvolume_snapshot_clone_cancel_in_progress failure
* Bug #48231: qa: test_subvolume_clone_in_progress_snapshot_rm is racy
* Bug #48365: qa: ffsb build failure on CentOS 8.2
* Feature #48404: client: add a ceph.caps vxattr
* Bug #48411: tasks.cephfs.test_volumes.TestSubvolumeGroups: RuntimeError: rank all failed to reach desired subtree state
* Bug #48422: mds: MDCache.cc:5319 FAILED ceph_assert(rejoin_ack_gather.count(mds->get_nodeid()))
* Bug #48439: fsstress failure with mds thrashing: "mds.0.6 Evicting (and blocklisting) client session 4564 (v1:172.21.15.47:0/603539598)"
* Bug #48473: fs perf stats command crashes
* Bug #48559: qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
* Bug #48640: qa: snapshot mismatch during mds thrashing
* Bug #48679: client: items pinned in cache preventing unmount
* Feature #48682: MDSMonitor: add command to print fs flags
* Fix #48683: mds/MDSMap: print each flag value in MDSMap::dump
* Bug #48700: client: Client::rmdir() may fail to remove a snapshot
* Bug #48716: aws-s3 incompatibility related metadata
* Bug #48722: There is a bug in "GetBucketLocation" API when "Bucket" does not exist.
* Feature #48736: qa: enable debug loglevel kclient test suites
* Bug #48760: qa: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
* Bug #48766: qa: Test failure: test_evict_client (tasks.cephfs.test_volume_client.TestVolumeClient)
* Feature #48791: mds: support file block size
* Fix #48802: mds: define CephFS errors that replace standard errno values
* Bug #48805: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* Bug #48830: pacific: qa: :ERROR: test_idempotency
* Bug #48854: mistaken deletion by non S3 api
* Bug #48877: qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
* Bug #48912: ls -l in cephfs-shell tries to chase symlinks when stat'ing and errors out inappropriately when stat fails
* Documentation #48914: mgr/nfs: Update about user config
* Feature #48943: cephfs-mirror: display cephfs mirror instances in `ceph status` command
* Bug #48947: cephadm: fix rgw osd cap tag
* Bug #48973: mgr/dashboard: dashboard hangs when accessing it
* Feature #48991: client: allow looking up snapped inodes by inode number+snapid tuple
* Bug #49020: rados subcommand rmomapkey does not report error when key provided not found
* Feature #49040: cephfs-mirror: test mirror daemon with valgrind
* Feature #49049: mgr/prometheus: Update ceph_pool_* metrics to include additional labels
* Feature #49063: rgw: tooling to locate rgw objects with missing rados components
* Bug #49074: mds: don't start purging inodes in the middle of recovery
* Bug #49088: cannot set --id/--name (and others) arguments via environment variable CEPH_ARGS
* Bug #49102: Windows RBD service issues
* Bug #49121: vstart: volumes/nfs interface complains cluster does not exist
* Bug #49122: vstart: Rados url error
* Bug #49123: mgr/dashboard: Error updating cephfs exports
* Bug #49126: rook: 'ceph orch ls' throws type error
* Feature #49127: rook: Add support for service restart
* Bug #49132: mds crashed "assert_condition": "state == LOCK_XLOCK || state == LOCK_XLOCKDONE",
* Bug #49133: mgr/nfs: Rook does not support restart of services, handle the NotImplementedError exception raised
* Feature #49184: rgw: allow rgw-orphan-list to handle intermediate files w/ binary data
* Fix #49188: mds: speed up the process of looking up a single dentry, when the key is not in mds cache
* Bug #49200: mgr/dashboard: browser freezes when trying to execute /api/cluster_conf from openAPI docs
* Cleanup #49216: mgr/dashboard: delete EOF when reading passwords from file
* Cleanup #49236: mgr/dashboard: avoid data processing in crush-map component
* Bug #49240: terminate called after throwing an instance of 'std::bad_alloc'
* Cleanup #49243: mgr/dashboard: set XFrame options and Content Security Policy headers
* Bug #49255: src/mgr/DaemonServer.cc: FAILED ceph_assert(pending_service_map.epoch > service_map.epoch)
* Feature #49262: mgr/dashboard: provide the service events when showing a service in the UI
* Feature #49283: mgr/dashboard: report fsid in cluster configuration
* Bug #49286: fix setting selinux context on file with r/o permissions
* Cleanup #49291: mgr/dashboard: fix MTU Mismatch alert
* Bug #49292: mgr/dashboard: fix PUT - /api/host/{hostname} while adding labels
* Bug #49301: mon/MonCap: `fs authorize` generates unparseable cap for file system name containing '-'
* Bug #49307: nautilus: qa: "RuntimeError: expected fetching path of an pending clone to fail"
* Bug #49308: nautilus: qa: "AssertionError: expected removing source snapshot of a clone to fail"
* Bug #49309: nautilus: qa: "Assertion `cb_done' failed."
* Feature #49312: mgr/dashboard: screen capture API to share/export grafana dashboards as images
* Bug #49318: qa: racy session evicted check
* Bug #49329: Minor Windows issues
* Bug #49331: mgr/dashboard: E2E Failure: Pools page Create, update and destroy should edit a pools placement group: "Timed out retrying: Expected to find content: '32 active+clean'"
* Backport #49337: octopus: orchestrator/01-hosts.e2e-spec.ts failed in test_dashboard_e2e.sh
* Backport #49338: pacific: orchestrator/01-hosts.e2e-spec.ts failed in test_dashboard_e2e.sh
* Fix #49341: qa: add async dirops testing
* Bug #49342: mgr/dashboard: alert notification shows 'undefined' instead of alert message
* Bug #49344: mgr/dashboard: 'Test failure: test_pwd_expiration_date_update (tasks.mgr.dashboard.test_user.UserTest)'
* Bug #49349: mgr/telemetry: check if 'ident' channel is active before compiling reports
* Bug #49354: mgr/dashboard: Device health status is not getting listed under hosts section
* Bug #49355: rbd_support: should bail out if snapshot mirroring is not enabled
* Bug #49359: osd: warning: unused variable
* Cleanup #49363: mgr/dashboard: remove settings.py module and refactor MODULE_OPTIONS
* Bug #49365: octopus: qa: "Cannot write to 'pjd-fstest-20090130-RC-aclfixes.tgz' (Invalid argument)."
* Documentation #49372: doc: broken links multimds and kcephfs
* Bug #49379: client: wake up the front pos waiter
* Bug #49391: qa: run fs:verify with tcmalloc
* Bug #49395: ceph-test rpm missing gtest dependencies
* Bug #49396: qa: "nothing provides libneoradostest-support.so()(64bit) needed by ceph-test-2:17.0.0-946.g1d4dd247.el8.x86_64"
* Feature #49407: Enable the ability of cephadm to trigger libstoragemgmt info from ceph-volume inventory
* Bug #49419: cephfs-mirror: dangling pointer in PeerReplayer
* Bug #49424: octopus: "1 pools have both target_size_bytes and target_size_ratio set" in clog
* Bug #49443: mgr/prometheus doesn't expose used_bytes per pool
* Bug #49458: qa: switch fs:upgrade from nautilus to octopus
* Bug #49459: pybind/cephfs: DT_REG and DT_LNK values are wrong
* Bug #49464: qa: rank_freeze prevents failover on some tests
* Bug #49466: qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* Bug #49468: rados: "Command crashed: 'rados -p cephfs_metadata rmxattr 10000000000.00000000 parent'"
* Bug #49469: qa: "AssertionError: expected removing source snapshot of a clone to fail"
* Bug #49476: DaemonServer.cc: 2827: FAILED ceph_assert(pending_service_map.epoch > service_map.epoch)
* Bug #49491: mgr/dashboard: Spanish translation 'Scrub' doesn't seem to be correct
* Bug #49498: qa: "TypeError: update_attrs() got an unexpected keyword argument 'createfs'"
* Bug #49500: qa: "Assertion `cb_done' failed."
* Bug #49507: qa: mds removed because trimming for too long with valgrind
* Bug #49510: qa: file system deletion not complete because starter fs already destroyed
* Bug #49511: qa: "AttributeError: 'NoneType' object has no attribute 'mon_manager'"
* Bug #49536: client: Inode.cc: 405: FAILED ceph_assert(_ref >= 0)
* Bug #49541: rgw: object lock: improve client error messages
* Bug #49559: libcephfs: test termination "what(): Too many open files"
* Bug #49574: mgr/dashboard: ERROR: test_a_set_login_credentials (tasks.mgr.dashboard.test_auth.AuthTest)
* Bug #49576: mgr/balancer: KeyError messages in balancer module
* Bug #49580: mgr/dashboard: replace 'telemetry_notification_hidden' localStore to 1-year expiring cookie
* Cleanup #49586: mgr/dashboard: perform matrix distro-release testing for tox tests
* Bug #49605: pybind/mgr/volumes: deadlock on async job hangs finisher thread
* Cleanup #49606: mgr/dashboard: improve telemetry opt-in reminder notification message
* Bug #49607: qa: slow metadata ops during scrubbing
* Bug #49617: mds: race of fetching large dirfrag
* Feature #49619: cephfs-mirror: add mirror peers via bootstrapping
* Bug #49621: qa: ERROR: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)
* Bug #49622: cephadm orchestrator allows deleting hosts with ceph daemons running
* Feature #49623: Windows CephFS support - ceph-dokan
* Bug #49627: mgr: fix dump duplicate info for ceph osd df
* Bug #49645: mgr/dashboard: Remove username, password fields from -Cluster/Manager Modules/dashboard,influx
* Bug #49655: mgr/dashboard: error notification shown when no rgw daemons running.
* Bug #49662: ceph-dokan improvements for additional mounts
* Bug #49684: qa: fs:cephadm mount does not wait for mds to be created
* Bug #49693: Manager daemon is unresponsive, replacing it with standby daemon
* Feature #49709: mgr/dashboard: manage RGW multisite sync policy
* Bug #49711: cephfs-mirror: symbolic links do not get synchronized at times
* Bug #49719: mon/MDSMonitor: standby-replay daemons should be removed when the flag is turned off
* Bug #49720: mon/MDSMonitor: do not pointlessly kill standbys that are incompatible with current CompatSet
* Bug #49725: client: crashed in cct->_conf.get_val() in Client::start_tick_thread()
* Bug #49736: cephfs-top: missing keys in the client_metadata
* Documentation #49763: doc: Document mds cap acquisition readdir throttle
* Documentation #49801: Reorganize Windows documentation
* Bug #49803: mgr/dashboard: validate/fix behaviour of JWT cookie after expiration
* Feature #49811: mds: collect I/O sizes from client for cephfs-top
* Bug #49822: test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* Cleanup #49829: mgr/dashboard: improve 'cluster-manager' role description
* Bug #49833: MDS should return -ENODATA when asked to remove xattr that doesn't exist
* Bug #49837: mgr/pybind/snap_schedule: do not fail when no fs snapshots are available
* Bug #49842: qa: stuck pkg install
* Bug #49843: qa: fs/snaps/snaptest-upchildrealms.sh failure
* Bug #49845: qa: failed umount in test_volumes
* Bug #49869: mgr/dashboard: feature toggles CLI is broken
* Bug #49873: ceph_lremovexattr does not return error on file in ceph pacific
* Bug #49882: mgr/volumes: setuid and setgid file bits are not retained after a subvolume snapshot restore
* Bug #49883: librados: hang in RadosClient::wait_for_osdmap
* Bug #49885: mgr/dashboard: Bucket creation fails when selecting locking with certain values
* Bug #49892: rgw_orphan_list.sh causing a crash in the OSD
* Bug #49897: mgr/dashboard: Unable to login to ceph dashboard until clearing cookies
* Bug #49898: qa: daemonwatchdog fails if mounts not defined
* Bug #49899: ubuntu ceph-mgr package does not pick up libsqlite3-mod-ceph automatically
* Fix #49901: SimpleRADOSStriper: limit parallelism of deletes
* Bug #49912: client: dir->dentries inconsistent, both newname and oldname point to same inode, mv complains "are the same file"
* Feature #49913: kclient: collect I/O sizes from client for cephfs-top
* Documentation #49921: mgr/nfs: Update about cephadm single nfs-ganesha daemon per host limitation
* Bug #49922: MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
* Bug #49925: mgr/dashboard: adapt Dashboard to work with NFSv4
* Bug #49928: client: items pinned in cache preventing unmount x2
* Bug #49936: ceph-fuse: src/include/buffer.h: 1187: FAILED ceph_assert(_num <= 1024)
* Bug #49939: cephfs-mirror: be resilient to recreated snapshot during synchronization
* Feature #49942: cephfs-mirror: enable running in HA
* Cleanup #49943: mgr/dashboard: mute i18n output
* Feature #49946: mgr/dashboard: harden API testing for mgr API
* Bug #49948: cephsqlite: xCurrentTimeInt64 binding uses incorrect julian day offset units
* Bug #49954: cephadm is not persisting the grafana.db file, so any local customizations are lost
* Bug #49972: mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
* Bug #49974: cephfs-top: fails with exception "OPENED_FILES"
* Bug #50005: cephfs-top: flake8 E501 line too long error
* Bug #50007: Nothing provides sqlite-libs needed by libcephsqlite
* Documentation #50008: mgr/nfs: Add troubleshooting section
* Bug #50010: qa/cephfs: get_key_from_keyfile() returns None when key is not found in keyfile
* Bug #50020: qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* Bug #50021: qa: snaptest-git-ceph failure during mon thrashing
* Bug #50033: mgr/stats: be resilient to offline MDS rank-0
* Bug #50035: cephfs-mirror: use sensible mount/shutdown timeouts
* Fix #50045: qa: test standby_replay in workloads
* Bug #50048: mds: standby-replay only trims cache when it reaches the end of the replay log
* Bug #50057: client: opened inodes counter is inconsistent
* Bug #50060: client: access(path, X_OK) on non-executable file as root always succeeds
* Cleanup #50080: mgr/nfs: move nfs code out of volumes plugin
* Bug #50090: client: only check pool permissions for regular files
* Bug #50091: cephfs-top: exception: addwstr() returned ERR
* Bug #50109: ceph-volume can't deactivate all
* Bug #50112: MDS stuck at stopping when reducing max_mds
* Cleanup #50149: client: always register callbacks before mount()
* Documentation #50161: mgr/nfs: validation error on creating custom export
* Bug #50174: mgr/dashboard: Read-only user can see registry password
* Bug #50177: osd: "stalled aio... buggy kernel or bad device?"
* Bug #50178: qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* Bug #50194: librgw: make rgw file handle versioned
* Bug #50215: qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* Bug #50216: qa: "ls: cannot access 'lost+found': No such file or directory"
* Bug #50224: qa: test_mirroring_init_failure_with_recovery failure
* Feature #50226: mgr/dashboard: Update the embedded grafana dashboard to show compression stats by pool
* Documentation #50229: cephfs-mirror: update docs with `fs snapshot mirror daemon status` interface
* Feature #50235: allow cephfs-shell to mount named filesystems
* Bug #50246: mds: failure replaying journal (EMetaBlob)
* Bug #50266: "ceph fs snapshot mirror daemon status" should not use json keys as value
* Cleanup #50268: mgr/dashboard: evaluate upgrade to Angular 11
* Feature #50278: mgr/pybind: add support for common sqlite3 databases
* Bug #50280: cephadm: RuntimeError: uid/gid not found
* Bug #50281: qa: untar_snap_rm timeout
* Bug #50293: rgw: radoslist incomplete multipart parts marker
* Bug #50298: libcephfs: support file descriptor based *at() APIs
* Bug #50304: pybind/mgr/devicehealth: scrape-health-metrics command accidentally renamed to scrape-daemon-health-metrics
* Bug #50305: MDS doesn't set fscrypt flag on new inodes with crypto context in xattr buffer
* Bug #50307: SimpleRADOSStriper: use debug_cephsqlite for SimpleRadosStriper dout
* Feature #50312: mgr/dashboard: create OSD directly from device in Inventory
* Cleanup #50313: mgr/dashboard: Do not rely on /dev/sdx
* Cleanup #50315: mgr/dashboard: refactor Configuration table
* Cleanup #50316: mgr/dashboard: Edit EC profile: hide plugin lib directory
* Feature #50317: mgr/dashboard: Basic/Advanced mode
* Bug #50319: mgr/dashboard: fix HAProxy (now called ingress)
* Feature #50320: mgr/dashboard: Lean Dashboard
* Feature #50321: mgr/dashboard: cephadm service spec schemas
* Feature #50322: mgr/dashboard: add restart/reload daemons
* Cleanup #50323: mgr/dashboard: refactor service table
* Feature #50324: mgr/dashboard: RGW server-side encryption
* Feature #50325: mgr/dashboard: add bucket notifications
* Tasks #50326: mgr/dashboard: Policies
* Feature #50327: mgr/dashboard: add/edit lifecycle policy
* Tasks #50328: mgr/dashboard: RGW
* Cleanup #50329: mgr/dashboard: provide URL to RGW daemon
* Cleanup #50330: mgr/dashboard: rearrange RGW realms, zones, etc.
* Feature #50331: mgr/dashboard: CephFS scheduled snapshots
* Feature #50332: mgr/dashboard: CephFS volumes, subvolume and subvolume groups
* Feature #50333: mgr/dashboard: CephFS mirroring
* Feature #50334: mgr/dashboard: cephfs-top and fsstats module
* Tasks #50335: mgr/dashboard: Workflows
* Tasks #50340: mgr/dashboard: CephFS
* Cleanup #50341: mgr/dashboard: typo: "Filesystems" to "File Systems"
* Bug #50342: test: compile errors
* Bug #50347: systemd: `ceph-osd@.service` Failed with `ProtectClock=true`
* Feature #50360: Configure the IP address for Ganesha
* Cleanup #50363: rgw: during reshard lock contention, adjust logging
* Feature #50372: test: Implement cephfs-mirror thrasher test for HA active/active
* Bug #50374: ERROR: test_version (tasks.mgr.dashboard.test_api.VersionReqTest) mgr/dashboard: short_description
* Bug #50378: DecayCounter: Expected: (std::abs(total-expected)/expected) < (0.01), actual: 0.0166296 vs 0.01
* Bug #50389: mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log
* Bug #50390: mds: monclient: wait_auth_rotating timed out after 30
* Bug #50410: Remove erroneous elements in hosts-overview Grafana dashboard
* Bug #50432: rgw: allow rgw-orphan-list to process multiple data pools
* Bug #50433: mds: Error ENOSYS: mds.a started profiler
* Bug #50440: mgr/dashboard: duplicated NFS export rows on orchestrator-managed NFS clusters.
* Bug #50442: cephfs-mirror: ignore snapshots on parent directories when synchronizing snapshots
* Bug #50447: cephfs-mirror: disallow adding an active peered file system back to its source
* Feature #50470: cephfs-top: multiple file system support
* Bug #50472: orchestrator doesn't provide a way to remove an entire cluster
* Fix #50484: mgr/dashboard: set required env. variables in run-backend-api-tests.sh
* Feature #50486: mgr/dashboard: expose CLI-like interface (tool-box)
* Feature #50487: mgr: expose radosgw-admin from ceph CLI
* Feature #50489: mgr/dashboard: export Dashboard usage analytics to Telemetry
* Tasks #50490: mgr/dashboard: native dependencies
* Bug #50491: mgr/dashboard: centralized logging
* Feature #50492: mgr/dashboard: add support for Ceph auth: cephx, user/caps management
* Feature #50493: mgr/dashboard: alert customization
* Feature #50494: mgr/dashboard: support cephfs-shell
* Bug #50496: mgr/dashboard: manage NFS exports through NFS module.
* Bug #50503: tasks/libcephsqlite throws "std::out_of_range"
* Bug #50514: mgr/dashboard: RGW buckets async validator slow performance
* Cleanup #50515: mgr/dashboard: generate manifest.txt file for npm dependencies
* Bug #50516: mgr/dashboard: bucket name constraints
* Bug #50519: "ceph dashboard set-ssl-certificate{,-key} -i" is trying to decode an already decoded string.
* Bug #50520: slow radosgw-admin startup when large value of rgw_gc_max_objs configured
* Bug #50523: Mirroring path "remove" does not seem to work
* Bug #50530: pacific: client: abort after MDS blocklist
* Bug #50532: mgr/volumes: hang when removing subvolume when pools are full
* Cleanup #50540: Change sso handle command with CLICommand
* Bug #50545: mgr/dashboard: fix bucket versioning when locking is enabled
* Feature #50557: mgr/dashboard/mon: display election_strategy on monitors page
* Bug #50559: session dump includes completed_requests twice, once as an integer and once as a list
* Bug #50561: cephfs-mirror: incrementally transfer snapshots whenever possible
* Tasks #50564: mgr/dashboard: Add a welcome page for the Create Cluster Workflow
* Tasks #50565: mgr/dashboard: Add host section for the Create Cluster Workflow
* Tasks #50566: mgr/dashboard: Review Section for the Create Cluster Workflow
* Bug #50567: mgr/dashboard: add Services e2e tests
* Bug #50568: mgr/dashboard: ingress service creation follow-up
* Bug #50580: mgr/dashboard: OSDs placement text is unreadable
* Feature #50581: cephfs-mirror: allow mirror daemon to connect to local/primary cluster via monitor address
* Backport #50584: pacific: Dashboard grafana panel broken if hostname similar to each other
* Backport #50586: octopus: Dashboard grafana panel broken if hostname similar to each other
* Bug #50588: mgr/Dashboard: right Navigation should work on click when page width is less than 768 px
* Bug #50591: mgr/progress: progress can be negative
* Bug #50620: rgw: radosgw-admin bucket rm --bucket --purge-objects
* Bug #50621: rgw: fix bucket object listing when initial marker matches prefix
* Bug #50622: msg: active_connections regression
* Bug #50665: UnboundLocalError: local variable 'tags' referenced before assignment
* Bug #50676: mgr/dashboard: Grafana dashboards not working with NVMe
* Bug #50684: mgr/dashboard: fix base-href: revert it to previous approach
* Bug #50686: mgr/dashboard: Physical Device Performance grafana graphs for OSDs do not display
* Bug #50707: mds: 32bit compilation fixes for PurgeQueue
* Bug #50740: crash: PyDict_SetItem()
* Bug #50743: *: crash in pthread_getname_np
* Bug #50744: mds: journal recovery thread is possibly asserting with mds_lock not locked
* Bug #50762: mgr/dashboard: Modify the Validator logic to return error messages along with the error
* Bug #50783: mgr/nfs: cli is broken as cluster id and binding arguments are optional
* Cleanup #50799: mgr/dashboard: misleading OSD IO line charts when OSD down
* Cleanup #50800: mgr/dashboard: upgrade frontend deps due to security vulnerabilities
* Bug #50807: mds: MDSLog::journaler pointer maybe crash with use-after-free
* Bug #50808: qa: test_data_scan.TestDataScan.test_pg_files AssertionError: Items in the second set but not the first:
* Bug #50814: mds cpu_profiler asok_command crashes
* Cleanup #50816: mgr/nfs: add nfs to mypy
* Bug #50819: mon,doc: deprecate min_compat_client
* Bug #50822: qa: testing kernel patch for client metrics causes mds abort
* Bug #50824: qa: snaptest-git-ceph bus error
* Cleanup #50827: pybind/mgr/CMakeLists.txt: exclude files not used at runtime
* Bug #50834: MDS heartbeat timed out during executing MDCache::start_files_to_recover()
* Bug #50840: mds: CephFS kclient gets stuck when getattr() on a certain file
* Bug #50852: mds: remove fs_name stored in MDSRank
* Bug #50853: libcephsqlite: Core dump while running test_libcephsqlite.sh.
* Bug #50855: mgr/dashboard: API Version changing doesn't affect pre-defined methods
* Bug #50858: mgr/nfs: skipping conf file or passing empty file throws traceback
* Documentation #50865: doc: move mds state diagram .dot into rst
* Bug #50867: qa: fs:mirror: reduced data availability
* Bug #50870: qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"
* Bug #50880: Supports specifying an osd_id that does not exist and is not destroyed.
* Documentation #50904: mgr/nfs: add nfs-ganesha config hierarchy
* Bug #50909: mgr/dashboard: create NFS export > RGW tenanted user id shown without tenant prefix
* Bug #50918: mgr/dashboard: 'grafana dashboards update' command fails with 'Dashboard not found'
* Bug #50946: mgr/stats: exception ValueError in perf stats
* Feature #50972: mgr/nfs: implement 'nfs cluster update'
* Bug #50976: mds: scrub error on inode 0x1
* Feature #50980: mgr/dashboard: advanced cluster visualization
* Bug #50984: qa: test_full multiplies the mon_osd_full_ratio twice
* Feature #51020: telemetry activate: only show ident fields when ident is checked
* Bug #51023: mds: tcmalloc::allocate_full_cpp_throw_oom(unsigned long)+0xf3)
* Bug #51028: device zap doesn't perform any checks
* Feature #51050: mgr/dashboard: support mclock profiles
* Bug #51060: qa: test_ephemeral_pin_distribution failure
* Bug #51062: mds,client: support getvxattr RPC
* Bug #51066: mgr/dashboard: fix rgw-bucket async validation
* Bug #51067: mds: segfault printing unknown metric
* Bug #51069: mds: mkdir on ephemerally pinned directory sometimes blocked on journal flush
* Bug #51073: prometheus config uses a path_prefix causing alert forwarding to fail
* Bug #51077: MDSMonitor: crash when attempting to mount cephfs
* Bug #51078: rgw: completion of multipart upload leaves delete marker
* Fix #51082: qa: update RHEL to 8.4
* Bug #51113: mds: unknown metric type is always -1
* Feature #51118: mgr/dashboard: display cluster logs' cephadm channel
* Bug #51119: mgr/dashboard: table hiding selections change automatically
* Bug #51145: pybind/cmd_argparse: does not process vector of choices
* Bug #51146: qa: scrub code does not join scrubopts with comma
* Bug #51154: mgr/dashboard: stats=false not working when listing buckets
* Feature #51161: mgr/telemetry: pass leaderboard flag even w/o ident
* Feature #51162: mgr/volumes: `fs volume rename` command
* Cleanup #51164: mgr/dashboard: bucket details: show lock retention period only in days
* Bug #51165: mgr/telegraf: telegraf plugin not starting and causing mgr process to crash
* Bug #51181: Add systemd-udev require
* Bug #51182: pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* Bug #51183: qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* Bug #51184: qa: fs:bugs does not specify distro
* Documentation #51187: doc: pacific updates
* Bug #51204: cephfs-mirror: false warning of "keyring not found" seen in cephfs-mirror service status is misleading
* Documentation #51214: config manage_etc_ceph_ceph_conf_hosts is typod
* Bug #51226: qa: import CommandFailedError from teuthology.exceptions
* Bug #51249: rgw: when an already-deleted object is removed in a versioned bucket, an unneeded delete marker is created
* Bug #51250: qa: fs:upgrade uses teuthology default distro
* Bug #51253: rgw: add function entry logging to make more thorough and consistent
* Bug #51256: pybind/mgr/volumes: purge queue seems to block operating on cephfs connection required by dispatch thread
* Bug #51271: mgr/volumes: use a dedicated libcephfs handle for subvolume API calls
* Bug #51276: mds: avoid journaling overhead for setxattr("ceph.dir.subvolume") for no-op case
* Bug #51280: mds: "FAILED ceph_assert(r == 0 || r == -2)"
* Feature #51302: mgr/cephadm: automatically configure dashboard <-> RGW connection
* Feature #51303: mgr: radosgw: include realm_{id,name} in service map
* Bug #51317: Objects not synced if reshard is done while sync is happening in Multisite
* Bug #51318: cephfs-mirror: do not terminate on SIGHUP
* Cleanup #51319: deps: upgrade to Python 3.8
* Feature #51333: qa: use cephadm to provision cephfs for fs:workloads
* Feature #51340: mon/MDSMonitor: allow creating a file system with a specific fscid
* Bug #51344: vstart_runner: log level gets set to INFO when --debug and --clear-old-log are passed
* Bug #51357: osd: sent kickoff request to MDS and then stuck for 15 minutes until MDS crash
* Bug #51369: vstart_runner: use FileNotFoundError instead of OSError
* Bug #51372: pacific: libcephsqlite: segmentation fault
* Bug #51375: common/options: min value parsing does not work for millisecond option
* Feature #51378: mgr/dashboard: Add the ability to filter using labels in osd creation form
* Cleanup #51380: mgr/volumes/module.py: fix various flake8 issues
* Cleanup #51381: mgr/volumes/fs/async_job.py: fix various flake8 issues
* Cleanup #51382: mgr/volumes/fs/async_cloner.py: fix various flake8 issues
* Cleanup #51384: mgr/volumes/fs/vol_spec.py: fix various flake8 issues
* Cleanup #51385: mgr/volumes/fs/fs_util.py: add extra blank line
* Cleanup #51387: mgr/volumes/fs/purge_queue.py: add extra blank line
* Cleanup #51390: mgr/volumes/fs/operations/access.py: fix various flake8 issues
* Cleanup #51391: mgr/volumes/fs/operations/resolver.py: add extra blank line
* Cleanup #51392: mgr/volumes/fs/operations/snapshot_util.py: add extra blank line
* Cleanup #51393: mgr/volumes/fs/operations/group.py: add extra blank line
* Cleanup #51396: mgr/volumes/fs/operations/clone_index.py: fix various flake8 issues
* Cleanup #51398: mgr/volumes/fs/operations/subvolume.py: fix various flake8 issues
* Cleanup #51400: mgr/volumes/fs/operations/trash.py: fix various flake8 issues
* Cleanup #51402: mgr/volumes/fs/operations/versions/subvolume_base.py: fix various flake8 issues
* Cleanup #51403: mgr/volumes/fs/operations/versions/auth_metadata.py: fix various flake8 issues
* Cleanup #51406: mgr/volumes/fs/operations/versions/op_sm.py: fix various flake8 issues
* Cleanup #51407: mgr/volumes/fs/operations/versions/subvolume_attrs.py: fix various flake8 issues
* Feature #51408: mgr/dashboard: Add configurable MOTD or wall notification
* Bug #51417: qa: test_ls_H_prints_human_readable_file_size failure
* Bug #51420: radosgw-admin core dumps on "bucket sync status"
* Bug #51427: Multisite sync stuck if reshard is done while bucket sync is disabled
* Documentation #51428: mgr/nfs: move nfs doc from cephfs to mgr
* Bug #51429: radosgw-admin bi list fails with Input/Output error
* Bug #51449: mgr/dashboard: upgrade Grafana to 6.7.6
* Bug #51461: Unable to read bucket stats post reshard from Multisite primary
* Bug #51462: rgw: resolve empty ordered bucket listing results w/ CLS filtering
* Bug #51476: src/pybind/mgr/mirroring/fs/snapshot_mirror.py: do not assume a cephfs-mirror daemon is always running
* Cleanup #51479: mgr/dashboard: NFS clean-ups
* Bug #51486: Incorrect stats on versioned bucket on multisite
* Bug #51487: Sync stopped from primary to secondary post reshard
* Feature #51518: client: flush the mdlog in unsafe requests' relevant and auth MDSes only
* Bug #51519: ceph-dencoder unable to load dencoders from "lib64/ceph/denc". it is not a directory.
* Cleanup #51543: mds: improve debugging for mksnap denial
* Bug #51560: the root cause of rgw.none appearance
* Bug #51589: mds: crash when journaling during replay
* Bug #51591: src/ceph-crash.in: various enhancements and fixes
* Bug #51595: Incremental sync fails to complete post reshard on a bucket whose ownership changed.
* Bug #51600: mds: META_POP_READDIR, META_POP_FETCH, META_POP_STORE, and cache_hit_rate are not updated
* Bug #51610: Fix non POSIX use of sigval_t
* Subtask #51612: mgr/dashboard: cephadm-e2e script: improvements
* Cleanup #51614: mgr/nfs: remove dashboard test remnant from unit tests
* Bug #51620: Ceph orch upgrade to 16.2.5 fails
* Bug #51630: mgr/snap_schedule: don't throw traceback on non-existent fs
* Cleanup #51631: mgr/devicehealth: instead of CLICommand use CLIReadCommand
* Bug #51673: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4
* Documentation #51683: mgr/nfs: add note about creating exports for nfs using vstart to developer guide
* Bug #51705: qa: tasks.cephfs.fuse_mount:mount command failed
* Bug #51707: pybind/mgr/volumes: Cloner threads stuck in loop trying to clone the stale ones
* Bug #51712: radosgw-admin should print error on missing --end-marker argument
* Feature #51716: Add option in `fs new` command to start rank 0 in failed state
* Bug #51720: Compilation on FreeBSD fails due to missing LIBAIO
* Bug #51722: mds: slow performance on parallel rm operations for multiple kclients
* Bug #51728: mgr/dashboard: force maintenance e2e failing for host
* Bug #51732: rgw: `radosgw-admin bi list ...` can result in an I/O Error
* Feature #51787: mgr/nfs: deploy nfs-ganesha daemons on non-default port
* Bug #51794: mgr/test_orchestrator: remove pool and namespace from nfs service
* Bug #51795: mgr/nfs: update pool name to '.nfs' in vstart.sh
* Bug #51800: mgr/nfs: create rgw export with vstart
* Bug #51805: pybind/mgr/volumes: The cancelled clone still goes ahead and completes the clone
* Bug #51811: ceph-volume lvm migrate without args raises an exception
* Bug #51814: documentation: add lvm new-db, new-wal and migrate
* Bug #51842: upmap verify failed with pool size decreased
* Bug #51854: ceph-volume lvm migrate can't work with container
* Bug #51857: client: make sure only to update dir dist from auth mds
* Bug #51870: pybind/mgr/volumes: first subvolume permissions set perms on /volumes and /volumes/group
* Bug #51905: qa: "error reading sessionmap 'mds1_sessionmap'"
* Bug #51934: rgw-multisite: metadata conflict not computed correctly
* Bug #51941: rgw: user stats showing 0 value for "size_utilized" and "size_kb_utilized" fields
* Bug #51956: mds: switch to use ceph_assert() instead of assert()
* Bug #51957: mgr/dashboard: explore Grafana 7 or 8 migration
* Bug #51975: pybind/mgr/stats: KeyError
* Bug #51989: cephfs-mirror: cephfs-mirror daemon status for a particular FS is not showing
* Bug #52001: libcephsqlite: CheckReservedLock the result will always be zero
* Bug #52002: mgr/dashboard: dashboard 16.2.5 unable to ipv6 wildcard bind
* Bug #52022: mgr/dashboard: error on showing rgw svc. perf. counters
* Bug #52023: kv/RocksDBStore: enrich debug message
* Feature #52038: cephadm gather-facts should provide a list of LISTENing ports on the host
* Bug #52039: cephadm rm-cluster should check whether the given fsid exists
* Bug #52040: during an apply the host must be online otherwise the apply fails with a traceback
* Bug #52041: `orch ps` shows wrong ports for MGR
* Bug #52042: After deployment the example of cephadm shell invocation is overly complex
* Bug #52062: cephfs-mirror: terminating a mirror daemon can cause a crash at times
* Fix #52068: qa: add testing for "ms_mode" mount option
* Subtask #52082: mgr/dashboard: cephadm e2e start script: add "--expanded" option
* Bug #52086: mgr/dashboard: generates gitleaks false positive
* Bug #52094: Tried out Quincy: All MDS Standby
* Fix #52104: qa: add testing for "copyfrom" mount option
* Bug #52123: mds sends cap updates with btime zeroed out
* Cleanup #52130: mgr/dashboard: tox.ini: delete useless env. 'apidocs'
* Bug #52274: mgr/nfs: add more log messages
* Bug #52288: doc: clarify use of `rados rm` command
* Bug #52290: rgw fix sts memory leak
* Bug #52315: rgw: fix bucket index list test error
* Bug #52382: mds,client: add flag to MClientSession for reject reason
* Bug #52386: client: fix dump mds twice
* Feature #52387: backport-create-issue: set the priority of the backport issue
* Bug #52388: mgr/snap-schedule: retention set calculation for multiple retention specs is wrong
* Bug #52428: radosgw-admin: Ambiguous options “-i”
* Bug #52436: fs/ceph: "corrupt mdsmap"
* Bug #52438: qa: ffsb timeout
* Bug #52482: pvs --no-heading fail
* Bug #52487: qa: Test failure: test_deep_split (tasks.cephfs.test_fragment.TestFragmentation)
* Feature #52491: mds: add max_mds_entries_per_dir config option
* Bug #52504: raw: list is failing when using logical partition on the host
* Bug #52505: mgr/dashboard: improve formatting of histograms in Telemetry preview form
* Bug #52507: npm fails on alpine linux trying to install fsevents
* Feature #52510: Speed up inventory processing to reduce run time
* Bug #52512: prometheus module: add used_bytes metric back
* Bug #52522: rbd children: logging crashes after open or close fails.
* Bug #52550: cephadm: using --single-host-defaults with bootstrap results in health warn state
* Feature #52558: mgr/dashboard: display cephadm config checks
* Bug #52565: MDSMonitor: handle damaged state from standby-replay
* Bug #52572: "cluster [WRN] 1 slow requests" in smoke pacific
* Documentation #52577: alertmanager configuration prevents users from adding their own escalation process
* Cleanup #52589: mgr/dashboard: clean up controllers
* Fix #52591: mds: mds_oft_prefetch_dirfrags = false is not qa tested
* Cleanup #52592: mgr/dashboard: add more linters/checkers to back-end
* Feature #52594: rgw: add logging to bucket listing so calls are better understood
* Bug #52606: qa: test_dirfrag_limit
* Bug #52607: qa: "mon.a (mon.0) 1022 : cluster [WRN] Health check failed: Reduced data availability: 4 pgs peering (PG_AVAILABILITY)"
* Bug #52619: ceph-volume lvm activate command doesn't parse args properly
* Bug #52625: qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
* Feature #52638: mgr/prometheus: Add all healthchecks to prometheus output and provide a way of viewing history
* Bug #52642: snap scheduler: cephfs snapshot schedule status doesn't list the snapshot count properly
* Bug #52654: pybind/mgr/cephadm: mds upgrade does not disable standby-replay
* Feature #52661: Improve performance of cephadm ls command
* Bug #52677: qa: test_simple failure
* Bug #52684: Observing sync inconsistencies on a bucket that has been resharded.
* Feature #52708: Add SNMP MIB to the monitoring components within the core ceph project
* Bug #52723: mds: improve mds_bal_fragment_size_max config option
* Feature #52725: qa: mds_dir_max_entries workunit test case
* Bug #52730: ceph-volume mis-calculates db/wal slot size for clusters that have multiple PVs in a VG
* Feature #52740: cephadm: make tcmu-runner log available from the host
* Bug #52820: Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
* Bug #52821: qa/xfstest-dev.py: update to include centos stream
* Bug #52822: qa: failed pacific install on fs:upgrade
* Fix #52824: qa: skip internal metadata directory when scanning ceph debugfs directory
* Tasks #52851: mgr/dashboard: remove (old) Ceph version from hosts
* Bug #52873: ERROR: failed to list reshard log entries, oid=reshard.0000000000 marker= (2) No such file or directory
* Bug #52874: Monitor might crash after upgrade from ceph to 16.2.6
* Bug #52887: qa: Test failure: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
* Bug #52894: EPM caused crash
* Bug #52896: rgw-multisite: Dynamic resharding takes too long to take effect
* Bug #52897: seastore crash when the segment cleaner tries to release segments
* Bug #52904: zram devices should not be displayed as disk devices by ceph-volume
* Bug #52905: cephadm gather-facts is returning zram* devices as a valid block device
* Bug #52906: cephadm rm-daemon is not closing any tcp ports that were opened for the daemon during the removal process
* Bug #52907: dpdk stack segment fault
* Bug #52908: mpath devices aren't supported in all scenarios
* Bug #52917: rgw-multisite: bucket sync checkpoint for a bucket lists out very high value/incorrect for local gen.
* Bug #52919: ceph orch device zap validation can result in osd issues and problematic error messages
* Feature #52920: Add snmp-gateway as a supported service for deployment via orchestrator
* Bug #52928: mgr: when importing from NFS module in Dashboard module, Dashboard module is IMPERSONATING the NFS module
* Feature #52942: mgr/nfs: add 'nfs cluster config get'
* Feature #52945: mgr/dashboard: improve CephFS grafana
* Feature #52947: mgr/dashboard: display how many "primary pgs" each OSD has
* Bug #52948: osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* Bug #52949: RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}
* Bug #52975: MDSMonitor: no active MDS after cluster deployment
* Feature #52977: RFE: Improve ceph orch device ls to provide visibility of fsid
* Bug #52994: client: do not defer releasing caps when revoking
* Bug #52995: qa: test_standby_count_wanted failure
* Bug #52996: qa: test_perf_counters via test_openfiletable
* Documentation #53004: Improve API documentation for struct ceph_client_callback_args
* Bug #53039: osd: ceph osd stop does not take effect
* Bug #53043: qa/vstart_runner: tests crash due to incompatibility
* Bug #53045: stat->fsid is not unique among filesystems exported by the ceph server
* Bug #53047: cmake command not found in the standalone cluster to execute cmake -DWITH_SEASTAR=ON .. command
* Bug #53074: pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* Bug #53081: mgr/dashboard: cephadm e2e tests failure on master
* Bug #53082: ceph-fuse: segmentation fault in Client::handle_mds_map
* Bug #53083: mgr/dashboard: nfs export creation form: do not allow pseudo already in use
* Bug #53085: os/bluestore: Improve _block_picker function
* Bug #53086: os/bluestore/AvlAllocator: specialize _block_picker() and cleanups
* Bug #53087: os/bluestore/AvlAllocator: introduce bluestore_avl_alloc_ff_max_* options
* Cleanup #53127: mgr/dashboard: improve SAML2 SSO (Cephadm) set-up
* Bug #53128: mgr/dashboard: Cluster Expansion - Review Section: fixes and improvements
* Bug #53131: mgr/dashboard: WITH_MGR_DASHBOARD_FRONTEND fails when no pre-built package is provided
* Feature #53135: mgr/dashboard: alert doc link provided as a text field, not hyperlink
* Bug #53144: mgr/dashboard: weird data in monitoring > Alerts details
* Bug #53150: pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
* Bug #53155: MDSMonitor: assertion during upgrade to v16.2.5+
* Tasks #53159: mgr/dashboard: NFS service: allow service port selection
* Bug #53172: after user created, stating exists user, noticing "User has not been initialized or user does not exist"
* Bug #53181: rgw: wrong UploadPartCopy error code when src object does not exist and src bucket does not exist
* Bug #53194: mds: opening connection to up:replay/up:creating daemon causes message drop
* Bug #53209: mgr/dashboard: Device health status is not getting listed under hosts section
* Feature #53211: mgr/dashboard: Add Grafana unit testing
* Bug #53214: qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
client_id" * Bug #53222: rgw: investigate conditional governing cleaning of incomplete multipart uploads * Bug #53223: Fun with subinterpreters: Exceptions showing traceback when SpecValidationError raised from service_spec.py module * Bug #53227: osdc: bh split will lost error number, maybe cause client crash * Feature #53229: mgr/prometheus: Make prometheus standby behaviour configurable * Documentation #53236: doc: ephemeral pinning with subvolumegroups * Bug #53274: mgr/dashboard: evaluate upgrading the npm packages * Cleanup #53275: mgr/dashboard: revisit installing Grafana Dashboards from RPM * Bug #53281: Windows IPv6 support * Bug #53293: qa: v16.2.4 mds crash caused by centos stream kernel * Subtask #53301: mgr/dashboard: rgw daemon list: add realm column * Bug #53317: mgr/dashboard: API docs UI does not work with Angular dev server * Bug #53318: mgr/dashboard: UI API endpoints are not listen in API docs anymore * Bug #53351: mgr/cephadm: when the cephadm agent refreshes the mgr host case, the config checker can throw RuntimeError: dictionary changed size during iteration * Cleanup #53357: mgr/dashboard: upgrade Cypress to the latest stable version * Bug #53379: mgr/prometheus: method timings are not present in the exported metrics * Bug #53385: Allow mgr/cephadm to run radosgw-admin. * Bug #53393: mgr: unsafe locking in MetadataUpdate::finish * Cleanup #53395: mgr/dashboard: cluster > hosts display total number of cores * Subtask #53398: mgr/dashboard: Check overall physical growth rate * Feature #53399: Provide container resources usage * Bug #53436: mds, mon: mds beacon messages get dropped? (mds never reaches up:active state) * Bug #53451: run-tox-grafana-query-test fails on arm64 * Bug #53452: cli: ceph orch host ls adds extraneous strings to json output * Bug #53483: Bluestore: Function not available for other platforms * Bug #53487: qa: mount error 22 = Invalid argument * Bug #53497: rgw pubsub list topics coredump * Bug #53509: quota support for subvolumegroup * Bug #53521: mds: heartbeat timeout by _prefetch_dirfrags during up:rejoin * Bug #53526: mgr/dashboard: dashboard: offline hosts showing UI bug * Bug #53542: Ceph Metadata Pool disk throughput usage increasing * Bug #53559: Balancer bug - very slow performance (minutes) in some cases * Subtask #53561: mgr: TTL cache implementation * Feature #53571: test/allocator_replay_test: Add replay_alloc option * Bug #53591: mgr/dashboard: NaN Undefined - Pools Read_ops and Write_ops * Fix #53596: rgw:cleanup/refactor json and xml encoders and decoders * Bug #53603: mgr/telemetry: list index out of range in gather_device_report * Bug #53604: mgr/telemetry: list assignment index out of range in gather_crashinfo * Bug #53615: qa: upgrade test fails with "timeout expired in wait_until_healthy" * Bug #53619: mds: fails to reintegrate strays if destdn's directory is full (ENOSPC) * Bug #53623: mds: LogSegment will only save one ESubtreeMap event if the ESubtreeMap event size is large enough. 
* Bug #53641: mds: recursive scrub does not trigger stray reintegration
* Bug #53667: osd cannot be started after being set to stop
* Bug #53675: jenkins api test failure: No module named 'setuptools._distutils'
* Cleanup #53686: mgr/dashboard/monitoring: add prometheus exporter API test
* Bug #53705: rgw: in bucket reshard list, clarify new num shards is tentative
* Bug #53726: mds: crash when `ceph tell mds.0 dump tree ''`
* Bug #53741: crash just after MDS becomes active
* Bug #53750: mds: FAILED ceph_assert(mut->is_wrlocked(&pin->filelock))
* Bug #53753: mds: crash (assert hit) when merging dirfrags
* Subtask #53756: mgr/dashboard: add test coverage for API docs (SwaggerUI)
* Bug #53765: mount helper mangles the new syntax device string by qualifying the name
* Bug #53805: mds: seg fault in expire_recursive
* Bug #53813: mgr/dashboard: NFS pages show 'Page not found'
* Bug #53857: qa: fs:upgrade test fails mds count check
* Bug #53858: mgr/dashboard: failed to load smart data when a device has only 1 daemon associated
* Bug #53859: qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* Bug #53862: mds: remove the duplicated or incorrect response when the pool is full
* Bug #53874: rgw: "bucket check --fix" should delete damaged multipart uploads from bi
* Bug #53887: build - CentOS 8: No matching package to install: 'arrow-devel >= 4.0.0'
* Feature #53903: mount: add option to support fake mounts
* Bug #53911: client: client session state stuck in opening and hangs all the time
* Backport #53975: quincy: mgr/dashboard: Health check failed: Module 'feedback' has failed: Not found or unloadable (MGR_MODULE_ERROR)" in cluster log
* Backport #53976: quincy: KeyError in _process_pg_summary
* Bug #53991: mgr/dashboard: cephadm e2e job: display more info on error
* Bug #54059: [crypto/qat][compressor] QAT driver cannot work with encryption and compression for RGW
* Backport #54073: quincy: rgw: fix bucket index list minor calculation bug
* Bug #54188: Setting too many PGs leads to error handling overflow
* Backport #54234: quincy: qa: use cephadm to provision cephfs for fs:workloads
* Bug #54266: rgw: cmake configure error on fedora-37/rawhide
* Feature #54322: mgr/prometheus: Add missing OSDMonitor metrics to prometheus exporter
* Bug #54414: quincy compilation failure for alpine linux
* Bug #54415: Quincy 32bit compilation failure with deduction/substitution failed at BlueStore.cc:1855
* Bug #54422: ceph-crash: fix regression in crash collector
* Bug #54424: build: ninja: error: dependency cycle: src/ceph-volume/setup.py -> src/ceph-volume/setup.py
* Bug #54473: cmake: pmdk fails to compile on Centos Stream 9
* Bug #54514: build: LTO can cause false positives in cmake tests resulting in build failures
* Bug #54545: when nasm is used on alpine linux -Os is passed so it fails
* Bug #54549: Feedback module does not exist in Cluster -> Manager Modules.
* Bug #54559: support daemon actions even in Hosts daemons panel
* Bug #54561: 5 out of 6 OSD crashing after update to 17.1.0-0.2.rc1.fc37.x86_64
* Feature #55126: mds: add perf counter to record slow replies
* Bug #55155: grafana/Makefile: don't push image to docker
* Bug #55195: mgr/dashboard: update grafana piechart and vonage status panel versions
* Bug #55202: grafana/Makefile: don't push image to docker
* Cleanup #55204: mgr/dashboard: update grafana piechart and vonage status panel versions
* Backport #55210: grafana/Makefile: don't push image to docker
* Bug #55256: Build failure with gcc-12.0.1 (global_legacy_options.h not found)
* Bug #55257: mgr module rook crashed in daemon mgr
* Bug #55288: rgw/dbstore: handle prefix/delim in Bucket::list operation
* Bug #55367: grafana/Makefile: don't push image to docker
* Cleanup #55369: mgr/dashboard: update grafana piechart and vonage status panel versions
* Bug #55390: rgw-ms/resharding: Observing sync inconsistencies ~50K out of 20M objects, did not sync.
* Bug #55407: quincy OSDs fail to boot and crash
* Bug #55410: [rgw-ms][dbr]: Large omap object found in the buckets.index pool, corresponding to resharded buckets
* Bug #55420: 17.2.0 build failure on Alpine linux
* Bug #55454: undefined reference to `RGWObjState::~RGWObjState()'
* Bug #55533: OSD down and purge inconsistent
* Bug #55549: OSDs crashing
* Bug #55577: OSD crashes on devicehealth scraping
* Bug #55617: cephadm shell container problem
* Bug #55675: [rgw-ms][dbr]: 50K objects did not sync to the secondary after doing a 20M workload on a versioned bucket.
* Bug #55730: rocksdb: build with rocksdb-7.y.z
* Bug #55840: Windows clients unable to perform IO to clusters with over 200+ OSDs
* Bug #55979: [rgw-multisite][hybrid workload]: 1 object failed to delete on secondary for a bucket 'con2'.
* Bug #56078: [cephadm/quincy] agent service getting deployed with unknown info
* Feature #56433: Add ceph-mib subpackage to install SNMP MIB file
* Documentation #56521: Quincy release notes - corrections/clarifications