# v20.0.0 T release

* Cleanup #4744: mds: pass around LogSegments via std::shared_ptr
* Feature #7320: qa: thrash directory fragmentation
* Feature #10679: Add support for the chattr +i command (immutable file)
* Feature #18154: qa: enable mds thrash exports tests
* Bug #23565: Inactive PGs don't seem to cause HEALTH_ERR
* Bug #23723: qa: incorporate smallfile workload
* Bug #40159: mds: openfiletable prefetching large amounts of inodes lead to mds start failure
* Bug #40197: The command 'node ls' sometimes output some incorrect information about mds.
* Feature #41824: mds: aggregate subtree authorities for display in `fs top`
* Feature #44279: client: provide asok commands to getattr an inode with desired caps
* Feature #45021: client: new asok commands for diagnosing cap handling issues
* Bug #46702: rgw: lc: lifecycle rule with more than one prefix in RGWPutLC::execute() should throw error
* Feature #47264: "fs authorize" subcommand should work for multiple FSs too
* Bug #47813: osd op age is 4294967296
* Feature #48509: mds: dmClock based subvolume QoS scheduler
* Bug #48562: qa: scrub - object missing on disk; some files may be lost
* Feature #48704: mds: recall caps proportional to the number issued
* Bug #49124: mgr/dashboard: NFS settings aren't updated after modifying them when working with Rook orchestrator
* Bug #49615: can't get mdlog when rgw_run_sync_thread = false
* Documentation #49649: add information on the system objects holding notifications
* Bug #50261: rgw: system users can't issue role policy related ops without explicit user policy
* Bug #50821: qa: untar_snap_rm failure during mds thrashing
* Bug #51197: qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* Bug #51282: pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* Bug #52513: BlueStore.cc: 12391: ceph_abort_msg("unexpected error") on operation 15
* Bug #52581: Dangling fs snapshots on data pool after change of directory layout
* Bug #52846: octopus: mgr fails and freezes while doing pg dump
* Feature #54525: osd/mon: log memory usage during tick
* Bug #54741: crash: MDSTableClient::got_journaled_ack(unsigned long)
* Feature #55214: mds: add asok/tell command to clear stale omap entries
* Feature #55414: mds: asok interface to cleanup permanently damaged inodes
* Bug #55446: mgr-nfs-ugrade and mds_upgrade_sequence tests fail on 'ceph versions | jq -e' command
* Bug #55464: cephfs: mds/client error when client stale reconnect
* Feature #56428: add command "fs deauthorize"
* Feature #56442: mds: build asok command to dump stray files and associated caps
* Feature #56489: qa: test mgr plugins with standby mgr failover
* Feature #57481: mds: enhance scrub to fragment/merge dirfrags
* Bug #57676: qa: error during scrub thrashing: rank damage found: {'backtrace'}
* Bug #57682: client: ERROR: test_reconnect_after_blocklisted
* Feature #58193: mds: remove stray directory indexes since stray directory can fragment
* Bug #58244: Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
* Bug #58274: BlueStore::collection_list becomes extremely slow due to unbounded rocksdb iteration
* Feature #58488: mds: avoid encoding srnode for each ancestor in an EMetaBlob log event
* Bug #58878: mds: FAILED ceph_assert(trim_to > trimming_pos)
* Bug #58938: qa: xfstests-dev's generic test suite has 7 failures with kclient
* Bug #58945: qa: xfstests-dev's generic test suite has 20 failures with fuse client
* Bug #58962: ftruncate fails with EACCES on a read-only file created with write permissions
* Bug #59119: mds: segmentation fault during replay of snaptable updates
* Bug #59163: mds: stuck in up:rejoin when it cannot "open" missing directory inode
* Bug #59169: Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* Bug #59301: pacific (?): test_full_fsync: RuntimeError: Timed out waiting for MDS daemons to become healthy
* Bug #59394: ACLs not fully supported.
* Bug #59530: mgr-nfs-upgrade: mds.foofs has 0/2
* Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output error)"
* Bug #59688: mds: idempotence issue in client request
* Bug #61279: qa: test_dirfrag_limit (tasks.cephfs.test_strays.TestStrays) failed
* Feature #61334: cephfs-mirror: use snapdiff api for efficient tree traversal
* Bug #61357: cephfs-data-scan: parallelize cleanup step
* Documentation #61375: doc: cephfs-data-scan should discuss multiple data support
* Documentation #61377: doc: add suggested use-cases for random emphemeral pinning
* Bug #61407: mds: abort on CInode::verify_dirfrags
* Cleanup #61482: mgr/nfs: remove deprecated `nfs export delete` and `nfs cluster delete` interfaces
* Feature #61777: mds: add ceph.dir.bal.mask vxattr
* Bug #61790: cephfs client to mds comms remain silent after reconnect
* Bug #61791: snaptest-git-ceph.sh test timed out (job dead)
* Bug #61831: qa: test_mirroring_init_failure_with_recovery failure
* Feature #61863: mds: issue a health warning with estimated time to complete replay
* Documentation #61902: Recommend pinning _deleting directory to another rank for certain use-cases
* Feature #61903: pybind/mgr/volumes: add config to turn off subvolume deletion
* Feature #61904: pybind/mgr/volumes: add more introspection for clones
* Feature #61905: pybind/mgr/volumes: add more introspection for recursive unlink threads
* Bug #61945: LibCephFS.DelegTimeout failure
* Bug #61978: cephfs-mirror: support fan out setups
* Bug #61982: Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* Bug #62067: ffsb.sh failure "Resource temporarily unavailable"
* Bug #62123: mds: detect out-of-order locking
* Bug #62126: test failure: suites/blogbench.sh stops running
* Feature #62157: mds: working set size tracker
* Bug #62158: mds: quick suspend or abort metadata migration
* Tasks #62159: qa: evaluate mds_partitioner
* Bug #62188: AttributeError: 'RemoteProcess' object has no attribute 'read'
* Feature #62207: Report cephfs-nfs service on ceph -s
* Bug #62221: Test failure: test_add_ancestor_and_child_directory (tasks.cephfs.test_mirroring.TestMirroring)
* Bug #62257: mds: blocklist clients that are not advancing `oldest_client_tid`
* Bug #62344: tools/cephfs_mirror: mirror daemon logs reports initialisation failure for fs already deleted post test case execution
* Bug #62381: mds: Bug still exists: FAILED ceph_assert(dir->get_projected_version() == dir->get_version())
* Bug #62484: qa: ffsb.sh test failure
* Bug #62485: quincy (?): pybind/mgr/volumes: subvolume rm timeout
* Bug #62511: src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)
* Documentation #62605: cephfs-journal-tool: update parts of code that need mandatory --rank
* Bug #62648: pybind/mgr/volumes: volume rm freeze waiting for async job on fs to complete
* Bug #62653: qa: unimplemented fcntl command: 1036 with fsstress
* Bug #62664: ceph-fuse: failed to remount for kernel dentry trimming; quitting!
* Feature #62668: qa: use teuthology scripts to test dozens of clients
* Bug #62673: cephfs subvolume resize does not accept 'unit'
* Fix #62712: pybind/mgr/volumes: implement EAGAIN logic for clearing request queue when under load
* Feature #62715: mgr/volumes: switch to storing subvolume metadata in libcephsqlite
* Bug #62720: mds: identify selinux relabelling and generate health warning
* Bug #62764: qa: use stdin-killer for kclient mounts
* Bug #62847: mds: blogbench requests stuck (5mds+scrub+snaps-flush)
* Feature #62849: mds/FSMap: add field indicating the birth time of the epoch
* Feature #62856: cephfs: persist an audit log in CephFS
* Bug #63089: qa: tasks/mirror times out
* Bug #63104: qa: add libcephfs tests for async calls
* Bug #63120: mgr/nfs: support providing export ID while creating exports using 'nfs export create cephfs'
* Feature #63191: tools/cephfs: provide an estimate completion time for offline tools
* Bug #63212: qa: failed to download ior.tbz2
* Bug #63233: mon|client|mds: valgrind reports possible leaks in the MDS
* Feature #63374: mds: add asok command to kill/respond to request
* Bug #63428: RGW: multipart get wrong storage class metadata
* Fix #63432: qa: run TestSnapshots.test_kill_mdstable for all mount types
* Bug #63461: Long delays when two threads modify the same directory
* Bug #63471: client: error code inconsistency when accessing a mount of a deleted dir
* Bug #63473: fsstressh.sh fails with errno 124
* Bug #63494: all: daemonizing may release CephContext::_fork_watchers_lock when its already unlocked
* Bug #63519: ceph-fuse: reef ceph-fuse crashes with main branch ceph-mds
* Feature #63544: mgr/volumes: bulk delete canceled clones
* Bug #63634: [RFC] limit iov structures to 1024 while performing async I/O
* Feature #63663: mds,client: add crash-consistent snapshot support
* Feature #63664: mds: add quiesce protocol for halting I/O on subvolumes
* Feature #63665: mds: QuiesceDb to manage subvolume quiesce state
* Feature #63666: mds: QuiesceAgent to execute quiesce operations on an MDS rank
* Feature #63668: pybind/mgr/volumes: add quiesce protocol API
* Feature #63670: mds,client: add light-weight quiesce protocol
* Bug #63697: client: zero byte sync write fails
* Bug #63700: qa: test_cd_with_args failure
* Tasks #63707: mds: AdminSocket command to control the QuiesceDbManager
* Tasks #63708: mds: MDS message transport for inter-rank QuiesceDbManager communications
* Bug #63726: cephfs-shell: support bootstrapping via monitor address
* Bug #63764: Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* Bug #63791: RGW: a subuser with no permission can still list buckets and create buckets
* Bug #63830: MDS fails to start
* Bug #63866: mount command returning misleading error message
* Documentation #63885: doc: add dedicated section discussing a "damaged" rank
* Bug #63896: client: contiguous read fails for non-contiguous write (in async I/O api)
* Feature #63928: cephfs_mirror: Enable support for cephfs_mirror in consolidation/archive configurations
* Bug #63931: qa: test_mirroring_init_failure_with_recovery failure
* Bug #63949: leak in mds.c detected by valgrind during CephFS QA run
* Bug #63999: mgr/snap_schedule: clean up schedule timers on volume delete
* Bug #64008: mds: CInode::item_caps used in two different lists
* Bug #64011: qa: Command failed qa/workunits/suites/pjd.sh
* Bug #64015: fscrypt.sh - lsb_release command may not exist
* Bug #64064: mds config `mds_log_max_segments` throws error for value -1
* Feature #64101: tools/cephfs: toolify updating mdlog journal pointers to a sane value
* Bug #64149: valgrind+mds/client: gracefully shutdown the mds during valgrind tests
* Bug #64198: mds: Fcb caps issued to clients when filelock is xlocked
* Bug #64298: CephFS metadata pool has large OMAP objects corresponding to strays
* Bug #64348: mds: possible memory leak in up:rejoin when opening cap inodes (from OFT)
* Bug #64389: client: check if pools are full when mounting
* Bug #64390: client: async I/O stalls if the data pool gets full
* Bug #64471: kernel: upgrades from quincy/v18.2.[01]/reef to main|squid fail with kernel oops
* Bug #64477: pacific: rados/cephadm/mgr-nfs-upgrade: [WRN] client session with duplicated session uuid 'ganesha-nfs.foo.XXX' denied
* Documentation #64483: doc: document labelled perf metrics for mds/cephfs-mirror
* Bug #64486: qa: enhance labeled perf counters test for cephfs-mirror
* Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
* Feature #64506: qa: update fs:upgrade to test from reef/squid to main
* Feature #64507: pybind/mgr/snap_schedule: support crash-consistent snapshots
* Bug #64511: kv/RocksDBStore: rocksdb_cf_compact_on_deletion has no effect on the default column family
* Feature #64531: mds,mgr: identify metadata heavy workloads
* Bug #64533: BlueFS: l_bluefs_log_compactions is counted twice in sync log compaction
* Bug #64537: mds: lower the log level when rejecting a session reclaim request
* Bug #64542: Difference in error code returned while removing system xattrs using removexattr()
* Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
* Bug #64572: workunits/fsx.sh failure
* Bug #64602: tools/cephfs: cephfs-journal-tool does not recover dentries with alternate_name
* Bug #64616: selinux denials with centos9.stream
* Bug #64641: qa: Add multifs root_squash testcase
* Bug #64685: mds: disable defer_client_eviction_on_laggy_osds by default
* Bug #64700: Test failure: test_mount_all_caps_absent (tasks.cephfs.test_multifs_auth.TestClientsWithoutAuth)
* Bug #64707: suites/fsstress.sh hangs on one client - test times out
* Bug #64711: Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
* Bug #64717: MDS stuck in replay/resolve use
* Bug #64729: mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report slow metadata IOs" in cluster log
* Bug #64730: fs/misc/multiple_rsync.sh workunit times out
* Bug #64746: qa/cephfs: add MON_DOWN and `deprecated feature inline_data' to health ignorelist.
* Bug #64747: postgresql pkg install failure
* Bug #64751: cephfs-mirror coredumped when acquiring pthread mutex
* Bug #64752: cephfs-mirror: valgrind report leaks
* Bug #64761: cephfs-mirror: add throttling to mirror daemon ops
* Feature #64777: mon: add NVMe-oF gateway monitor and HA
* Bug #64799: mgr: update cluster state for new maps from the mons before notifying modules
* Fix #64821: cephadm - make changes to ceph-nvmeof.conf template
* Bug #64875: rgw: rgw-restore-bucket-index -- sort uses specified temp dir
* Bug #64912: make check: QuiesceDbTest.MultiRankRecovery Failed
* Bug #64947: qa: fix continued use of log-whitelist
* Bug #64968: mon: MON_DOWN warnings when mons are first booting
* Bug #64972: qa: "ceph tell 4.3a deep-scrub" command not found
* Fix #64984: qa: probabilistically ignore PG_AVAILABILITY/PG_DEGRADED
* Bug #64985: qa: mgr logs do not include client debugging
* Bug #64986: qa: "cluster [WRN] Health detail: HEALTH_WARN 1 filesystem is online with fewer MDS than max_mds" in cluster log
* Bug #64987: qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
* Bug #64988: qa: fs:workloads mgr client evicted indicated by "cluster [WRN] evicting unresponsive client smithi042:x (15288), after 303.306 seconds"
* Bug #65001: mds: ceph-mds might silently ignore client_session(request_close, ...) message
* Bug #65018: PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
* Bug #65019: qa/suites/fs/top: [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
* Bug #65020: qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
* Bug #65021: qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
* Bug #65022: qa: test_max_items_per_obj open procs not fully cleaned up
* Bug #65039: mds: standby-replay segmentation fault in md_log_replay
* Bug #65043: Unable to set timestamp to value > UINT32_MAX
* Bug #65073: pybind/mgr/stats/fs: log exceptions to cluster log
* Bug #65094: mds STATE_STARTING won't add root ino for root rank and not correctly handle when fails at STATE_STARTING
* Bug #65116: squid: kclient: "ld: final link failed: Resource temporarily unavailable"
* Bug #65136: QA failure: test_fscrypt_dummy_encryption_with_quick_group
* Bug #65157: cephfs-mirror: set layout.pool_name xattr of destination subvol correctly
* Bug #65171: Provide metrics support for the Replication Start/End Notifications
* Bug #65182: mds: quiesce_inode op waiting on remote auth pins is not killed correctly during quiesce timeout/expiration
* Bug #65216: rgw: only accept valid ipv4 from host header
* Bug #65224: mds: fs subvolume rm fails
* Bug #65225: ceph_assert on dn->get_projected_linkage()->is_remote
* Bug #65246: qa/cephfs: test_multifs_single_path_rootsquash (tasks.cephfs.test_admin.TestFsAuthorize)
* Feature #65259: cephadm - make changes to ceph-nvmeof.conf template
* Bug #65260: mds: Reduce log level for messages when mds is stopping
* Bug #65262: qa/cephfs: kernel_untar_build.sh failed due to build error
* Bug #65263: upgrade stalls after upgrading one ceph-mgr daemon
* Bug #65265: qa: health warning "no active mgr (MGR_DOWN)" occurs before and after test_nfs runs
* Bug #65271: qa: cluster [WRN] Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled" in cluster log
* Bug #65276: MDS daemon is using 50% CPU when idle
* Bug #65277: rgw: update options yaml file so LDAP uri isn't an invalid example
* Bug #65301: fs:upgrade still uses centos_8* distro
* Bug #65308: qa: fs was offline but also unexpectedly degraded
* Bug #65309: qa: dbench.sh failed with "ERROR: handle 10318 was not found"
* Bug #65314: valgrind error: Leak_PossiblyLost posix_memalign UnknownInlinedFun ceph::buffer::v15_2_0::list::refill_append_space(unsigned int)
* Feature #65338: Add --continue-on-error for `cephadm bootstrap`
* Bug #65342: mds: quiesce_counter decay rate initialized from wrong config
* Bug #65345: cephfs_mirror: increment sync_failures when sync_perms() and sync_snaps() fails
* Bug #65350: mgr/snap_schedule: restore yearly spec from uppercase Y to lowercase y
* Bug #65372: qa: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}
* Bug #65388: The MDS_SLOW_REQUEST warning is flapping even though the slow requests don't go away
* Bug #65389: The ceph_readdir function in libcephfs returns incorrect d_reclen value
* Fix #65408: qa: under valgrind, restart valgrind/mds when MDS exits with 0
* Bug #65472: mds: avoid recalling Fb when quiescing file
* Bug #65494: ceph-mgr critical error: "Module 'devicehealth' has failed: table Device already exists"
* Bug #65496: mds: ceph.dir.subvolume and ceph.quiesce.blocked is not properly replicated
* Feature #65503: mgr/stats, cephfs-top: provide per volume/sub-volume based performance metrics to monitor / troubleshoot performance issues
* Bug #65508: qa: lockup not long enough to for test_quiesce_authpin_wait
* Bug #65518: mds: regular file inode flags are not replicated by the policylock
* Bug #65536: mds: after the unresponsive client was evicted the blocked slow requests were not successfully cleaned up
* Bug #65545: Quiesce may fail randomly with EBADF due to the same root submitted to the MDCache multiple times under the same quiesce request
* Bug #65546: quincy|reef: qa/suites/upgrade/pacific-x: failure to pull image causes dead jobs
* Bug #65564: Test failure: test_snap_schedule_subvol_and_group_arguments_08 (tasks.cephfs.test_snap_schedules.TestSnapSchedulesSubvolAndGroupArguments)
* Feature #65566: Change some default values for OMAP lock parameters in nvmeof conf file
* Bug #65572: Command failed (workunit test fs/snaps/untar_snap_rm.sh) on smithi155 with status 1
* Fix #65579: mds: use _exit for QA killpoints rather than SIGABRT
* Bug #65580: mds/client: add dummy client feature to test client eviction
* Bug #65595: mds: missing policylock acquisition for quiesce
* Bug #65603: mds: quiesce timeout due to a freezing directory
* Bug #65604: dbench.sh workload times out after 3h when run with-quiescer
* Bug #65606: workload fails due to slow ops, assert in logs mds/Locker.cc: 551 FAILED ceph_assert(!lock->is_waiter_for(SimpleLock::WAIT_WR) || lock->is_waiter_for(SimpleLock::WAIT_XLOCK))
* Bug #65612: qa: logrotate fails when state file is already locked
* Bug #65614: client: resends request to same MDS it just received a forward from if it does not have an open session with the target
* Bug #65616: pybind/mgr/snap_schedule: 1m scheduled snaps not reliably executed (RuntimeError: The following counters failed to be set on mds daemons: {'mds_server.req_rmsnap_latency.avgcount'})
* Fix #65617: qa: increase debugging for snap_schedule
* Bug #65618: qa: fsstress: cannot execute binary file: Exec format error
* Feature #65637: mds: continue sending heartbeats during recovery when MDS journal is large
* Bug #65647: Evicted kernel client may get stuck after reconnect
* Bug #65657: doc: lack of clarity for explicit placement analogue in yaml spec
* Bug #65658: mds: MetricAggregator::ms_can_fast_dispatch2 acquires locks
* Bug #65660: mds: drop client metrics during recovery
* Bug #65669: QuiesceDB responds with a misleading error to a quiesce-await of a terminated set.
* Bug #65678: Cannot use BtreeAllocator for blustore or bluefs
* Cleanup #65689: mds: move specialized cleanup for fragment_dir to MDCache::request_cleanup
* Cleanup #65690: mds: move specialized cleanup for export_dir to MDCache::request_cleanup
* Bug #65700: qa: Health detail: HEALTH_WARN Degraded data redundancy: 40/348 objects degraded (11.494%), 9 pgs degraded" in cluster log
* Bug #65701: qa: quiesce cache/ops dump not world readable
* Bug #65704: mds+valgrind: beacon thread blocked for 60+ seconds
* Bug #65705: qa: snaptest-multiple-capsnaps.sh failure
* Bug #65716: mds: dir merge can't progress due to fragment nested pins, blocking the quiesce_path and causing a quiesce timeout
* Bug #65733: mds: upgrade to MDS enforcing CEPHFS_FEATURE_MDS_AUTH_CAPS_CHECK with client having root_squash in any MDS cap causes eviction for all file systems the client has caps for
* Feature #65747: common/admin_socket: support saving json output to a file local to the daemon
* Bug #65766: qa: perm denied for runing find on cephtest dir
* Feature #65769: rgw: make incomplete multipart upload part of bucket check efficient
* Bug #65782: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks)
* Bug #65801: mgr/snap_schedule: restrict retention spec multiplier set
* Bug #65802: Quiesce and rename aren't properly syncrhonized
* Bug #65803: mds: some asok commands wait with asok thread blocked
* Bug #65805: common/StackStringStream: update pointer to newly allocated memory in overflow()
* Bug #65820: qa/tasks/fwd_scrub: Traceback in teuthology.log for normal exit condition
* Bug #65823: qa/tasks/quiescer: dump ops in parallel
* Bug #65829: qa: qa/suites/fs/functional/subvol_versions/ multiplies all jobs in fs:function by 2
* Bug #65837: qa: dead job from waiting to unmount client on deliberately damaged fs
* Bug #65841: qa: dead job from `tasks.cephfs.test_admin.TestFSFail.test_with_health_warn_oversize_cache`
* Bug #65846: mds: "invalid message type: 501"
* Bug #65851: MDS Squid Metadata Performance Regression
* Bug #65858: ceph.in: make `ceph tell mds.: help` give help output
* Bug #65866: reef: cannot build arrow with CMAKE_BUILD_TYPE=Debug
* Documentation #65881: Refer to the disaster recovery and backup consistency as the primary rationale for the subvolume quiesce
* Bug #65895: mgr/snap_schedule: correctly fetch mds_max_snaps_per_dir from mds
* Bug #65971: read operation hung in Client::get_caps(Same case as issue 65455)
* Bug #65976: qa/cephfs: mon.a (mon.0) 1025 : cluster [WRN] application not enabled on pool 'cephfs_data_ec'" in cluster log
* Bug #65977: Quiesce times out while the ops dump shows all existing quiesce ops as complete;
* Bug #66003: mds: session reclaim could miss blocklisting an old session
* Bug #66005: pybind/mgr: allow disabling always on modules (volumes, etc..)
* Bug #66009: qa: `fs volume ls` command times out waiting for fs to come online
* Bug #66014: mds: Beacon code can deadlock messenger
* Bug #66029: qa: enable debug logs for fs:cephadm:multivolume subsuite
* Bug #66030: dbench.sh fails with Bad file descriptor (fs:cephadm:multivolume)
* Bug #66031: qa: add human readable FS_DEGRADED to ignore list
* Bug #66048: mon.smithi001 (mon.0) 332 : cluster [WRN] osd.1 (root=default,host=smithi001) is down" in cluster log
* Bug #66049: qa, tasks/nfs: client.15263 isn't responding to mclientcaps(revoke), ino 0x1 pending pAsLsXs issued pAsLsXsFs