Ceph - v12.2.14 (61% complete): 28 issues (17 closed, 11 open)
Related issues:
Bug #49503: standby-replay mds assert failed when replay
Ceph - v14.2.23 (53% complete): 30 issues (15 closed, 15 open)
Related issues:
Bug #54421: mds: assert fail in Server::_dir_is_nonempty() because xlocker of filelock is -1
Ceph - v17.0.0 Quincy (54% complete): 876 issues (465 closed, 411 open)
Related issues:
Bug #20597: mds: tree exports should be reported at a higher debug level
Bug #36273: qa: add background task for some units which drops MDS cache
Bug #36389: untar encounters unexpected EPERM on kclient/multimds cluster with thrashing
Bug #36593: qa: quota failure caused by clients stepping on each other
Bug #36673: /build/ceph-13.2.1/src/mds/CDir.cc: 1504: FAILED assert(is_auth())
Bug #39651: qa: test_kill_mdstable fails unexpectedly
Bug #40159: mds: openfiletable prefetching large amounts of inodes lead to mds start failure
Bug #40197: The command 'node ls' sometimes output some incorrect information about mds.
Bug #41327: mds: dirty rstat lost during scatter-gather process
Bug #42516: mds: some mutations have initiated (TrackedOp) set to 0
Bug #42688: Standard CephFS caps do not allow certain dot files to be written
Bug #43216: MDSMonitor: removes MDS coming out of quorum election
Bug #43393: qa: add testing for cephfs-shell on CentOS 8
Bug #43748: client: improve wanted handling so we don't request unused caps (active-standby exclusive file lock case)
Bug #43902: qa: mon_thrash: timeout "ceph quorum_status"
Bug #43960: MDS: incorrectly issues Fc for new opens when there is an existing writer
Bug #44383: qa: MDS_CLIENT_LATE_RELEASE during MDS thrashing
Bug #44384: qa: FAIL: test_evicted_caps (tasks.cephfs.test_client_recovery.TestClientRecovery)
Bug #44988: client: track dirty inodes in a per-session list for effective cap flushing
Bug #45145: qa/test_full: failed to open 'large_file_a': No space left on device
Bug #45320: client: Other UID don't write permission when the file is marked with SUID or SGID
Bug #45434: qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
Bug #45538: qa: Fix string/byte comparison mismatch in test_exports
Bug #45663: luminous to nautilus upgrade
Bug #45664: libcephfs: FAILED LibCephFS.LazyIOMultipleWritersOneReader
Bug #45834: cephadm: "fs volume create cephfs" overwrites existing placement specification
Bug #46022: qa: test_strays num_purge_ops violates threshold 34/16
Bug #46075: ceph-fuse: mount -a on already mounted folder should be ignored
Bug #46218: mds: Add inter MDS messages to the corpus and enforce versioning
Bug #46357: qa: Error downloading packages
Bug #46403: mds: "elist.h: 91: FAILED ceph_assert(_head.empty())"
Bug #46504: pybind/mgr/volumes: self.assertTrue(check < timo) fails
Bug #46507: qa: test_data_scan: "show inode" returns ENOENT
Bug #46535: mds: Importer MDS failing right after EImportStart event is journaled, causes incorrect blacklisting of client session
Bug #46609: mds: CDir.cc: 956: FAILED ceph_assert(auth_pins == 0)
Bug #46648: mds: cannot handle hundreds+ of subtrees
Bug #46747: mds: make rstats in CInode::old_inodes stable
Bug #46809: mds: purge orphan objects created by lost async file creation
Bug #46887: kceph: testing branch: hang in workunit by 1/2 clients during tree export
Bug #46902: mds: CInode::maybe_export_pin is broken
Bug #47054: mgr/volumes: Handle potential errors in readdir cephfs python binding
Bug #47236: Getting "Cannot send after transport endpoint shutdown" after changing subvolume access mode
Bug #47276: MDSMonitor: add command to rename file systems
Bug #47292: cephfs-shell: test_df_for_valid_file failure
Bug #47389: ceph fs volume create fails to create pool
Bug #47678: mgr: include/interval_set.h: 466: ceph_abort_msg("abort() called")
Bug #47679: kceph: kernel does not open session with MDS importing subtree
Bug #47787: mgr/nfs: exercise host-level HA of NFS-Ganesha by killing the process
Bug #47843: mds: stuck in resolve when restarting MDS and reducing max_mds
Bug #47979: qa: test_ephemeral_pin_distribution failure
Bug #48075: qa: AssertionError: 12582912 != 'infinite'
Bug #48125: qa: test_subvolume_snapshot_clone_cancel_in_progress failure
Bug #48148: mds: Server.cc:6764 FAILED assert(in->filelock.can_read(mdr->get_client()))
Bug #48231: qa: test_subvolume_clone_in_progress_snapshot_rm is racy
Bug #48365: qa: ffsb build failure on CentOS 8.2
Bug #48411: tasks.cephfs.test_volumes.TestSubvolumeGroups: RuntimeError: rank all failed to reach desired subtree state
Bug #48422: mds: MDCache.cc:5319 FAILED ceph_assert(rejoin_ack_gather.count(mds->get_nodeid()))
Bug #48439: fsstress failure with mds thrashing: "mds.0.6 Evicting (and blocklisting) client session 4564 (v1:172.21.15.47:0/603539598)"
Bug #48473: fs perf stats command crashes
Bug #48502: ERROR: test_exports_on_mgr_restart (tasks.cephfs.test_nfs.TestNFS)
Bug #48559: qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
Bug #48562: qa: scrub - object missing on disk; some files may be lost
Bug #48673: High memory usage on standby replay MDS
Bug #48678: client: spins on tick interval
Bug #48679: client: items pinned in cache preventing unmount
Bug #48680: mds: scrubbing stuck "scrub active (0 inodes in the stack)"
Bug #48700: client: Client::rmdir() may fail to remove a snapshot
Bug #48760: qa: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
Bug #48766: qa: Test failure: test_evict_client (tasks.cephfs.test_volume_client.TestVolumeClient)
Bug #48771: qa: iogen: workload fails to cause balancing
Bug #48772: qa: pjd: not ok 9, 44, 80
Bug #48773: qa: scrub does not complete
Bug #48805: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
Bug #48812: qa: test_scrub_pause_and_resume_with_abort failure
Bug #48830: pacific: qa: ERROR: test_idempotency
Bug #48831: qa: ERROR: test_snapclient_cache
Bug #48832: qa: fsstress w/ valgrind causes MDS to be blocklisted
Bug #48833: snap_rm hang during osd thrashing
Bug #48835: qa: add ms_mode random choice to kclient tests
Bug #48873: test_cluster_set_reset_user_config: AssertionError: NFS Ganesha cluster deployment failed
Bug #48877: qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
Bug #48886: mds: version MMDSCacheRejoin
Bug #48912: ls -l in cephfs-shell tries to chase symlinks when stat'ing and errors out inappropriately when stat fails
Bug #49074: mds: don't start purging inodes in the middle of recovery
Bug #49121: vstart: volumes/nfs interface complaints cluster does not exists
Bug #49122: vstart: Rados url error
Bug #49132: mds crashed "assert_condition": "state == LOCK_XLOCK || state == LOCK_XLOCKDONE",
Bug #49133: mgr/nfs: Rook does not support restart of services, handle the NotImplementedError exception raised
Bug #49286: fix setting selinux context on file with r/o permissions
Bug #49301: mon/MonCap: `fs authorize` generates unparseable cap for file system name containing '-'
Bug #49307: nautilus: qa: "RuntimeError: expected fetching path of an pending clone to fail"
Bug #49308: nautilus: qa: "AssertionError: expected removing source snapshot of a clone to fail"
Bug #49309: nautilus: qa: "Assertion `cb_done' failed."
Bug #49318: qa: racy session evicted check
Bug #49371: Misleading alarm if all MDS daemons have failed
Bug #49379: client: wake up the front pos waiter
Bug #49391: qa: run fs:verify with tcmalloc
Bug #49419: cephfs-mirror: dangling pointer in PeerReplayer
Bug #49458: qa: switch fs:upgrade from nautilus to octopus
Bug #49459: pybind/cephfs: DT_REG and DT_LNK values are wrong
Bug #49464: qa: rank_freeze prevents failover on some tests
Bug #49465: qa: Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_trim_caps'
Bug #49466: qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
Bug #49469: qa: "AssertionError: expected removing source snapshot of a clone to fail"
Bug #49498: qa: "TypeError: update_attrs() got an unexpected keyword argument 'createfs'"
Bug #49500: qa: "Assertion `cb_done' failed."
Bug #49507: qa: mds removed because trimming for too long with valgrind
Bug #49510: qa: file system deletion not complete because starter fs already destroyed
Bug #49511: qa: "AttributeError: 'NoneType' object has no attribute 'mon_manager'"
Bug #49536: client: Inode.cc: 405: FAILED ceph_assert(_ref >= 0)
Bug #49559: libcephfs: test termination "what(): Too many open files"
Bug #49597: mds: mds goes to 'replay' state after setting 'osd_failsafe_ratio' to less than size of data written.
Bug #49605: pybind/mgr/volumes: deadlock on async job hangs finisher thread
Bug #49607: qa: slow metadata ops during scrubbing
Bug #49617: mds: race of fetching large dirfrag
Bug #49621: qa: ERROR: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)
Bug #49628: mgr/nfs: Support cluster info command for rook
Bug #49662: ceph-dokan improvements for additional mounts
Bug #49684: qa: fs:cephadm mount does not wait for mds to be created
Bug #49711: cephfs-mirror: symbolic links do not get synchronized at times
Bug #49719: mon/MDSMonitor: standby-replay daemons should be removed when the flag is turned off
Bug #49720: mon/MDSMonitor: do not pointlessly kill standbys that are incompatible with current CompatSet
Bug #49725: client: crashed in cct->_conf.get_val() in Client::start_tick_thread()
Bug #49736: cephfs-top: missing keys in the client_metadata
Bug #49822: test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
Bug #49833: MDS should return -ENODATA when asked to remove xattr that doesn't exist
Bug #49837: mgr/pybind/snap_schedule: do not fail when no fs snapshots are available
Bug #49843: qa: fs/snaps/snaptest-upchildrealms.sh failure
Bug #49845: qa: failed umount in test_volumes
Bug #49859: Snapshot schedules are not deleted after enabling/disabling snap module
Bug #49873: ceph_lremovexattr does not return error on file in ceph pacific
Bug #49882: mgr/volumes: setuid and setgid file bits are not retained after a subvolume snapshot restore
Bug #49912: client: dir->dentries inconsistent, both newname and oldname points to same inode, mv complains "are the same file"
Bug #49922: MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
Bug #49928: client: items pinned in cache preventing unmount x2
Bug #49936: ceph-fuse: src/include/buffer.h: 1187: FAILED ceph_assert(_num <= 1024)
Bug #49939: cephfs-mirror: be resilient to recreated snapshot during synchronization
Bug #49972: mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
Bug #49974: cephfs-top: fails with exception "OPENED_FILES"
Bug #50005: cephfs-top: flake8 E501 line too long error
Bug #50010: qa/cephfs: get_key_from_keyfile() return None when key is not found in keyfile
Bug #50016: qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
Bug #50019: qa: mount failure with cephadm "probably no MDS server is up?"
Bug #50020: qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
Bug #50021: qa: snaptest-git-ceph failure during mon thrashing
Bug #50033: mgr/stats: be resilient to offline MDS rank-0
Bug #50035: cephfs-mirror: use sensible mount/shutdown timeouts
Bug #50048: mds: standby-replay only trims cache when it reaches the end of the replay log
Bug #50057: client: openned inodes counter is inconsistent
Bug #50060: client: access(path, X_OK) on non-executable file as root always succeeds
Bug #50090: client: only check pool permissions for regular files
Bug #50091: cephfs-top: exception: addwstr() returned ERR
Bug #50112: MDS stuck at stopping when reducing max_mds
Bug #50178: qa: "TypeError: run() got an unexpected keyword argument 'shell'"
Bug #50215: qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
Bug #50216: qa: "ls: cannot access 'lost+found': No such file or directory"
Bug #50220: qa: dbench workload timeout
Bug #50221: qa: snaptest-git-ceph failure in git diff
Bug #50223: qa: "client.4737 isn't responding to mclientcaps(revoke)"
Bug #50224: qa: test_mirroring_init_failure_with_recovery failure
Bug #50238: mds: ceph.dir.rctime for older snaps is erroneously updated
Bug #50246: mds: failure replaying journal (EMetaBlob)
Bug #50250: mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
Bug #50266: "ceph fs snapshot mirror daemon status" should not use json keys as value
Bug #50279: qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
Bug #50281: qa: untar_snap_rm timeout
Bug #50298: libcephfs: support file descriptor based *at() APIs
Bug #50305: MDS doesn't set fscrypt flag on new inodes with crypto context in xattr buffer
Bug #50387: client: fs/snaps failure
Bug #50389: mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log
Bug #50390: mds: monclient: wait_auth_rotating timed out after 30
Bug #50433: mds: Error ENOSYS: mds.a started profiler
Bug #50442: cephfs-mirror: ignore snapshots on parent directories when synchronizing snapshots
Bug #50447: cephfs-mirror: disallow adding a active peered file system back to its source
Bug #50495: libcephfs: shutdown race fails with status 141
Bug #50523: Mirroring path "remove" does not seem to work
Bug #50528: pacific: qa: fs:thrash: pjd suite not ok 20
Bug #50530: pacific: client: abort after MDS blocklist
Bug #50532: mgr/volumes: hang when removing subvolume when pools are full
Bug #50559: session dump includes completed_requests twice, once as an integer and once as a list
Bug #50561: cephfs-mirror: incrementally transfer snapshots whenever possible
Bug #50622: msg: active_connections regression
Bug #50744: mds: journal recovery thread is possibly asserting with mds_lock not locked
Bug #50783: mgr/nfs: cli is broken as cluster id and binding arguments are optional
Bug #50807: mds: MDSLog::journaler pointer maybe crash with use-after-free
Bug #50808: qa: test_data_scan.TestDataScan.test_pg_files AssertionError: Items in the second set but not the first:
Bug #50819: mon,doc: deprecate min_compat_client
Bug #50821: qa: untar_snap_rm failure during mds thrashing
Bug #50822: qa: testing kernel patch for client metrics causes mds abort
Bug #50823: qa: RuntimeError: timeout waiting for cluster to stabilize
Bug #50824: qa: snaptest-git-ceph bus error
Bug #50825: qa: snaptest-git-ceph hang during mon thrashing v2
Bug #50834: MDS heartbeat timed out during executing MDCache::start_files_to_recover()
Bug #50840: mds: CephFS kclient gets stuck when getattr() on a certain file
Bug #50852: mds: remove fs_name stored in MDSRank
Bug #50858: mgr/nfs: skipping conf file or passing empty file throws traceback
Bug #50867: qa: fs:mirror: reduced data availability
Bug #50868: qa: "kern.log.gz already exists; not overwritten"
Bug #50870: qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"
Bug #50946: mgr/stats: exception ValueError in perf stats
Bug #50976: mds: scrub error on inode 0x1
Bug #50984: qa: test_full multiple the mon_osd_full_ratio twice
Bug #51023: mds: tcmalloc::allocate_full_cpp_throw_oom(unsigned long)+0xf3)
Bug #51060: qa: test_ephemeral_pin_distribution failure
Bug #51067: mds: segfault printing unknown metric
Bug #51069: mds: mkdir on ephemerally pinned directory sometimes blocked on journal flush
Bug #51077: MDSMonitor: crash when attempting to mount cephfs
Bug #51113: mds: unknown metric type is always -1
Bug #51146: qa: scrub code does not join scrubopts with comma
Bug #51182: pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
Bug #51183: qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
Bug #51184: qa: fs:bugs does not specify distro
Bug #51197: qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
Bug #51204: cephfs-mirror: false warning of "keyring not found" seen in cephfs-mirror service status is misleading
Bug #51228: qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
Bug #51229: qa: test_multi_snap_schedule list difference failure
Bug #51250: qa: fs:upgrade uses teuthology default distro
Bug #51256: pybind/mgr/volumes: purge queue seems to block operating on cephfs connection required by dispatch thread
Bug #51271: mgr/volumes: use a dedicated libcephfs handle for subvolume API calls
Bug #51280: mds: "FAILED ceph_assert(r == 0 || r == -2)"
Bug #51281: qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
Bug #51318: cephfs-mirror: do not terminate on SIGHUP
Bug #51357: osd: sent kickoff request to MDS and then stuck for 15 minutes until MDS crash
Bug #51410: kclient: fails to finish reconnect during MDS thrashing (testing branch)
Bug #51417: qa: test_ls_H_prints_human_readable_file_size failure
Bug #51476: src/pybind/mgr/mirroring/fs/snapshot_mirror.py: do not assume a cephfs-mirror daemon is always running
Bug #51495: client: handle empty path strings
Bug #51589: mds: crash when journaling during replay
Bug #51600: mds: META_POP_READDIR, META_POP_FETCH, META_POP_STORE, and cache_hit_rate are not updated
Bug #51630: mgr/snap_schedule: don't throw traceback on non-existent fs
Bug #51673: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4
Bug #51705: qa: tasks.cephfs.fuse_mount:mount command failed
Bug #51707: pybind/mgr/volumes: Cloner threads stuck in loop trying to clone the stale ones
Bug #51722: mds: slow performance on parallel rm operations for multiple kclients
Bug #51756: crash: std::_Rb_tree_insert_and_rebalance(bool, std::_Rb_tree_node_base*, std::_Rb_tree_node_base*, std::_Rb_tree_node_base&)
Bug #51757: crash: /lib/x86_64-linux-gnu/libpthread.so.0(
Bug #51789: mgr/nfs: allow deployment of multiple nfs-ganesha daemons on single host
Bug #51795: mgr/nfs: update pool name to '.nfs' in vstart.sh
Bug #51800: mgr/nfs: create rgw export with vstart
Bug #51805: pybind/mgr/volumes: The cancelled clone still goes ahead and complete the clone
Bug #51870: pybind/mgr/volumes: first subvolume permissions set perms on /volumes and /volumes/group
Bug #51905: qa: "error reading sessionmap 'mds1_sessionmap'"
Bug #51956: mds: switch to use ceph_assert() instead of assert()
Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
Bug #51975: pybind/mgr/stats: KeyError
Bug #51989: cephfs-mirror: cephfs-mirror daemon status for a particular FS is not showing
Bug #52062: cephfs-mirror: terminating a mirror daemon can cause a crash at times
Bug #52094: Tried out Quincy: All MDS Standby
Bug #52123: mds sends cap updates with btime zeroed out
Bug #52382: mds,client: add flag to MClientSession for reject reason
Bug #52430: mds: fast async create client mount breaks racy test
Bug #52437: mds: InoTable::replay_release_ids abort via test_inotable_sync
Bug #52438: qa: ffsb timeout
Bug #52439: qa: acls does not compile on centos stream
Bug #52487: pacific: qa: Test failure: test_deep_split (tasks.cephfs.test_fragment.TestFragmentation)
Bug #52508: nfs-ganesha crash when calls libcephfs, it triggers __ceph_assert_fail
Bug #52531: Quotas smaller than 4MB on subdirs do not have any effect
Bug #52565: MDSMonitor: handle damaged state from standby-replay
Bug #52572: "cluster [WRN] 1 slow requests" in smoke pacific
Bug #52581: Dangling fs snapshots on data pool after change of directory layout
Bug #52606: qa: test_dirfrag_limit
Bug #52607: qa: "mon.a (mon.0) 1022 : cluster [WRN] Health check failed: Reduced data availability: 4 pgs peering (PG_AVAILABILITY)"
Bug #52625: qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
Bug #52626: mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
Bug #52642: snap scheduler: cephfs snapshot schedule status doesn't list the snapshot count properly
Bug #52677: qa: test_simple failure
Bug #52688: mds: possibly corrupted entry in journal (causes replay failure with file system marked as damaged)
Bug #52820: Ceph monitor crash after upgrade from ceph 15.2.14 to 16.2.6
Bug #52821: qa/xfstest-dev.py: update to include centos stream
Bug #52822: qa: failed pacific install on fs:upgrade
Bug #52874: Monitor might crash after upgrade from ceph to 16.2.6
Bug #52887: qa: Test failure: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
Bug #52949: RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}
Bug #52975: MDSMonitor: no active MDS after cluster deployment
Bug #52994: client: do not defer releasing caps when revoking
Bug #52995: qa: test_standby_count_wanted failure
Bug #52996: qa: test_perf_counters via test_openfiletable
Bug #53043: qa/vstart_runner: tests crashes due incompatiblity
Bug #53045: stat->fsid is not unique among filesystems exported by the ceph server
Bug #53074: pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
Bug #53082: ceph-fuse: segmenetation fault in Client::handle_mds_map
Bug #53126: In the 5.4.0 kernel, the mount of ceph-fuse fails
Bug #53150: pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
Bug #53155: MDSMonitor: assertion during upgrade to v16.2.5+
Bug #53192: High cephfs MDS latency and CPU load with snapshots and unlink operations
Bug #53194: mds: opening connection to up:replay/up:creating daemon causes message drop
Bug #53214: qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
Bug #53216: qa: "RuntimeError: value of attributes should be either str or None. client_id"
Bug #53246: rhel 8.4 and centos stream unable to install cephfs-java
Bug #53293: qa: v16.2.4 mds crash caused by centos stream kernel
Bug #53436: mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
Bug #53487: qa: mount error 22 = Invalid argument
Bug #53509: quota support for subvolumegroup
Bug #53520: mds: put both fair mutex MDLog::submit_mutex and mds_lock to test under heavy load
Bug #53521: mds: heartbeat timeout by _prefetch_dirfrags during up:rejoin
Bug #53542: Ceph Metadata Pool disk throughput usage increasing
Bug #53573: qa: test new clients against older Ceph clusters
Bug #53611: mds,client: can not identify pool id if pool name is positive integer when set layout.pool
Bug #53615: qa: upgrade test fails with "timeout expired in wait_until_healthy"
Bug #53619: mds: fails to reintegrate strays if destdn's directory is full (ENOSPC)
Bug #53623: mds: LogSegment will only save one ESubtreeMap event if the ESubtreeMap event size is large enough.
Bug #53641: mds: recursive scrub does not trigger stray reintegration
Bug #53724: mds: stray directories are not purged when all past parents are clear
Bug #53726: mds: crash when `ceph tell mds.0 dump tree ''`
Bug #53741: crash just after MDS become active
Bug #53750: mds: FAILED ceph_assert(mut->is_wrlocked(&pin->filelock))
Bug #53753: mds: crash (assert hit) when merging dirfrags
Bug #53765: mount helper mangles the new syntax device string by qualifying the name
Bug #53805: mds: seg fault in expire_recursive
Bug #53811: standby-replay mds is removed from MDSMap unexpectedly
Bug #53857: qa: fs:upgrade test fails mds count check
Bug #53859: qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
Bug #53862: mds: remove the duplicated or incorrect respond when the pool is full
Bug #53911: client: client session state stuck in opening and hang all the time
Fix #44171: pybind/cephfs: audit for unimplemented bindings for libcephfs
Fix #46885: pybind/mgr/mds_autoscaler: add test for MDS scaling with cephadm
Fix #47931: Directory quota optimization
Fix #48027: qa: add cephadm tests for CephFS in QA
Fix #48683: mds/MDSMap: print each flag value in MDSMap::dump
Fix #48802: mds: define CephFS errors that replace standard errno values
Fix #49341: qa: add async dirops testing
Fix #50045: qa: test standby_replay in workloads
Fix #51177: pybind/mgr/volumes: investigate moving calls which may block on libcephfs into another thread
Fix #51276: mds: avoid journaling overhead for setxattr("ceph.dir.subvolume") for no-op case
Fix #51857: client: make sure only to update dir dist from auth mds
Fix #52068: qa: add testing for "ms_mode" mount option
Fix #52104: qa: add testing for "copyfrom" mount option
Fix #52386: client: fix dump mds twice
Fix #52591: mds: mds_oft_prefetch_dirfrags = false is not qa tested
Fix #52824: qa: skip internal metadata directory when scanning ceph debugfs directory
Fix #52916: mds,client: formally remove inline data support
Feature #1276: client: expose mds partition via virtual xattrs
Feature #6373: kcephfs: qa: test fscache
Feature #7320: qa: thrash directory fragmentation
Feature #10679: Add support for the chattr +i command (immutable file)
Feature #16745: mon: prevent allocating snapids allocated for CephFS
Feature #17434: qa: background rsync task for FS workunits
Feature #17835: mds: enable killpoint tests for MDS-MDS subtree export
Feature #18154: qa: enable mds thrash exports tests
Feature #24462: MDSMonitor: check for mixed version MDS
Feature #24725: mds: propagate rstats from the leaf dirs up to the specified diretory
Feature #36663: mds: adjust cache memory limit automatically via target that tracks RSS
Feature #40986: cephfs qos: implement cephfs qos base on tokenbucket algorighm
Feature #41220: mgr/volumes: add test case for blacklisted clients
Feature #41566: mds: support rolling upgrades
Feature #42873: mgr/volumes: add GetCapacity API/command for `fs volume`
Feature #42874: mgr/volumes: add ValidateVolumeCapabilities API/command for `fs volume`
Feature #42875: mgr/volumes: user credentials for ListVolumes, GetCapacity and ValidateVolumeCapabilities
Feature #44279: client: provide asok commands to getattr an inode with desired caps
Feature #44455: cephfs: add recursive unlink RPC
Feature #46166: mds: store symlink target as xattr in data pool inode for disaster recovery
Feature #46680: pybind/mgr/mds_autoscaler: deploy larger or smaller (RAM) MDS in response to MDS load
Feature #46746: mgr/nfs: Add interface to accept yaml file for creating clusters
Feature #46865: client: add metric for number of pinned capabilities
Feature #46866: kceph: add metric for number of pinned capabilities
Feature #47172: mgr/nfs: Add support for RGW export
Feature #47264: "fs authorize" subcommand should work for multiple FSs too
Feature #47490: Integration of dashboard with volume/nfs module
Feature #47587: pybind/mgr/nfs: add Rook support
Feature #48394: mds: defer storing the OpenFileTable journal
Feature #48404: client: add a ceph.caps vxattr
Feature #48509: mds: dmClock based subvolume QoS scheduler
Feature #48577: pybind/mgr/volumes: support snapshots on subvolumegroups
Feature #48619: client: track (and forward to MDS) average read/write/metadata latency
Feature #48682: MDSMonitor: add command to print fs flags
Feature #48704: mds: recall caps proportional to the number issued
Feature #48736: qa: enable debug loglevel kclient test suits
Feature #48791: mds: support file block size
Feature #48943: cephfs-mirror: display cephfs mirror instances in `ceph status` command
Feature #48944: pybind/mirroring: add subvolume/subvolumegroup interfaces for snapshot mirroring
Feature #48953: cephfs-mirror: suppport snapshot mirror of subdirectories and/or ancestors of a mirrored directory
Feature #48991: client: allow looking up snapped inodes by inode number+snapid tuple
Feature #49040: cephfs-mirror: test mirror daemon with valgrind
Feature #49340: libcephfssqlite: library for sqlite interface to CephFS
Feature #49619: cephfs-mirror: add mirror peers via bootstrapping
Feature #49623: Windows CephFS support - ceph-dokan
Feature #49811: mds: collect I/O sizes from client for cephfs-top
Feature #49942: cephfs-mirror: enable running in HA
Feature #50235: allow cephfs-shell to mount named filesystems
Feature #50372: test: Implement cephfs-mirror trasher test for HA active/active
Feature #50448: cephfs-mirror: easy repeering
Feature #50449: mgr/nfs: Add unit tests for conf parser and others
Feature #50470: cephfs-top: multiple file system support
Feature #50581: cephfs-mirror: allow mirror daemon to connect to local/primary cluster via monitor address
Feature #51062: mds,client: suppport getvxattr RPC
Feature #51162: mgr/volumes: `fs volume rename` command
Feature #51265: mgr/nfs: add interface to create exports from json file
Feature #51332: qa: increase metadata replication to exercise lock/witness code paths more
Feature #51333: qa: use cephadm to provision cephfs for fs:workloads
Feature #51340: mon/MDSMonitor: allow creating a file system with a specific fscid
Feature #51416: kclient: add debugging for mds failover events
Feature #51434: pybind/mgr/volumes: add basic introspection
Feature #51518: client: flush the mdlog in unsafe requests' relevant and auth MDSes only
Feature #51613: mgr/nfs: add qa tests for rgw
Feature #51615: mgr/nfs: add interface to update nfs cluster
Feature #51716: Add option in `fs new` command to start rank 0 in failed state
Feature #51787: mgr/nfs: deploy nfs-ganesha daemons on non-default port
Feature #52491: mds: add max_mds_entries_per_dir config option
Feature #52720: mds: mds_bal_rank_mask config option
Feature #52725: qa: mds_dir_max_entries workunit test case
Feature #52942: mgr/nfs: add 'nfs cluster config get'
Feature #53228: cephfs/quota: Set a limit on minimum quota setting
Feature #53310: Add admin socket command to trim caps
Feature #53730: ceph-fuse: suppor "entry_timeout" and "attr_timeout" options for improve performance
Feature #53903: mount: add option to support fake mounts
Feature #55126: mds: add perf counter to record slow replies
Cleanup #26960: mds: avoid modification of const Messages
Cleanup #37931: MDSMonitor: rename `mds repaired` to `fs repaired`
Cleanup #46802: mds: do not use asserts for RADOS failures
Cleanup #50080: mgr/nfs: move nfs code out of volumes plugin
Cleanup #50149: client: always register callbacks before mount()
Cleanup #50450: mgr/nfs: Simplify the parsing of Ganesha Conf using existing pseudo-parsers
Cleanup #50816: mgr/nfs: add nfs to mypy
Cleanup #51379: mgr/volumes: add flake8 test
Cleanup #51380: mgr/volumes/module.py: fix various flake8 issues
Cleanup #51381: mgr/volumes/fs/async_job.py: fix various flake8 issues
Cleanup #51382: mgr/volumes/fs/async_cloner.py: fix various flake8 issues
Cleanup #51383: mgr/volumes/fs/exception.py: fix various flake8 issues
Cleanup #51384: mgr/volumes/fs/vol_spec.py: fix various flake8 issues
Cleanup #51385: mgr/volumes/fs/fs_util.py: add extra blank line
Cleanup #51386: mgr/volumes/fs/volume.py: fix various flake8 issues
Cleanup #51387: mgr/volumes/fs/purge_queue.py: add extra blank line
Cleanup #51388: mgr/volumes/fs/operations/index.py: add extra blank line
Cleanup #51389: mgr/volumes/fs/operations/rankevicter.py: fix various flake8 issues
Cleanup #51390: mgr/volumes/fs/operations/access.py: fix various flake8 issues
Cleanup #51391: mgr/volumes/fs/operations/resolver.py: add extra blank line
Cleanup #51392: mgr/volumes/fs/operations/snapshot_util.py: add extra blank line
Cleanup #51393: mgr/volumes/fs/operations/group.py: add extra blank line
Cleanup #51394: mgr/volumes/fs/operations/pin_util.py: fix various flake8 issues
Cleanup #51395: mgr/volumes/fs/operations/lock.py: fix various flake8 issues
Cleanup #51396: mgr/volumes/fs/operations/clone_index.py: fix various flake8 issues
Cleanup #51397: mgr/volumes/fs/operations/volume.py: fix various flake8 issues
Cleanup #51398: mgr/volumes/fs/operations/subvolume.py: fix various flake8 issues
Cleanup #51399: mgr/volumes/fs/operations/template.py: fix various flake8 issues
Cleanup #51400: mgr/volumes/fs/operations/trash.py: fix various flake8 issues
Cleanup #51401: mgr/volumes/fs/operations/versions/metadata_manager.py: fix various flake8 issues
Cleanup #51402: mgr/volumes/fs/operations/versions/subvolume_base.py: fix various flake8 issues
Cleanup #51403: mgr/volumes/fs/operations/versions/auth_metadata.py: fix various flake8 issues
Cleanup #51405: mgr/volumes/fs/operations/versions/subvolume_v2.py: fix various flake8 issues
Cleanup #51406: mgr/volumes/fs/operations/versions/op_sm.py: fix various flake8 issues
Cleanup #51407: mgr/volumes/fs/operations/versions/subvolume_attrs.py: fix various flake8 issues
Cleanup #51543: mds: improve debugging for mksnap denial
Cleanup #51614: mgr/nfs: remove dashboard test remnant from unit tests
Cleanup #51651: mgr/volumes: replace mon_command with check_mon_command
Cleanup #52274: mgr/nfs: add more log messages
Cleanup #52723: mds: improve mds_bal_fragment_size_max config option
Documentation #43034: doc: document large omap warning for directory fragmentation
Documentation #45573: doc: client: client_reconnect_stale=1
Documentation #47449: doc: complete ec pool configuration section with an example
Documentation #48017: snap-schedule doc
Documentation #48914: mgr/nfs: Update about user config
Documentation #49372: doc: broken links multimds and kcephfs
Documentation #49763: doc: Document mds cap acquisition readdir throttle
Documentation #49921: mgr/nfs: Update about cephadm single nfs-ganesha daemon per host limitation
Documentation #50008: mgr/nfs: Add troubleshooting section
Documentation #50161: mgr/nfs: validation error on creating custom export
Documentation #50229: cephfs-mirror: update docs with `fs snapshot mirror daemon status` interface
Documentation #50865: doc: move mds state diagram .dot into rst
Documentation #50904: mgr/nfs: add nfs-ganesha config hierarchy
Documentation #51187: doc: pacific updates
Documentation #51428: mgr/nfs: move nfs doc from cephfs to mgr
Documentation #51459: doc: document what kinds of damage forward scrub can 
repair Documentation #51683: mgr/nfs: add note about creating exports for nfs using vstart to developer guide Documentation #53004: Improve API documentation for struct ceph_client_callback_args Documentation #53236: doc: ephemeral pinning with subvolumegroups
Ceph - v18.0.0 R 25% 160 issues (40 closed — 120 open)
Related issues
Bug #23724: qa: broad snapshot functionality testing across clients
Bug #24894: client: allow overwrites to files with size greater than the max_file_size config
Bug #46438: mds: add vxattr for querying inherited layout
Bug #51267: CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
Bug #51278: mds: "FAILED ceph_assert(!segments.empty())"
Bug #52982: client: Inode::hold_caps_until should be a time from a monotonic clock
Bug #53504: client: infinite loop "got ESTALE" after mds recovery
Bug #53597: mds: FAILED ceph_assert(dir->get_projected_version() == dir->get_version())
Bug #53979: mds: defer prefetching the dirfrags to speed up MDS rejoin
Bug #53996: qa: update fs:upgrade tasks to upgrade from pacific instead of octopus, or quincy instead of pacific
Bug #54017: Problem with ceph fs snapshot mirror and read-only folders
Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
Bug #54049: ceph-fuse: If nonroot user runs ceph-fuse mount on then path is not expected to add in /proc/self/mounts and command should return failure
Bug #54052: mgr/snap-schedule: scheduled snapshots are not created after ceph-mgr restart
Bug #54066: mgr/volumes: uid/gid of the clone is incorrect
Bug #54081: mon/MDSMonitor: sanity assert when inline data turned on in MDSMap from v16.2.4 -> v16.2.[567]
Bug #54106: kclient: hang during workunit cleanup
Bug #54107: kclient: hang during umount
Bug #54108: qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
Bug #54111: data pool attached to a file system can be attached to another file system
Bug #54271: mds/OpenFileTable.cc: 777: FAILED ceph_assert(omap_num_objs == num_objs)
Bug #54345: mds: try to reset heartbeat when fetching or committing.
Bug #54384: mds: crash due to seemingly unrecoverable metadata error
Bug #54459: fs:upgrade fails with "hit max job timeout"
Bug #54460: snaptest-multiple-capsnaps.sh test failure
Bug #54461: ffsb.sh test failure
Bug #54463: mds: flush mdlog if locked and still has wanted caps not satisfied
Bug #54501: libcephfs: client needs to update the mtime and change attr when snaps are created and deleted
Bug #54557: scrub repair does not clear earlier damage health status
Bug #54560: snap_schedule: avoid throwing traceback for bad or missing arguments
Bug #54606: check-counter task runs till max job timeout
Bug #54625: Issue removing subvolume with retained snapshots - Possible quincy regression?
Bug #54701: crash: void Server::set_trace_dist(ceph::ref_t<MClientReply>&, CInode*, CDentry*, MDRequestRef&): assert(dnl->get_inode() == in)
Bug #54760: crash: void CDir::try_remove_dentries_for_stray(): assert(dn->get_linkage()->is_null())
Bug #54971: Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
Bug #54976: mds: Test failure: test_filelock_eviction (tasks.cephfs.test_client_recovery.TestClientRecovery)
Bug #55110: mount.ceph: mount helper incorrectly passes `ms_mode' mount option to older kernel
Bug #55112: cephfs-shell: saving files doesn't work as expected
Bug #55134: ceph pacific fails to perform fs/mirror test
Bug #55148: snap_schedule: remove subvolume(-group) interfaces
Bug #55165: client: validate pool against pool ids as well as pool names
Bug #55170: mds: crash during rejoin (CDir::fetch_keys)
Bug #55173: qa: missing dbench binary?
Bug #55196: mgr/stats: perf stats command doesn't have filter option for fs names.
Bug #55216: cephfs-shell: creates directories in local file system even if file not found
Bug #55217: pybind/mgr/volumes: Clone operation hangs
Bug #55234: snap_schedule: replace .snap with the client configured snap dir name
Bug #55236: qa: fs/snaps tests fails with "hit max job timeout"
Bug #55240: mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
Bug #55242: cephfs-shell: put command should accept both path mandatorily and validate local_path
Bug #55313: Unexpected file access behavior using ceph-fuse
Bug #55331: pjd failure (caused by xattr's value not consistent between auth MDS and replicate MDSes)
Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
Bug #55464: cephfs: mds/client error when client stale reconnect
Bug #55516: qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
Bug #55537: mds: crash during fs:upgrade test
Bug #55538: Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
Bug #55583: Intermittent ParsingError failure in mgr/volumes module during "clone cancel"
Bug #55620: ceph pacific fails to perform fs/multifs test
Bug #55710: cephfs-shell: exit code unset when command has missing argument
Bug #55725: MDS allows a (kernel) client to exceed the xattrs key/value limits
Bug #55759: mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
Bug #55762: mgr/volumes: Handle internal metadata directories under '/volumes' properly.
Bug #55778: client: choose auth MDS for getxattr with the Xs caps
Bug #55779: fuse client losing connection to mds
Bug #55807: qa failure: workload iogen failed
Bug #55822: mgr/volumes: Remove incorrect 'size' in the output of 'snapshot info' command
Bug #55824: ceph-fuse[88614]: ceph mount failed with (65536) Unknown error 65536
Bug #55842: Upgrading to 16.2.9 with 9M strays files causes MDS OOM
Bug #55858: Pacific 16.2.7 MDS constantly crashing
Bug #55861: Test failure: test_client_metrics_and_metadata (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
Bug #55897: test_nfs: update of export's access type should not trigger NFS service restart
Bug #55971: LibRadosMiscConnectFailure.ConnectFailure test failure
Bug #55980: mds,qa: some balancer debug messages (<=5) not printed when debug_mds is >=5
Bug #56003: client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
Bug #56010: xfstests-dev generic/444 test failed
Bug #56011: fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
Bug #56012: mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
Bug #56063: Snapshot retention config lost after mgr restart
Bug #56067: Cephfs data loss with root_squash enabled
Bug #56116: mds: handle deferred client request core when mds reboot
Bug #56282: crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state() == LOCK_PRE_SCAN)
Bug #56384: ceph/test.sh: check_response erasure-code didn't find erasure-code in output
Bug #56446: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
Fix #54317: qa: add testing in fs:workload for different kinds of subvolumes
Feature #41824: mds: aggregate subtree authorities for display in `fs top`
Feature #50150: qa: begin grepping kernel logs for kclient warnings/failures to fail a test
Feature #54237: pybind/cephfs: Add mapping for Errno 13: Permission Denied and adding path (in msg) while raising exception from opendir() in cephfs.pyx
Feature #54374: mgr/snap_schedule: include timezone information in scheduled snapshots
Feature #54472: mgr/volumes: allow users to add metadata (key-value pairs) to subvolumes
Feature #55041: mgr/volumes: display in-progress clones for a snapshot
Feature #55121: cephfs-top: new options to limit and order-by
Feature #55214: mds: add asok/tell command to clear stale omap entries
Feature #55215: mds: fragment directory snapshots
Feature #55401: mgr/volumes: allow users to add metadata (key-value pairs) for subvolume snapshot
Feature #55414: mds: asok interface to clean up permanently damaged inodes
Feature #55463: cephfs-top: allow users to choose sorting order
Feature #55470: qa: postgresql test suite workunit
Feature #55715: pybind/mgr/cephadm/upgrade: allow upgrades without reducing max_mds
Feature #55821: pybind/mgr/volumes: interface to check the presence of subvolumegroups/subvolumes.
Feature #55940: quota: accept values in human readable format as well
Feature #56058: mds/MDBalancer: add an arg to limit depth when dump loads for dirfrags
Feature #56140: cephfs: tooling to identify inode (metadata) corruption
Feature #56442: mds: build asok command to dump stray files and associated caps
Cleanup #54362: client: do not release the global snaprealm until unmounting
Documentation #54551: docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-mds cannot work