Ceph - v17.0.0 Quincy
323 issues (81 closed, 242 open), 28% complete

Related issues

Bug #20597: mds: tree exports should be reported at a higher debug level
Bug #36273: qa: add background task for some units which drops MDS cache
Bug #36389: untar encounters unexpected EPERM on kclient/multimds cluster with thrashing
Bug #36593: qa: quota failure caused by clients stepping on each other
Bug #36673: /build/ceph-13.2.1/src/mds/CDir.cc: 1504: FAILED assert(is_auth())
Bug #39651: qa: test_kill_mdstable fails unexpectedly
Bug #40159: mds: openfiletable prefetching large amounts of inodes leads to mds start failure
Bug #40197: The command 'node ls' sometimes outputs some incorrect information about mds.
Bug #41327: mds: dirty rstat lost during scatter-gather process
Bug #42688: Standard CephFS caps do not allow certain dot files to be written
Bug #43393: qa: add testing for cephfs-shell on CentOS 8
Bug #43748: client: improve wanted handling so we don't request unused caps (active-standby exclusive file lock case)
Bug #43902: qa: mon_thrash: timeout "ceph quorum_status"
Bug #43960: MDS: incorrectly issues Fc for new opens when there is an existing writer
Bug #44383: qa: MDS_CLIENT_LATE_RELEASE during MDS thrashing
Bug #44384: qa: FAIL: test_evicted_caps (tasks.cephfs.test_client_recovery.TestClientRecovery)
Bug #44988: client: track dirty inodes in a per-session list for effective cap flushing
Bug #45320: client: Other UID doesn't have write permission when the file is marked with SUID or SGID
Bug #45434: qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
Bug #45538: qa: Fix string/byte comparison mismatch in test_exports
Bug #45663: luminous to nautilus upgrade
Bug #45664: libcephfs: FAILED LibCephFS.LazyIOMultipleWritersOneReader
Bug #45834: cephadm: "fs volume create cephfs" overwrites existing placement specification
Bug #46022: qa: test_strays num_purge_ops violates threshold 34/16
Bug #46218: mds: Add inter MDS messages to the corpus and enforce versioning
Bug #46357: qa: Error downloading packages
Bug #46403: mds: "elist.h: 91: FAILED ceph_assert(_head.empty())"
Bug #46438: mds: add vxattr for querying inherited layout
Bug #46504: pybind/mgr/volumes: self.assertTrue(check < timo) fails
Bug #46507: qa: test_data_scan: "show inode" returns ENOENT
Bug #46535: mds: Importer MDS failing right after EImportStart event is journaled, causes incorrect blacklisting of client session
Bug #46609: mds: CDir.cc: 956: FAILED ceph_assert(auth_pins == 0)
Bug #46648: mds: cannot handle hundreds+ of subtrees
Bug #46747: mds: make rstats in CInode::old_inodes stable
Bug #46809: mds: purge orphan objects created by lost async file creation
Bug #46887: kceph: testing branch: hang in workunit by 1/2 clients during tree export
Bug #46902: mds: CInode::maybe_export_pin is broken
Bug #47054: mgr/volumes: Handle potential errors in readdir cephfs python binding
Bug #47236: Getting "Cannot send after transport endpoint shutdown" after changing subvolume access mode
Bug #47276: MDSMonitor: add command to rename file systems
Bug #47292: cephfs-shell: test_df_for_valid_file failure
Bug #47389: ceph fs volume create fails to create pool
Bug #47678: mgr: include/interval_set.h: 466: ceph_abort_msg("abort() called")
Bug #47679: kceph: kernel does not open session with MDS importing subtree
Bug #47787: mgr/nfs: exercise host-level HA of NFS-Ganesha by killing the process
Bug #47979: qa: test_ephemeral_pin_distribution failure
Bug #48075: qa: AssertionError: 12582912 != 'infinite'
Bug #48125: qa: test_subvolume_snapshot_clone_cancel_in_progress failure
Bug #48148: mds: Server.cc:6764 FAILED assert(in->filelock.can_read(mdr->get_client()))
Bug #48231: qa: test_subvolume_clone_in_progress_snapshot_rm is racy
Bug #48365: qa: ffsb build failure on CentOS 8.2
Bug #48411: tasks.cephfs.test_volumes.TestSubvolumeGroups: RuntimeError: rank all failed to reach desired subtree state
Bug #48422: mds: MDCache.cc:5319 FAILED ceph_assert(rejoin_ack_gather.count(mds->get_nodeid()))
Bug #48439: fsstress failure with mds thrashing: "mds.0.6 Evicting (and blocklisting) client session 4564 (v1:172.21.15.47:0/603539598)"
Bug #48502: ERROR: test_exports_on_mgr_restart (tasks.cephfs.test_nfs.TestNFS)
Bug #48559: qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
Bug #48562: qa: scrub - object missing on disk; some files may be lost
Bug #48640: qa: snapshot mismatch during mds thrashing
Bug #48678: client: spins on tick interval
Bug #48679: client: items pinned in cache preventing unmount
Bug #48680: mds: scrubbing stuck "scrub active (0 inodes in the stack)"
Bug #48700: client: Client::rmdir() may fail to remove a snapshot
Bug #48760: qa: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
Bug #48766: qa: Test failure: test_evict_client (tasks.cephfs.test_volume_client.TestVolumeClient)
Bug #48771: qa: iogen: workload fails to cause balancing
Bug #48772: qa: pjd: not ok 9, 44, 80
Bug #48773: qa: scrub does not complete
Bug #48805: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
Bug #48812: qa: test_scrub_pause_and_resume_with_abort failure
Bug #48830: pacific: qa: ERROR: test_idempotency
Bug #48831: qa: ERROR: test_snapclient_cache
Bug #48832: qa: fsstress w/ valgrind causes MDS to be blocklisted
Bug #48833: snap_rm hang during osd thrashing
Bug #48835: qa: add ms_mode random choice to kclient tests
Bug #48873: test_cluster_set_reset_user_config: AssertionError: NFS Ganesha cluster deployment failed
Bug #48877: qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
Bug #48886: mds: version MMDSCacheRejoin
Bug #48912: ls -l in cephfs-shell tries to chase symlinks when stat'ing and errors out inappropriately when stat fails
Bug #49074: mds: don't start purging inodes in the middle of recovery
Bug #49121: vstart: volumes/nfs interface complains cluster does not exist
Bug #49122: vstart: Rados url error
Bug #49133: mgr/nfs: Rook does not support restart of services, handle the NotImplementedError exception raised
Bug #49286: fix setting selinux context on file with r/o permissions
Bug #49301: mon/MonCap: `fs authorize` generates unparseable cap for file system name containing '-'
Bug #49307: nautilus: qa: "RuntimeError: expected fetching path of an pending clone to fail"
Bug #49308: nautilus: qa: "AssertionError: expected removing source snapshot of a clone to fail"
Bug #49309: nautilus: qa: "Assertion `cb_done' failed."
Bug #49318: qa: racy session evicted check
Bug #49371: Misleading alarm if all MDS daemons have failed
Bug #49379: client: wake up the front pos waiter
Bug #49391: qa: run fs:verify with tcmalloc
Bug #49419: cephfs-mirror: dangling pointer in PeerReplayer
Bug #49458: qa: switch fs:upgrade from nautilus to octopus
Bug #49459: pybind/cephfs: DT_REG and DT_LNK values are wrong
Bug #49464: qa: rank_freeze prevents failover on some tests
Bug #49465: qa: Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_trim_caps'
Bug #49466: qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
Bug #49469: qa: "AssertionError: expected removing source snapshot of a clone to fail"
Bug #49498: qa: "TypeError: update_attrs() got an unexpected keyword argument 'createfs'"
Bug #49500: qa: "Assertion `cb_done' failed."
Bug #49507: qa: mds removed because trimming for too long with valgrind
Bug #49510: qa: file system deletion not complete because starter fs already destroyed
Bug #49511: qa: "AttributeError: 'NoneType' object has no attribute 'mon_manager'"
Bug #49536: client: Inode.cc: 405: FAILED ceph_assert(_ref >= 0)
Bug #49559: libcephfs: test termination "what(): Too many open files"
Bug #49597: mds: mds goes to 'replay' state after setting 'osd_failsafe_ratio' to less than size of data written
Bug #49605: pybind/mgr/volumes: deadlock on async job hangs finisher thread
Bug #49607: qa: slow metadata ops during scrubbing
Bug #49617: mds: race of fetching large dirfrag
Bug #49621: qa: ERROR: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)
Bug #49628: mgr/nfs: Support cluster info command for rook
Bug #49662: ceph-dokan improvements for additional mounts
Bug #49684: qa: fs:cephadm mount does not wait for mds to be created
Bug #49711: cephfs-mirror: symbolic links do not get synchronized at times
Bug #49719: mon/MDSMonitor: standby-replay daemons should be removed when the flag is turned off
Bug #49720: mon/MDSMonitor: do not pointlessly kill standbys that are incompatible with current CompatSet
Bug #49725: client: crashed in cct->_conf.get_val() in Client::start_tick_thread()
Bug #49736: cephfs-top: missing keys in the client_metadata
Bug #49822: test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
Bug #49833: MDS should return -ENODATA when asked to remove xattr that doesn't exist
Bug #49837: mgr/pybind/snap_schedule: do not fail when no fs snapshots are available
Bug #49843: qa: fs/snaps/snaptest-upchildrealms.sh failure
Bug #49845: qa: failed umount in test_volumes
Bug #49859: Snapshot schedules are not deleted after enabling/disabling snap module
Bug #49873: ceph_lremovexattr does not return error on file in ceph pacific
Bug #49882: mgr/volumes: setuid and setgid file bits are not retained after a subvolume snapshot restore
Bug #49912: client: dir->dentries inconsistent, both newname and oldname point to same inode, mv complains "are the same file"
Bug #49922: MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
Bug #49928: client: items pinned in cache preventing unmount x2
Bug #49936: ceph-fuse: src/include/buffer.h: 1187: FAILED ceph_assert(_num <= 1024)
Bug #49939: cephfs-mirror: be resilient to recreated snapshot during synchronization
Bug #49972: mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
Bug #49974: cephfs-top: fails with exception "OPENED_FILES"
Bug #50005: cephfs-top: flake8 E501 line too long error
Bug #50010: qa/cephfs: get_key_from_keyfile() returns None when key is not found in keyfile
Bug #50016: qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
Bug #50019: qa: mount failure with cephadm "probably no MDS server is up?"
Bug #50020: qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
Bug #50021: qa: snaptest-git-ceph failure during mon thrashing
Bug #50033: mgr/stats: be resilient to offline MDS rank-0
Bug #50035: cephfs-mirror: use sensible mount/shutdown timeouts
Bug #50048: mds: standby-replay only trims cache when it reaches the end of the replay log
Bug #50057: client: opened inodes counter is inconsistent
Bug #50060: client: access(path, X_OK) on non-executable file as root always succeeds
Bug #50090: client: only check pool permissions for regular files
Bug #50112: MDS stuck at stopping when reducing max_mds
Bug #50178: qa: "TypeError: run() got an unexpected keyword argument 'shell'"
Bug #50215: qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
Bug #50216: qa: "ls: cannot access 'lost+found': No such file or directory"
Bug #50220: qa: dbench workload timeout
Bug #50221: qa: snaptest-git-ceph failure in git diff
Bug #50223: qa: "client.4737 isn't responding to mclientcaps(revoke)"
Bug #50224: qa: test_mirroring_init_failure_with_recovery failure
Bug #50238: mds: ceph.dir.rctime for older snaps is erroneously updated
Bug #50246: mds: failure replaying journal (EMetaBlob)
Bug #50250: mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
Bug #50266: "ceph fs snapshot mirror daemon status" should not use json keys as value
Bug #50279: qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
Bug #50281: qa: untar_snap_rm timeout

Fix #44171: pybind/cephfs: audit for unimplemented bindings for libcephfs
Fix #46885: pybind/mgr/mds_autoscaler: add test for MDS scaling with cephadm
Fix #47931: Directory quota optimization
Fix #48027: qa: add cephadm tests for CephFS in QA
Fix #48683: mds/MDSMap: print each flag value in MDSMap::dump
Fix #48802: mds: define CephFS errors that replace standard errno values
Fix #49341: qa: add async dirops testing
Fix #50045: qa: test standby_replay in workloads

Feature #6373: kcephfs: qa: test fscache
Feature #7320: qa: thrash directory fragmentation
Feature #17434: qa: background rsync task for FS workunits
Feature #17835: mds: enable killpoint tests for MDS-MDS subtree export
Feature #18154: qa: enable mds thrash exports tests
Feature #24725: mds: propagate rstats from the leaf dirs up to the specified directory
Feature #36481: separate out the 'p' mds auth cap into separate caps for quotas vs. choosing pool layout
Feature #36483: extend the mds auth cap "path=" syntax to enable something like "path=/foo/bar/*"
Feature #36663: mds: adjust cache memory limit automatically via target that tracks RSS
Feature #40986: cephfs qos: implement cephfs qos based on token bucket algorithm
Feature #41220: mgr/volumes: add test case for blacklisted clients
Feature #41566: mds: support rolling upgrades
Feature #42873: mgr/volumes: add GetCapacity API/command for `fs volume`
Feature #42874: mgr/volumes: add ValidateVolumeCapabilities API/command for `fs volume`
Feature #42875: mgr/volumes: user credentials for ListVolumes, GetCapacity and ValidateVolumeCapabilities
Feature #44190: qa: thrash file systems during workload tests
Feature #44279: client: provide asok commands to getattr an inode with desired caps
Feature #44455: cephfs: add recursive unlink RPC
Feature #46166: mds: store symlink target as xattr in data pool inode for disaster recovery
Feature #46680: pybind/mgr/mds_autoscaler: deploy larger or smaller (RAM) MDS in response to MDS load
Feature #46746: mgr/nfs: Add interface to accept yaml file for creating clusters
Feature #46865: client: add metric for number of pinned capabilities
Feature #46866: kceph: add metric for number of pinned capabilities
Feature #47172: mgr/nfs: Add support for RGW export
Feature #47264: "fs authorize" subcommand should work for multiple FSs too
Feature #47490: Integration of dashboard with volume/nfs module
Feature #47587: pybind/mgr/nfs: add Rook support
Feature #48394: mds: defer storing the OpenFileTable journal
Feature #48404: client: add a ceph.caps vxattr
Feature #48509: mds: dmClock based subvolume QoS scheduler
Feature #48577: pybind/mgr/volumes: support snapshots on subvolumegroups
Feature #48619: client: track (and forward to MDS) average read/write/metadata latency
Feature #48682: MDSMonitor: add command to print fs flags
Feature #48704: mds: recall caps proportional to the number issued
Feature #48791: mds: support file block size
Feature #48943: cephfs-mirror: display cephfs mirror instances in `ceph status` command
Feature #48944: pybind/mirroring: add subvolume/subvolumegroup interfaces for snapshot mirroring
Feature #48953: cephfs-mirror: support snapshot mirror of subdirectories and/or ancestors of a mirrored directory
Feature #48991: client: allow looking up snapped inodes by inode number+snapid tuple
Feature #49040: cephfs-mirror: test mirror daemon with valgrind
Feature #49340: libcephfssqlite: library for sqlite interface to CephFS
Feature #49493: cephfs-shell: Add 'ln' command and modify other commands to support links
Feature #49619: cephfs-mirror: add mirror peers via bootstrapping
Feature #49623: Windows CephFS support - ceph-dokan
Feature #49811: mds: collect I/O sizes from client for cephfs-top
Feature #49942: cephfs-mirror: enable running in HA
Feature #50150: qa: begin grepping kernel logs for kclient warnings/failures to fail a test
Feature #50235: allow cephfs-shell to mount named filesystems

Cleanup #46802: mds: do not use asserts for RADOS failures
Cleanup #50080: mgr/nfs: move nfs code out of volumes plugin
Cleanup #50149: client: always register callbacks before mount()

Documentation #43034: doc: document large omap warning for directory fragmentation
Documentation #45573: doc: client: client_reconnect_stale=1
Documentation #47449: doc: complete ec pool configuration section with an example
Documentation #48017: snap-schedule doc
Documentation #48914: mgr/nfs: Update about user config
Documentation #49372: doc: broken links multimds and kcephfs
Documentation #49763: doc: Document mds cap acquisition readdir throttle
Documentation #49921: mgr/nfs: Update about cephadm single nfs-ganesha daemon per host limitation
Documentation #50008: mgr/nfs: Add troubleshooting section
Documentation #50161: mgr/nfs: validation error on creating custom export
Documentation #50229: cephfs-mirror: update docs with `fs snapshot mirror daemon status` interface