Activity
From 09/03/2019 to 10/02/2019
10/02/2019
- 01:16 PM Backport #42162 (Rejected): mimic: qa: add testing for lazyio
- 01:16 PM Backport #42161 (Resolved): nautilus: qa: add testing for lazyio
- https://github.com/ceph/ceph/pull/30769
- 01:14 PM Backport #42160 (Resolved): luminous: osdc: objecter ops output does not have useful time informa...
- https://github.com/ceph/ceph/pull/33294
- 01:14 PM Backport #42159 (Resolved): mimic: osdc: objecter ops output does not have useful time information
- https://github.com/ceph/ceph/pull/31384
- 01:14 PM Backport #42158 (Resolved): nautilus: osdc: objecter ops output does not have useful time informa...
- https://github.com/ceph/ceph/pull/31081
- 01:14 PM Backport #42157 (Rejected): nautilus: cephfs-shell: rmdir doesn't complain when directory is not ...
- https://github.com/ceph/ceph/pull/31080
- 01:12 PM Backport #42156 (Resolved): mimic: mds: infinite loop in Locker::file_update_finish()
- https://github.com/ceph/ceph/pull/31284
- 01:12 PM Backport #42155 (Resolved): nautilus: mds: infinite loop in Locker::file_update_finish()
- https://github.com/ceph/ceph/pull/31079
- 01:10 PM Backport #42149 (Resolved): nautilus: mgr/volumes: missing protection for `fs volume rm` command
- https://github.com/ceph/ceph/pull/30768
- 01:10 PM Backport #42148 (Resolved): mimic: mds: mds returns -5 error when the deleted file does not exist
- https://github.com/ceph/ceph/pull/31381
- 01:10 PM Backport #42147 (Resolved): nautilus: mds: mds returns -5 error when the deleted file does not exist
- https://github.com/ceph/ceph/pull/30767
- 01:10 PM Backport #42146 (Resolved): mimic: client: return error when someone passes bad whence value to l...
- https://github.com/ceph/ceph/pull/31380
- 01:10 PM Backport #42145 (Resolved): nautilus: client: return error when someone passes bad whence value t...
- https://github.com/ceph/ceph/pull/30766
- 01:10 PM Backport #42143 (Resolved): mimic: mds:split the dir if the op makes it oversized, because some o...
- https://github.com/ceph/ceph/pull/31379
- 01:10 PM Backport #42142 (Resolved): nautilus: mds:split the dir if the op makes it oversized, because som...
- https://github.com/ceph/ceph/pull/31302
- 01:08 PM Backport #42130 (Resolved): mimic: doc/ceph-fuse: -k missing in man page
- https://github.com/ceph/ceph/pull/30936
- 01:08 PM Backport #42129 (Resolved): nautilus: doc/ceph-fuse: -k missing in man page
- https://github.com/ceph/ceph/pull/30765
- 01:07 PM Backport #42123 (Resolved): luminous: client: no method to handle SEEK_HOLE and SEEK_DATA in lseek
- https://github.com/ceph/ceph/pull/33293
- 01:07 PM Backport #42122 (Resolved): mimic: client: no method to handle SEEK_HOLE and SEEK_DATA in lseek
- https://github.com/ceph/ceph/pull/30918
- 01:07 PM Backport #42121 (Resolved): nautilus: client: no method to handle SEEK_HOLE and SEEK_DATA in lseek
- https://github.com/ceph/ceph/pull/30764
- 12:42 PM Documentation #40957 (Resolved): doc: add section to manpage for recover_session= option
- 11:10 AM Documentation #41952 (Resolved): doc: cleanup CephFS landing page
- 11:03 AM Backport #41508: nautilus: add information about active scrubs to "ceph -s" (and elsewhere)
- Venky, please do this backport.
- 10:31 AM Bug #41892 (Resolved): qa: convert kcephfs qa tests to use mount.ceph auto-discovery features
- Nautilus backport is being handled via #16656; the nautilus backport PR is https://github.com/ceph/ceph/pull/30521
- 07:24 AM Bug #42117 (Need More Info): MDS: daemon and cephfs-data-scan dump core on (probably) damaged oma...
- This was observed with ceph-12.2.10, but afaict the code path hasn't changed.
The root cause is not definitive, bu...
- 04:24 AM Bug #42107 (Pending Backport): client: no method to handle SEEK_HOLE and SEEK_DATA in lseek
10/01/2019
- 02:05 PM Bug #42107 (Resolved): client: no method to handle SEEK_HOLE and SEEK_DATA in lseek
- There is no method to handle SEEK_HOLE and SEEK_DATA in lseek in ceph-fuse
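For context, a minimal sketch of what SEEK_HOLE/SEEK_DATA are expected to do (plain Python os calls against a hypothetical file; illustration only, not ceph-fuse code):
    import os
    # Probe for the first data region and the first hole at or after offset 0.
    # Where the filesystem does not support it, os.lseek raises OSError; the
    # report above is that ceph-fuse simply had no handling for these whence values.
    fd = os.open("/mnt/cephfs/somefile", os.O_RDONLY)   # hypothetical path
    try:
        data_off = os.lseek(fd, 0, os.SEEK_DATA)  # start of next data region
        hole_off = os.lseek(fd, 0, os.SEEK_HOLE)  # start of next hole
        print("data at", data_off, "hole at", hole_off)
    except OSError as e:
        print("SEEK_HOLE/SEEK_DATA not handled:", e)
    finally:
        os.close(fd)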
- 02:03 PM Tasks #39998: client: audit ACL
- https://tracker.ceph.com/issues/17594#note-37
- 10:02 AM Bug #42101 (Resolved): test_cephfs_shell: test_help doesn't test help
- The test runs the help command without any arguments, which prints a list of commands instead of help text. Pass "all" to help ins...
- 08:22 AM Bug #40864 (Pending Backport): cephfs-shell: rmdir doesn't complain when directory is not empty
- 08:18 AM Cleanup #42043 (Resolved): mds: reorg MDBalancer header
- 08:16 AM Bug #41871 (Pending Backport): client: return error when someone passes bad whence value to llseek
- 07:45 AM Bug #42100 (Resolved): cephfs-shell: always returns zero, even when a command has failed
- 07:40 AM Bug #42096 (Fix Under Review): mgr/volumes: creating subvolume and subvolume group snapshot fails
- 07:36 AM Bug #41841 (Pending Backport): mgr/volumes: missing protection for `fs volume rm` command
- 04:01 AM Documentation #40957 (Fix Under Review): doc: add section to manpage for recover_session= option
- 03:54 AM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- Jeff Layton wrote:
> I've started working on patches to add this, but I see a potential problem. The idea is to dele...
- 02:36 AM Backport #40131 (Resolved): nautilus: Document behaviour of fsync-after-close
09/30/2019
- 01:53 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- I've started working on patches to add this, but I see a potential problem. The idea is to delegate a range of inodes...
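A toy sketch of the idea described above (Python, with made-up names; the real implementation lives in the MDS and client C++ code): the MDS would delegate ranges of inode numbers to a client, which can then assign numbers for newly created files locally instead of round-tripping to the MDS.
    # Illustrative only: delegated ranges kept as [first, remaining] pairs.
    class DelegatedInos:
        def __init__(self):
            self.ranges = []
        def add_range(self, first, count):
            self.ranges.append([first, count])
        def alloc(self):
            """Return the next delegated inode number, or None if exhausted."""
            for r in self.ranges:
                if r[1] > 0:
                    ino, r[0], r[1] = r[0], r[0] + 1, r[1] - 1
                    return ino
            return None   # the client would have to ask the MDS for more
    d = DelegatedInos()
    d.add_range(0x10000000, 128)   # e.g. granted by the MDS in a reply
    print(hex(d.alloc()))          # 0x10000000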
- 01:36 PM Bug #42096 (Resolved): mgr/volumes: creating subvolume and subvolume group snapshot fails
- ...
- 12:49 PM Tasks #42085: qa: create tests for new recover_session=clean option
- Note that this can only be run against the testing kernel.
- 12:38 PM Feature #41302: mds: add ephemeral random and distributed export pins
- Sidharth Anupkrishnan wrote:
> > If a directory inode is ephemerally pinned, then we just note that in the inode as...
- 10:31 AM Feature #41302: mds: add ephemeral random and distributed export pins
> If a directory inode is ephemerally pinned, then we just note that in the inode as a boolean flag. It remains pin...
- 03:53 AM Feature #41302: mds: add ephemeral random and distributed export pins
- Sidharth Anupkrishnan wrote:
> Patrick Donnelly wrote:
> > Sidharth Anupkrishnan wrote:
> > > At a first look, I...
- 11:27 AM Bug #41871 (In Progress): client: return error when someone passes bad whence value to llseek
- 11:26 AM Documentation #40957 (In Progress): doc: add section to manpage for recover_session= option
- 04:07 AM Documentation #41783 (Resolved): doc: document MDSs journaling mechanism and metadata pool
- 04:04 AM Feature #41910 (Resolved): qa: allow vstart_runner to perform tests on kclient mounts
09/27/2019
- 07:19 PM Bug #42088 (Resolved): 'ceph -s' does not show standbys if there are no filesystems
- - start up mon, mgr, osd
- start up mds (or two)
- but do not create a file system...
ceph -s...
- 04:04 PM Tasks #42085 (Resolved): qa: create tests for new recover_session=clean option
- Add new tests in ceph/qa to test the new recover_session=clean mount option in kcephfs, and set them up to run in teu...
- 06:51 AM Documentation #41872 (Resolved): doc: update CephFS Quick Start guide
- 06:23 AM Documentation #42044 (Pending Backport): doc/ceph-fuse: -k missing in man page
- 06:20 AM Feature #41302: mds: add ephemeral random and distributed export pins
- Patrick Donnelly wrote:
> Sidharth Anupkrishnan wrote:
> > At a first look, I think there is no need to make ephem...
- 04:48 AM Feature #41302: mds: add ephemeral random and distributed export pins
- Sidharth Anupkrishnan wrote:
> Sidharth Anupkrishnan wrote:
> > At a first look, I think there is no need to make...
- 04:45 AM Feature #41302: mds: add ephemeral random and distributed export pins
- Sidharth Anupkrishnan wrote:
> At a first look, I think there is no need to make ephemeral_export_random_pin an xat...
- 05:37 AM Bug #42057 (In Progress): cephfs-shell: not compatible with cmd2 versions after 0.9.13
09/26/2019
- 06:07 PM Feature #41302: mds: add ephemeral random and distributed export pins
- Sidharth Anupkrishnan wrote:
> At a first look, I think there is no need to make ephemeral_export_random_pin an xat...
- 04:01 PM Feature #41302: mds: add ephemeral random and distributed export pins
- At a first look, I think there is no need to make ephemeral_export_random_pin an xattr like export_pin because for th...
- 01:27 PM Bug #40283 (Pending Backport): qa: add testing for lazyio
- 01:25 PM Bug #40821 (Pending Backport): osdc: objecter ops output does not have useful time information
- 01:23 PM Bug #41434 (Pending Backport): mds: infinite loop in Locker::file_update_finish()
- 01:22 PM Bug #41880 (Pending Backport): mds:split the dir if the op makes it oversized, because some ops m...
- 01:20 PM Cleanup #41678 (Resolved): mds: reorg LogSegment header
- 01:19 PM Bug #41868 (Pending Backport): mds: mds returns -5 error when the deleted file does not exist
- 01:17 PM Bug #41892 (Pending Backport): qa: convert kcephfs qa tests to use mount.ceph auto-discovery feat...
- 10:41 AM Bug #42062 (Resolved): qa: AttributeError: 'MonitorThrasher' object has no attribute 'fs'
- ...
- 10:34 AM Bug #42061 (Won't Fix): volume_client: AssertionError: 237 != 8
- ...
- 10:05 AM Bug #39987: mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
- Nathan Cutler wrote:
> > I plan to go a step further and not permit a tracker ticket to go backwards like this, i.e....
- 08:06 AM Bug #42057 (Resolved): cephfs-shell: not compatible with cmd2 versions after 0.9.13
- "-b" options fail since load command from cmd2 changed t run_script.
09/25/2019
- 09:36 AM Bug #39987: mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
- Oh, and one more thing: issues in Resolved status can be reverted to Need Review (or In Progress, or even New) as well.
- 09:29 AM Bug #39987: mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
- > I plan to go a step further and not permit a tracker ticket to go backwards like this, i.e. from PB back to NR. Ins...
- 06:16 AM Documentation #42044 (Resolved): doc/ceph-fuse: -k missing in man page
- 05:47 AM Cleanup #42043 (Fix Under Review): mds: reorg MDBalancer header
- 05:40 AM Cleanup #42043 (Resolved): mds: reorg MDBalancer header
09/24/2019
- 07:54 PM Backport #42040 (Resolved): nautilus: client: _readdir_cache_cb() may use the readdir_cache alrea...
- https://github.com/ceph/ceph/pull/30763
- 07:54 PM Backport #42039 (Resolved): luminous: client: _readdir_cache_cb() may use the readdir_cache alrea...
- https://github.com/ceph/ceph/pull/30934
- 07:54 PM Backport #42038 (Resolved): mimic: client: _readdir_cache_cb() may use the readdir_cache already ...
- https://github.com/ceph/ceph/pull/30933
- 07:52 PM Backport #42035 (Resolved): nautilus: client: lseek function does not return the correct value.
- https://github.com/ceph/ceph/pull/30762
- 07:52 PM Backport #42034 (Resolved): mimic: client: lseek function does not return the correct value.
- https://github.com/ceph/ceph/pull/30932
- 06:05 PM Bug #42020 (Fix Under Review): qa: fuse_mount should check if mounted in umount_wait
- 08:20 AM Bug #42020 (Resolved): qa: fuse_mount should check if mounted in umount_wait
- ...
- 11:32 AM Feature #41311 (Resolved): deprecate CephFS inline_data support
- 11:30 AM Bug #41148 (Pending Backport): client: _readdir_cache_cb() may use the readdir_cache already clear
- 11:16 AM Cleanup #41665 (Resolved): mds: reorg Locker header
- 11:11 AM Bug #41837 (Pending Backport): client: lseek function does not return the correct value.
- 08:34 AM Bug #42022 (Need More Info): mgr/volumes: "Timed out after 30s waiting for ./volumes/_deleting to...
- ...
- 07:03 AM Bug #39987: mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
- Nathan Cutler wrote:
> When there are multiple PRs fixing a single tracker, it's a good idea to "unset" (depopulate)...
- 03:04 AM Documentation #41472 (Resolved): doc: add multiple active MDSs and Subtree Management in CephFS
- 03:00 AM Documentation #41872 (Fix Under Review): doc: update CephFS Quick Start guide
- 01:47 AM Documentation #42016 (Resolved): doc: layout rest of intro page
- Include links to different sections of CephFS Documentation: "Concepts" (architecture), "Getting Started", "Mounting"...
09/23/2019
- 05:30 PM Backport #41890 (In Progress): nautilus: mount.ceph: enable consumption of ceph keyring files
- 04:19 AM Backport #41890: nautilus: mount.ceph: enable consumption of ceph keyring files
- Should pick up backport for https://tracker.ceph.com/issues/41892 as well once it's merged.
- 05:16 PM Bug #39987: mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
- When there are multiple PRs fixing a single tracker, it's a good idea to "unset" (depopulate) the Pull request ID fie...
- 01:59 PM Documentation #41999 (Resolved): CephFS Documentation Sprint 2
- 01:24 PM Documentation #41952 (In Progress): doc: cleanup CephFS landing page
- 07:35 AM Documentation #41952 (Resolved): doc: cleanup CephFS landing page
- Remove links on the CephFS landing page, or move them into the Table of Contents on the left
- 12:53 PM Bug #41337 (Resolved): mgr/volumes: handle incorrect pool_layout setting during `fs subvolume/sub...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:53 PM Bug #41371 (Resolved): mgr/volumes: subvolume and subvolume group path exists even when creation ...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:53 PM Bug #41617 (Resolved): mgr/volumes: prevent negative subvolume size
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:52 PM Bug #41752 (Resolved): mgr/volumes: drop unused size in fs volume create
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:52 PM Bug #41903 (Resolved): mgr/volumes: issuing `fs subvolume getpath` command hangs and ceph-mgr dae...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:23 PM Backport #40445: nautilus: mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
- Backport of follow-on fix: https://github.com/ceph/ceph/pull/30508
- 12:17 PM Backport #41933 (Resolved): nautilus: mgr/volumes: issuing `fs subvolume getpath` command hangs a...
- 12:16 PM Backport #41884 (Resolved): nautilus: mgr/volumes: prevent negative subvolume size
- 12:16 PM Backport #41850 (Resolved): nautilus: mgr/volumes: drop unused size in fs volume create
- 12:16 PM Backport #41444 (Resolved): nautilus: mgr/volumes: handle incorrect pool_layout setting during `f...
- 12:16 PM Backport #41437 (Resolved): nautilus: mgr/volumes: subvolume and subvolume group path exists even...
- 06:59 AM Feature #41842 (Fix Under Review): mgr/volumes: list FS subvolumes, subvolume groups, and their s...
- 06:57 AM Documentation #40689 (Resolved): mgr/volumes: document mgr fs volumes CLI
- 06:50 AM Cleanup #41951 (Resolved): mds: obsolete mds_cache_size
- mds_cache_memory_limit is preferred. Remove last bits of support for mds_cache_size.
- 05:06 AM Backport #41865: nautilus: mds: ask idle client to trim more caps
- Should also backport https://tracker.ceph.com/issues/41899
- 04:22 AM Feature #41910 (Fix Under Review): qa: allow vstart_runner to perform tests on kclient mounts
- 04:17 AM Bug #41892 (Fix Under Review): qa: convert kcephfs qa tests to use mount.ceph auto-discovery feat...
- 02:11 AM Bug #41935 (Duplicate): ceph mdss keep on crashing
- 02:04 AM Bug #41948 (Fix Under Review): nautilus: mds: incomplete backport of #40444 (MDCache::cow_inode d...
09/22/2019
- 07:34 AM Bug #41935: ceph mdss keep on crashing
- https://tracker.ceph.com/issues/41948
- 06:52 AM Bug #41948 (Resolved): nautilus: mds: incomplete backport of #40444 (MDCache::cow_inode does not ...
- backport #40445 incomplete
- 05:06 AM Documentation #41893 (Resolved): doc: mds state diagram color description mistake
09/20/2019
- 12:43 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- At a high level, here's what I think we need to do:
Add a new delegated_inos field to session_info_t in the MDS co...
- 09:59 AM Backport #41933 (In Progress): nautilus: mgr/volumes: issuing `fs subvolume getpath` command hang...
- https://github.com/ceph/ceph/pull/29926
09/19/2019
- 03:42 PM Bug #41935: ceph mdss keep on crashing
- Looks like the backport of https://tracker.ceph.com/issues/39987 to nautilus was incomplete, it's missing https://git...
- 02:23 PM Bug #41935: ceph mdss keep on crashing
- ...
- 02:21 PM Bug #41935 (Duplicate): ceph mdss keep on crashing
- I updated ceph to 14.2.3 yesterday. Everything was running fine, but today the MDSes started crashing. I tried restarting all...
- 01:30 PM Backport #41933 (Resolved): nautilus: mgr/volumes: issuing `fs subvolume getpath` command hangs a...
- https://github.com/ceph/ceph/pull/29926
- 01:29 PM Bug #41903 (Pending Backport): mgr/volumes: issuing `fs subvolume getpath` command hangs and ceph...
- 10:10 AM Bug #40283 (Fix Under Review): qa: add testing for lazyio
09/18/2019
- 02:12 PM Feature #41910 (Resolved): qa: allow vstart_runner to perform tests on kclient mounts
- Add a new --kclient switch to vstart_runner that tells it to use kernel mounts instead of FUSE.
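A sketch of how such a switch might be wired up (stand-in names; not the actual vstart_runner code):
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument("--kclient", action="store_true",
                        help="use kernel mounts instead of FUSE")
    args = parser.parse_args()
    # Hypothetical mount helpers standing in for the real FUSE/kernel classes.
    mount_cls = "LocalKernelMount" if args.kclient else "LocalFuseMount"
    print("would mount via", mount_cls)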
- 10:57 AM Backport #41889 (In Progress): nautilus: mgr/volumes: retry spawning purge threads on failure
- 10:57 AM Bug #41892: qa: convert kcephfs qa tests to use mount.ceph auto-discovery features
- Instead, I think we'll just convert the existing kernel_mount.py code to use the new functionality so that this ...
- 08:17 AM Bug #41903 (Fix Under Review): mgr/volumes: issuing `fs subvolume getpath` command hangs and ceph...
- 05:55 AM Bug #41903 (Resolved): mgr/volumes: issuing `fs subvolume getpath` command hangs and ceph-mgr dae...
- $ ceph fs subvolume create vol00 subvol00
$ ceph fs subvolume getpath vol00 subvol00
The command just hangs and c...
- 03:17 AM Bug #41799: client: FAILED assert(cap == in->auth_cap)
- (gdb) p m->peer
$1 = {cap_id = {v = 2782052343}, seq = {v = 4}, mseq = {v = 0}, mds = {v = 1}, flags = 2 '\002'}
- 01:06 AM Backport #41856 (In Progress): mimic: client: removing dir reports "not empty" issue due to clien...
- https://github.com/ceph/ceph/pull/30443
- 01:05 AM Backport #41855 (In Progress): nautilus: client: removing dir reports "not empty" issue due to cl...
- https://github.com/ceph/ceph/pull/30442
09/17/2019
- 06:07 PM Backport #41899 (Resolved): nautilus: mds: cache drop command does not drive cap recall
- https://github.com/ceph/ceph/pull/30761
- 02:28 PM Backport #41884 (In Progress): nautilus: mgr/volumes: prevent negative subvolume size
- https://github.com/ceph/ceph/pull/29926
- 08:33 AM Backport #41884 (Resolved): nautilus: mgr/volumes: prevent negative subvolume size
- 02:00 PM Backport #41850 (In Progress): nautilus: mgr/volumes: drop unused size in fs volume create
- https://github.com/ceph/ceph/pull/29926
- 01:29 PM Bug #41835 (Pending Backport): mds: cache drop command does not drive cap recall
- 11:35 AM Documentation #41893 (Resolved): doc: mds state diagram color description mistake
- The document mds-states.rst has a mistake in its description of the state diagram colors.
- 10:36 AM Bug #41892 (Resolved): qa: convert kcephfs qa tests to use mount.ceph auto-discovery features
- Recently, a patchset was merged that added the ability for mount.ceph to discover mon addrs and secrets from a local ...
- 09:26 AM Feature #41842 (In Progress): mgr/volumes: list FS subvolumes, subvolume groups, and their snapshots
- 08:40 AM Backport #41890 (Resolved): nautilus: mount.ceph: enable consumption of ceph keyring files
- https://github.com/ceph/ceph/pull/30521
- 08:35 AM Backport #41889 (Resolved): nautilus: mgr/volumes: retry spawning purge threads on failure
- https://github.com/ceph/ceph/pull/30455
- 08:34 AM Backport #41888 (Resolved): nautilus: client: lazyio synchronize does not get file size
- https://github.com/ceph/ceph/pull/30769
- 08:34 AM Backport #41887 (Rejected): mimic: client: lazyio synchronize does not get file size
- https://github.com/ceph/ceph/pull/30931
- 08:34 AM Backport #41886 (Resolved): nautilus: mds: client evicted twice in one tick
- https://github.com/ceph/ceph/pull/30951
- 08:34 AM Backport #41885 (Resolved): mimic: mds: client evicted twice in one tick
- https://github.com/ceph/ceph/pull/30950
- 08:06 AM Bug #41836: qa: "cluster [ERR] Error recovering journal 0x200: (2) No such file or directory" i...
- I think we can just whitelist the error 'Error recovering journal'
- 07:04 AM Bug #41880 (Resolved): mds:split the dir if the op makes it oversized, because some ops maybe in ...
- 06:48 AM Bug #41841 (Fix Under Review): mgr/volumes: missing protection for `fs volume rm` command
- 04:00 AM Backport #41851 (In Progress): nautilus: mds: MDSIOContextBase instance leak
- https://github.com/ceph/ceph/pull/30418
- 03:59 AM Backport #41852 (In Progress): mimic: mds: MDSIOContextBase instance leak
- https://github.com/ceph/ceph/pull/30417
- 03:46 AM Bug #41799: client: FAILED assert(cap == in->auth_cap)
- I'd like to know why the cap import message's seq is 1 and mseq is 0. Please use gdb to print the cap import message's peer ...
- 12:46 AM Bug #41868: mds: mds returns -5 error when the deleted file does not exist
- Jeff Layton wrote:
> I agree that EIO makes no sense here, but since you're looking up files by inode number, -ESTAL...
09/16/2019
- 11:50 PM Feature #16656 (Pending Backport): mount.ceph: enable consumption of ceph keyring files
- 08:09 PM Bug #41799 (Fix Under Review): client: FAILED assert(cap == in->auth_cap)
- 08:04 PM Bug #41218 (Pending Backport): mgr/volumes: retry spawning purge threads on failure
- 08:04 PM Bug #41219 (Resolved): mgr/volumes: send purge thread (and other) health warnings to `ceph status`
- Backport tracked by #41218.
- 08:00 PM Bug #41310 (Pending Backport): client: lazyio synchronize does not get file size
- 07:59 PM Cleanup #41178 (Resolved): mds: reorg DamageTable header
- 07:59 PM Cleanup #41178 (Pending Backport): mds: reorg DamageTable header
- 07:58 PM Cleanup #41430 (Resolved): mds: reorg JournalPointer header
- 07:55 PM Bug #41585 (Pending Backport): mds: client evicted twice in one tick
- 07:06 PM Bug #41728 (Need More Info): mds: hang during fragmentdir
- 02:28 PM Bug #41728: mds: hang during fragmentdir
- Thanks!
- 01:56 PM Bug #41728: mds: hang during fragmentdir
- Nathan Fish wrote:
> When doing a parallel cp, the active MDS on the CephFS hung on a fragmentdir op.
> It might be...
- 06:44 PM Bug #41617 (Pending Backport): mgr/volumes: prevent negative subvolume size
- 06:37 PM Documentation #41451 (Resolved): Document distributed metadata cache
- 05:36 PM Bug #41868: mds: mds returns -5 error when the deleted file does not exist
- I agree that EIO makes no sense here, but since you're looking up files by inode number, -ESTALE would probably make ...
- 01:46 PM Bug #41868 (Fix Under Review): mds: mds returns -5 error when the deleted file does not exist
- 12:04 PM Bug #41868 (Resolved): mds: mds returns -5 error when the deleted file does not exist
- There are two nfs-ganesha ends:
1. The A side uses readdir to get all the file information in a directory,
and uses ...
- 03:01 PM Documentation #41872 (Resolved): doc: update CephFS Quick Start guide
- 02:56 PM Bug #41871: client: return error when someone passes bad whence value to llseek
- s/ceph_assert/ceph_abort/
- 01:52 PM Bug #41871 (Resolved): client: return error when someone passes bad whence value to llseek
- There are a number of ceph_assert calls in src/client/Client.cc that are probably not necessary. There are calls in l...
- 01:48 PM Bug #41837 (Fix Under Review): client: lseek function does not return the correct value.
- 02:41 AM Bug #41837 (Resolved): client: lseek function does not return the correct value.
- If pos is initialized to -1 in the lseek function, then when offset is 0, EINVAL may be returned.
- 11:36 AM Bug #41841 (In Progress): mgr/volumes: missing protection for `fs volume rm` command
- 06:10 AM Bug #41841 (Resolved): mgr/volumes: missing protection for `fs volume rm` command
- Currently one can remove a filesystem, its data and metadata pools, and MDSes with a `fs volume rm` ceph mgr command. May...
- 10:52 AM Feature #40959 (Fix Under Review): mgr/volumes: allow setting uid, gid of subvolume and subvolume...
- 07:21 AM Backport #41865 (Resolved): nautilus: mds: ask idle client to trim more caps
- https://github.com/ceph/ceph/pull/30761
- 07:18 AM Backport #41861 (Rejected): nautilus: cephfs-shell: du must ignore non-directory files
- 07:17 AM Backport #41857 (Resolved): luminous: client: removing dir reports "not empty" issue due to clien...
- https://github.com/ceph/ceph/pull/33292
- 07:17 AM Backport #41856 (Resolved): mimic: client: removing dir reports "not empty" issue due to client s...
- https://github.com/ceph/ceph/pull/30443
- 07:17 AM Backport #41855 (Resolved): nautilus: client: removing dir reports "not empty" issue due to clien...
- https://github.com/ceph/ceph/pull/30442
- 07:15 AM Backport #41854 (Rejected): mimic: mds: reject sessionless messages
- https://github.com/ceph/ceph/pull/30908
- 07:15 AM Backport #41853 (Resolved): nautilus: mds: reject sessionless messages
- https://github.com/ceph/ceph/pull/30843
- 07:15 AM Backport #41852 (Resolved): mimic: mds: MDSIOContextBase instance leak
- https://github.com/ceph/ceph/pull/30417
- 07:15 AM Backport #41851 (Resolved): nautilus: mds: MDSIOContextBase instance leak
- https://github.com/ceph/ceph/pull/30418
- 07:15 AM Backport #41850 (Resolved): nautilus: mgr/volumes: drop unused size in fs volume create
- 06:23 AM Feature #41842 (Resolved): mgr/volumes: list FS subvolumes, subvolume groups, and their snapshots
- Add commands to list FS subvolumes, subvolume groups, and their snapshots
- 01:08 AM Bug #41836 (Resolved): qa: "cluster [ERR] Error recovering journal 0x200: (2) No such file or d...
- From: /ceph/teuthology-archive/pdonnell-2019-09-15_06:11:06-fs-wip-pdonnell-testing-20190915.030958-distro-basic-smit...
- 12:55 AM Bug #41835: mds: cache drop command does not drive cap recall
- Backport of #22446 is only for nautilus.
- 12:54 AM Bug #41835 (Fix Under Review): mds: cache drop command does not drive cap recall
- 12:50 AM Bug #41835 (Resolved): mds: cache drop command does not drive cap recall
- ...
09/13/2019
- 07:40 PM Bug #39641 (Resolved): cephfs-shell: 'du' command produces incorrect results
- Will be backported via #40371.
- 07:40 PM Bug #40371 (Pending Backport): cephfs-shell: du must ignore non-directory files
- 07:08 PM Documentation #40689 (Fix Under Review): mgr/volumes: document mgr fs volumes CLI
- 07:02 PM Bug #41752 (Pending Backport): mgr/volumes: drop unused size in fs volume create
- 06:25 PM Documentation #41826 (Resolved): doc: update CephFS summary and introduction
- 06:24 PM Documentation #41451 (Fix Under Review): Document distributed metadata cache
- 06:22 PM Documentation #41470 (Fix Under Review): Document requirements for using cephfs
- 06:19 PM Documentation #41738 (In Progress): Add documentation for that 'client direct access to data pool'
- 06:17 PM Documentation #41825 (Resolved): CephFS Documentation Sprint 1
- 06:12 PM Feature #41824 (New): mds: aggregate subtree authorities for display in `fs top`
- Each MDS is only aware of subtrees that border its own authoritative subtrees. This also affects rank 0.
Have each...
- 03:37 PM Feature #36608 (Resolved): mds: answering all pending getattr/lookups targeting the same inode in...
- 03:35 PM Feature #22446 (Pending Backport): mds: ask idle client to trim more caps
- 03:34 PM Bug #40746 (Pending Backport): client: removing dir reports "not empty" issue due to client side ...
- 03:32 PM Bug #41329 (Pending Backport): mds: reject sessionless messages
- 03:30 PM Bug #41346 (Pending Backport): mds: MDSIOContextBase instance leak
09/12/2019
- 11:24 PM Cleanup #41185 (Resolved): mds: reorg FSMapUser header
- 11:23 PM Cleanup #41428 (Resolved): mds: reorg InoTable header
- 11:22 PM Cleanup #41607 (Resolved): mds: reorg Anchor header
- 11:20 PM Bug #41654 (Resolved): mds: reorg LocalLock header
- 11:19 PM Cleanup #41679 (Resolved): mds: reorg LogEvent header
- 08:05 PM Bug #41800 (Resolved): qa: logrotate should tolerate connection resets
- During kclient runs, we reboot nodes. The logrotate exception causes the test to fail:...
- 04:49 PM Bug #41799: client: FAILED assert(cap == in->auth_cap)
- The issue affects all releases, including master.
- 04:49 PM Bug #41799 (Resolved): client: FAILED assert(cap == in->auth_cap)
- The log below explains the issue clearly: the auth_cap was set to NULL in a previous remove_caps, and when add_update_cap...
- 06:17 AM Documentation #41472 (In Progress): doc: add multiple active MDSs and Subtree Management in CephFS
- 06:05 AM Documentation #41783 (Resolved): doc: document MDSs journaling mechanism and metadata pool
- 04:51 AM Bug #41759: mgr/volumes: test_async_subvolume_rm fails since purge threads did not cleanup trash ...
- Patrick Donnelly wrote:
> [...]
>
> From the teuthology log.
yeh -- that masks the logging of the actual trace...
09/11/2019
- 10:03 PM Fix #41782 (Resolved): mds: allow stray directories to fragment and switch from 10 stray director...
- Stray directories can become too full, which can result in unexpected ENOSPC errors. See, for example, #41778.
Evalu...
- 05:54 PM Bug #41759: mgr/volumes: test_async_subvolume_rm fails since purge threads did not cleanup trash ...
- ...
- 01:03 PM Bug #41759: mgr/volumes: test_async_subvolume_rm fails since purge threads did not cleanup trash ...
- As seen from the MDS log, there are no filesystem ops after the rename ack to the client. This hints that the purge t...
- 11:01 AM Bug #41759 (Can't reproduce): mgr/volumes: test_async_subvolume_rm fails since purge threads did ...
- Patrick saw this recently here: http://qa-proxy.ceph.com/teuthology/pdonnell-2019-09-11_00:33:51-fs-wip-pdonnell-test...
- 03:07 PM Bug #41778 (New): 'No space left on device' due to snapshots
- When using snapshots, we are getting 'no space left on device' when num_strays is close to a million.
We only have l...
- 01:05 PM Feature #41763 (New): Support decommissioning of additional data pools
- Adding additional data pools via @ceph fs add_data_pool@ is very easy, but once a pool is in use, it is very hard to ...
- 09:12 AM Feature #40959 (In Progress): mgr/volumes: allow setting uid, gid of subvolume and subvolume grou...
- 12:58 AM Bug #41752 (Resolved): mgr/volumes: drop unused size in fs volume create
09/10/2019
- 09:09 PM Bug #39511 (Rejected): Cannot remove CephFS snapshot with leading underscore (_)
- 07:41 PM Bug #41140: mds: trim cache more regularly
- Jan Fajerski wrote:
> As this won't be backported to luminous and many of the mentioned mds options don't exist in l...
- 07:33 AM Documentation #41738 (Resolved): Add documentation for that 'client direct access to data pool'
- 05:59 AM Documentation #41725 (In Progress): Document on-disk format of inodes
- 03:34 AM Bug #41651: dbench: command not found
- Patrick Donnelly wrote:
> Can you link to the failure on pulpito?
Sorry, I can't link to the failure on pulpito he...
09/09/2019
- 08:38 PM Bug #41398 (Resolved): qa: KeyError: 'cluster' in ceph.stop
- 06:29 PM Bug #41728 (Can't reproduce): mds: hang during fragmentdir
- When doing a parallel cp, the active MDS on the CephFS hung on a fragmentdir op.
It might be this bug: http://lists....
- 02:58 PM Bug #41651 (Fix Under Review): dbench: command not found
- Can you link to the failure on pulpito?
- 01:39 PM Bug #41585 (Fix Under Review): mds: client evicted twice in one tick
- 01:00 PM Documentation #41725 (New): Document on-disk format of inodes
- Document on-disk format of inodes
- 12:17 PM Feature #41182 (Fix Under Review): mgr/volumes: add `fs subvolume extend/shrink` commands
- 12:16 PM Bug #41694 (Fix Under Review): qa/tasks: Fix raises that doesn't re-raise in test_volumes.py
- 08:46 AM Documentation #41470: Document requirements for using cephfs
- 03:41 AM Bug #41426: mds: wrongly signals directory is empty when dentry is damaged?
- The readdir is unexpected and the log does not include it. The client should just issue a lookup.
See https://github.co...
09/07/2019
- 09:21 AM Backport #41489 (In Progress): luminous: client: client should return EIO when it's unsafe reqs h...
- 09:19 AM Backport #41487 (In Progress): mimic: client: client should return EIO when it's unsafe reqs have...
- 09:18 AM Backport #41476 (Rejected): mimic: cephfs-data-scan scan_links FAILED ceph_assert(p->second >= be...
- this is fixing an issue in InoTable::force_consume_to(), a function that was added by feature PR https://github.com/c...
- 09:13 AM Backport #41466 (In Progress): mimic: mount.ceph: doesn't accept "strictatime"
- 09:01 AM Backport #40899 (In Progress): mimic: mds: only evict an unresponsive client when another client ...
- 08:55 AM Backport #40896 (In Progress): mimic: ceph_volume_client: fs_name must be converted to string bef...
- 08:49 AM Backport #40886 (Need More Info): mimic: ceph_volume_client: to_bytes converts NoneType object str
- 08:49 AM Backport #40856 (Need More Info): mimic: ceph_volume_client: python program embedded in test_volu...
- 08:45 AM Backport #40853 (In Progress): mimic: test_volume_client: test_put_object_versioned is unreliable
- 08:42 AM Backport #40844 (In Progress): mimic: MDSMonitor: use stringstream instead of dout for mds repaired
- 08:41 AM Backport #40444 (In Progress): mimic: mds: MDCache::cow_inode does not cleanup unneeded client_sn...
- 08:33 AM Backport #41114 (Need More Info): mimic: client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
- non-trivial conflicts
- 07:55 AM Backport #41129 (In Progress): mimic: qa: power off still resulted in client sending session close
- 03:01 AM Bug #41694 (Resolved): qa/tasks: Fix raises that doesn't re-raise in test_volumes.py
- Possible:
> TypeError: exceptions must be old-style classes or derived from BaseException, not NoneType"
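For reference, a tiny reproduction of that TypeError under Python 2 semantics (illustrative only, not the test_volumes.py code): a bare raise with no active exception tries to re-raise None, which produces exactly this message.
    def check(ok):
        if not ok:
            # Bug pattern: a bare 'raise' outside any 'except' block has no
            # exception to re-raise, so Python 2 complains that NoneType is
            # not derived from BaseException.
            raise
        return ok
    def check_fixed(ok):
        if not ok:
            raise RuntimeError("check failed")   # raise an explicit exception
        return ok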
09/06/2019
- 10:31 PM Backport #40841 (In Progress): mimic: ceph-fuse: mount does not support the fallocate()
- 04:08 PM Documentation #41688 (Resolved): doc: client config reference improvements
- The client config docs at
https://docs.ceph.com/docs/master/cephfs/client-config-ref/
do not mention where these...
- 08:59 AM Cleanup #41679 (Fix Under Review): mds: reorg LogEvent header
- 07:45 AM Cleanup #41679 (Resolved): mds: reorg LogEvent header
- 07:49 AM Cleanup #41678 (Fix Under Review): mds: reorg LogSegment header
- 07:44 AM Cleanup #41678 (Resolved): mds: reorg LogSegment header
- 07:20 AM Bug #39511: Cannot remove CephFS snapshot with leading underscore (_)
- Sorry for my late reply. Yes this is correct. I improved my script and it works now.
Thanks.
You can close the i...
- 05:10 AM Bug #41147: mds: crash loop - Server.cc 6835: FAILED ceph_assert(in->first <= straydn->first)
- Happened again.
09/05/2019
- 06:43 AM Cleanup #41665 (Fix Under Review): mds: reorg Locker header
- 06:39 AM Cleanup #41665 (Resolved): mds: reorg Locker header
09/04/2019
- 01:56 PM Bug #41654 (Fix Under Review): mds: reorg LocalLock header
- 01:54 PM Bug #41654 (Resolved): mds: reorg LocalLock header
- 01:41 PM Bug #41651 (Closed): dbench: command not found
- run teuthology: qa/suites/fs/verify/tasks/cfuse_workunit_suites_dbench.yaml
teuthology log:...
- 07:28 AM Bug #41140: mds: trim cache more regularly
- As this won't be backported to luminous and many of the mentioned mds options don't exist in luminous, is there a way...
09/03/2019
- 02:51 PM Backport #40494 (In Progress): mimic: test_volume_client: declare only one default for python ver...
- 02:47 PM Backport #40442 (In Progress): mimic: libcephfs: returns ESTALE to nfs-ganesha's FSAL_CEPH when o...
- 01:41 PM Bug #41617 (Fix Under Review): mgr/volumes: prevent negative subvolume size
- 01:40 PM Bug #41617 (Resolved): mgr/volumes: prevent negative subvolume size
- $ ./bin/ceph fs subvolume create myfs mysubvol --size -10 --group_name mygroup --pool_layout mycephfs_data --mode 777...
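A minimal sketch of the kind of input validation that prevents this (illustrative only; not the actual mgr/volumes code):
    def parse_size(size):
        """Reject non-positive subvolume sizes before doing any work."""
        size = int(size)
        if size <= 0:
            raise ValueError("size must be a positive number of bytes, got %d" % size)
        return size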
- 06:09 AM Cleanup #41607 (Fix Under Review): mds: reorg Anchor header
- 06:06 AM Cleanup #41607 (Resolved): mds: reorg Anchor header