Activity
From 08/26/2019 to 09/24/2019
09/24/2019
- 07:54 PM Backport #42040 (Resolved): nautilus: client: _readdir_cache_cb() may use the readdir_cache alrea...
- https://github.com/ceph/ceph/pull/30763
- 07:54 PM Backport #42039 (Resolved): luminous: client: _readdir_cache_cb() may use the readdir_cache alrea...
- https://github.com/ceph/ceph/pull/30934
- 07:54 PM Backport #42038 (Resolved): mimic: client: _readdir_cache_cb() may use the readdir_cache already ...
- https://github.com/ceph/ceph/pull/30933
- 07:52 PM Backport #42035 (Resolved): nautilus: client: lseek function does not return the correct value.
- https://github.com/ceph/ceph/pull/30762
- 07:52 PM Backport #42034 (Resolved): mimic: client: lseek function does not return the correct value.
- https://github.com/ceph/ceph/pull/30932
- 06:05 PM Bug #42020 (Fix Under Review): qa: fuse_mount should check if mounted in umount_wait
- 08:20 AM Bug #42020 (Resolved): qa: fuse_mount should check if mounted in umount_wait
- ...
- 11:32 AM Feature #41311 (Resolved): deprecate CephFS inline_data support
- 11:30 AM Bug #41148 (Pending Backport): client: _readdir_cache_cb() may use the readdir_cache already clear
- 11:16 AM Cleanup #41665 (Resolved): mds: reorg Locker header
- 11:11 AM Bug #41837 (Pending Backport): client: lseek function does not return the correct value.
- 08:34 AM Bug #42022 (Need More Info): mgr/volumes: "Timed out after 30s waiting for ./volumes/_deleting to...
- ...
- 07:03 AM Bug #39987: mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
- Nathan Cutler wrote:
> When there are multiple PRs fixing a single tracker, it's a good idea to "unset" (depopulate)...
- 03:04 AM Documentation #41472 (Resolved): doc: add multiple active MDSs and Subtree Management in CephFS
- 03:00 AM Documentation #41872 (Fix Under Review): doc: update CephFS Quick Start guide
- 01:47 AM Documentation #42016 (Resolved): doc: layout rest of intro page
- Include links to different sections of CephFS Documentation: "Concepts" (architecture), "Getting Started", "Mounting"...
09/23/2019
- 05:30 PM Backport #41890 (In Progress): nautilus: mount.ceph: enable consumption of ceph keyring files
- 04:19 AM Backport #41890: nautilus: mount.ceph: enable consumption of ceph keyring files
- Should pick up backport for https://tracker.ceph.com/issues/41892 as well once it's merged.
- 05:16 PM Bug #39987: mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
- When there are multiple PRs fixing a single tracker, it's a good idea to "unset" (depopulate) the Pull request ID fie...
- 01:59 PM Documentation #41999 (Resolved): CephFS Documentation Sprint 2
- 01:24 PM Documentation #41952 (In Progress): doc: cleanup CephFS landing page
- 07:35 AM Documentation #41952 (Resolved): doc: cleanup CephFS landing page
- Remove links on the CephFS landing page, or move them to the Table of Contents on the left
- 12:53 PM Bug #41337 (Resolved): mgr/volumes: handle incorrect pool_layout setting during `fs subvolume/sub...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:53 PM Bug #41371 (Resolved): mgr/volumes: subvolume and subvolume group path exists even when creation ...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:53 PM Bug #41617 (Resolved): mgr/volumes: prevent negative subvolume size
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:52 PM Bug #41752 (Resolved): mgr/volumes: drop unused size in fs volume create
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:52 PM Bug #41903 (Resolved): mgr/volumes: issuing `fs subvolume getpath` command hangs and ceph-mgr dae...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:23 PM Backport #40445: nautilus: mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
- Backport of follow-on fix: https://github.com/ceph/ceph/pull/30508
- 12:17 PM Backport #41933 (Resolved): nautilus: mgr/volumes: issuing `fs subvolume getpath` command hangs a...
- 12:16 PM Backport #41884 (Resolved): nautilus: mgr/volumes: prevent negative subvolume size
- 12:16 PM Backport #41850 (Resolved): nautilus: mgr/volumes: drop unused size in fs volume create
- 12:16 PM Backport #41444 (Resolved): nautilus: mgr/volumes: handle incorrect pool_layout setting during `f...
- 12:16 PM Backport #41437 (Resolved): nautilus: mgr/volumes: subvolume and subvolume group path exists even...
- 06:59 AM Feature #41842 (Fix Under Review): mgr/volumes: list FS subvolumes, subvolume groups, and their s...
- 06:57 AM Documentation #40689 (Resolved): mgr/volumes: document mgr fs volumes CLI
- 06:50 AM Cleanup #41951 (Resolved): mds: obsolete mds_cache_size
- mds_cache_memory_limit is preferred. Remove last bits of support for mds_cache_size.
- 05:06 AM Backport #41865: nautilus: mds: ask idle client to trim more caps
- Should also backport https://tracker.ceph.com/issues/41899
- 04:22 AM Feature #41910 (Fix Under Review): qa: allow vstart_runner to perform tests on kclient mounts
- 04:17 AM Bug #41892 (Fix Under Review): qa: convert kcephfs qa tests to use mount.ceph auto-discovery feat...
- 02:11 AM Bug #41935 (Duplicate): ceph mdss keep on crashing
- 02:04 AM Bug #41948 (Fix Under Review): nautilus: mds: incomplete backport of #40444 (MDCache::cow_inode d...
09/22/2019
- 07:34 AM Bug #41935: ceph mdss keep on crashing
- https://tracker.ceph.com/issues/41948
- 06:52 AM Bug #41948 (Resolved): nautilus: mds: incomplete backport of #40444 (MDCache::cow_inode does not ...
- backport #40445 incomplete
- 05:06 AM Documentation #41893 (Resolved): doc: mds state diagram color description mistake
09/20/2019
- 12:43 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- At a high level, here's what I think we need to do:
Add a new delegated_inos field to session_info_t in the MDS co...
- 09:59 AM Backport #41933 (In Progress): nautilus: mgr/volumes: issuing `fs subvolume getpath` command hang...
- https://github.com/ceph/ceph/pull/29926
09/19/2019
- 03:42 PM Bug #41935: ceph mdss keep on crashing
- Looks like the backport of https://tracker.ceph.com/issues/39987 to nautilus was incomplete, it's missing https://git...
- 02:23 PM Bug #41935: ceph mdss keep on crashing
- ...
- 02:21 PM Bug #41935 (Duplicate): ceph mdss keep on crashing
- I updated Ceph to 14.2.3 yesterday. Everything was running fine, but today the MDSes started crashing. I tried restarting all...
- 01:30 PM Backport #41933 (Resolved): nautilus: mgr/volumes: issuing `fs subvolume getpath` command hangs a...
- https://github.com/ceph/ceph/pull/29926
- 01:29 PM Bug #41903 (Pending Backport): mgr/volumes: issuing `fs subvolume getpath` command hangs and ceph...
- 10:10 AM Bug #40283 (Fix Under Review): qa: add testing for lazyio
09/18/2019
- 02:12 PM Feature #41910 (Resolved): qa: allow vstart_runner to perform tests on kclient mounts
- Add a new --kclient switch to vstart_runner that tells it to use kernel mounts instead of FUSE.
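An illustrative sketch of how such a switch could be wired up (the argparse usage and the mount-class names below are assumptions for this sketch, not the actual vstart_runner.py code):

```python
# Hypothetical sketch: add a --kclient flag that selects kernel mounts
# over FUSE mounts. Names here are illustrative only.
import argparse

def parse_args(argv):
    p = argparse.ArgumentParser()
    p.add_argument("--kclient", action="store_true",
                   help="run tests against kernel CephFS mounts "
                        "instead of ceph-fuse")
    return p.parse_args(argv)

opts = parse_args(["--kclient"])
# Downstream, the runner would pick the mount implementation accordingly:
mount_class = "KernelMount" if opts.kclient else "FuseMount"
```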
- 10:57 AM Backport #41889 (In Progress): nautilus: mgr/volumes: retry spawning purge threads on failure
- 10:57 AM Bug #41892: qa: convert kcephfs qa tests to use mount.ceph auto-discovery features
- Instead, I think we'll just convert the existing kernel_mount.py code to use the new functionality so that this ...
- 08:17 AM Bug #41903 (Fix Under Review): mgr/volumes: issuing `fs subvolume getpath` command hangs and ceph...
- 05:55 AM Bug #41903 (Resolved): mgr/volumes: issuing `fs subvolume getpath` command hangs and ceph-mgr dae...
- $ ceph fs subvolume create vol00 subvol00
$ ceph fs subvolume getpath vol00 subvol00
The command just hangs and c...
- 03:17 AM Bug #41799: client: FAILED assert(cap == in->auth_cap)
- (gdb) p m->peer
$1 = {cap_id = {v = 2782052343}, seq = {v = 4}, mseq = {v = 0}, mds = {v = 1}, flags = 2 '\002'}
- 01:06 AM Backport #41856 (In Progress): mimic: client: removing dir reports "not empty" issue due to clien...
- https://github.com/ceph/ceph/pull/30443
- 01:05 AM Backport #41855 (In Progress): nautilus: client: removing dir reports "not empty" issue due to cl...
- https://github.com/ceph/ceph/pull/30442
09/17/2019
- 06:07 PM Backport #41899 (Resolved): nautilus: mds: cache drop command does not drive cap recall
- https://github.com/ceph/ceph/pull/30761
- 02:28 PM Backport #41884 (In Progress): nautilus: mgr/volumes: prevent negative subvolume size
- https://github.com/ceph/ceph/pull/29926
- 08:33 AM Backport #41884 (Resolved): nautilus: mgr/volumes: prevent negative subvolume size
- 02:00 PM Backport #41850 (In Progress): nautilus: mgr/volumes: drop unused size in fs volume create
- https://github.com/ceph/ceph/pull/29926
- 01:29 PM Bug #41835 (Pending Backport): mds: cache drop command does not drive cap recall
- 11:35 AM Documentation #41893 (Resolved): doc: mds state diagram color description mistake
- The document mds-states.rst has a mistake in its description of the state diagram colors.
- 10:36 AM Bug #41892 (Resolved): qa: convert kcephfs qa tests to use mount.ceph auto-discovery features
- Recently, a patchset was merged that added the ability for mount.ceph to discover mon addrs and secrets from a local ...
- 09:26 AM Feature #41842 (In Progress): mgr/volumes: list FS subvolumes, subvolume groups, and their snapshots
- 08:40 AM Backport #41890 (Resolved): nautilus: mount.ceph: enable consumption of ceph keyring files
- https://github.com/ceph/ceph/pull/30521
- 08:35 AM Backport #41889 (Resolved): nautilus: mgr/volumes: retry spawning purge threads on failure
- https://github.com/ceph/ceph/pull/30455
- 08:34 AM Backport #41888 (Resolved): nautilus: client: lazyio synchronize does not get file size
- https://github.com/ceph/ceph/pull/30769
- 08:34 AM Backport #41887 (Rejected): mimic: client: lazyio synchronize does not get file size
- https://github.com/ceph/ceph/pull/30931
- 08:34 AM Backport #41886 (Resolved): nautilus: mds: client evicted twice in one tick
- https://github.com/ceph/ceph/pull/30951
- 08:34 AM Backport #41885 (Resolved): mimic: mds: client evicted twice in one tick
- https://github.com/ceph/ceph/pull/30950
- 08:06 AM Bug #41836: qa: "cluster [ERR] Error recovering journal 0x200: (2) No such file or directory" i...
- I think we can just whitelist the error 'Error recovering journal'
- 07:04 AM Bug #41880 (Resolved): mds: split the dir if the op makes it oversized, because some ops may be in ...
- 06:48 AM Bug #41841 (Fix Under Review): mgr/volumes: missing protection for `fs volume rm` command
- 04:00 AM Backport #41851 (In Progress): nautilus: mds: MDSIOContextBase instance leak
- https://github.com/ceph/ceph/pull/30418
- 03:59 AM Backport #41852 (In Progress): mimic: mds: MDSIOContextBase instance leak
- https://github.com/ceph/ceph/pull/30417
- 03:46 AM Bug #41799: client: FAILED assert(cap == in->auth_cap)
- I'd like to know why the cap import message's seq is 1 and mseq is 0. Please use gdb to print the cap import message's peer ...
- 12:46 AM Bug #41868: mds: mds returns -5 error when the deleted file does not exist
- Jeff Layton wrote:
> I agree that EIO makes no sense here, but since you're looking up files by inode number, -ESTAL...
09/16/2019
- 11:50 PM Feature #16656 (Pending Backport): mount.ceph: enable consumption of ceph keyring files
- 08:09 PM Bug #41799 (Fix Under Review): client: FAILED assert(cap == in->auth_cap)
- 08:04 PM Bug #41218 (Pending Backport): mgr/volumes: retry spawning purge threads on failure
- 08:04 PM Bug #41219 (Resolved): mgr/volumes: send purge thread (and other) health warnings to `ceph status`
- Backport tracked by #41218.
- 08:00 PM Bug #41310 (Pending Backport): client: lazyio synchronize does not get file size
- 07:59 PM Cleanup #41178 (Resolved): mds: reorg DamageTable header
- 07:59 PM Cleanup #41178 (Pending Backport): mds: reorg DamageTable header
- 07:58 PM Cleanup #41430 (Resolved): mds: reorg JournalPointer header
- 07:55 PM Bug #41585 (Pending Backport): mds: client evicted twice in one tick
- 07:06 PM Bug #41728 (Need More Info): mds: hang during fragmentdir
- 02:28 PM Bug #41728: mds: hang during fragmentdir
- Thanks!
- 01:56 PM Bug #41728: mds: hang during fragmentdir
- Nathan Fish wrote:
> When doing a parallel cp, the active MDS on the CephFS hung on a fragmentdir op.
> It might be...
- 06:44 PM Bug #41617 (Pending Backport): mgr/volumes: prevent negative subvolume size
- 06:37 PM Documentation #41451 (Resolved): Document distributed metadata cache
- 05:36 PM Bug #41868: mds: mds returns -5 error when the deleted file does not exist
- I agree that EIO makes no sense here, but since you're looking up files by inode number, -ESTALE would probably make ...
- 01:46 PM Bug #41868 (Fix Under Review): mds: mds returns -5 error when the deleted file does not exist
- 12:04 PM Bug #41868 (Resolved): mds: mds returns -5 error when the deleted file does not exist
- There are 2 nfs-ganesha ends:
1. The A side uses readdir to get all the file information in a directory,
and uses ...
- 03:01 PM Documentation #41872 (Resolved): doc: update CephFS Quick Start guide
- 02:56 PM Bug #41871: client: return error when someone passes bad whence value to llseek
- s/ceph_assert/ceph_abort/
- 01:52 PM Bug #41871 (Resolved): client: return error when someone passes bad whence value to llseek
- There are a number of ceph_assert calls in src/client/Client.cc that are probably not necessary. There are calls in l...
- 01:48 PM Bug #41837 (Fix Under Review): client: lseek function does not return the correct value.
- 02:41 AM Bug #41837 (Resolved): client: lseek function does not return the correct value.
- If pos is initialized to -1 in the lseek function, then when offset is 0, EINVAL may be returned.
- 11:36 AM Bug #41841 (In Progress): mgr/volumes: missing protection for `fs volume rm` command
- 06:10 AM Bug #41841 (Resolved): mgr/volumes: missing protection for `fs volume rm` command
- Currently one can remove a filesystem, its data and metadata pools, and MDSes with a `fs volume rm` ceph mgr command. May...
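A minimal sketch of the kind of guard the ticket asks for (the handler shape and return convention are simplified assumptions; the confirmation flag mirrors the usual Ceph `--yes-i-really-mean-it` convention but is an assumption for this sketch):

```python
# Hypothetical sketch: refuse to tear down a volume (and its pools and
# MDS daemons) unless the caller confirms explicitly.
import errno

def volume_rm(vol_name, confirm=None):
    if confirm != "--yes-i-really-mean-it":
        return (-errno.EPERM, "",
                "removing volume '%s' deletes its pools and data "
                "PERMANENTLY; pass --yes-i-really-mean-it to proceed"
                % vol_name)
    # ... stop MDS daemons, then delete the data/metadata pools ...
    return (0, "", "volume '%s' removed" % vol_name)
```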
- 10:52 AM Feature #40959 (Fix Under Review): mgr/volumes: allow setting uid, gid of subvolume and subvolume...
- 07:21 AM Backport #41865 (Resolved): nautilus: mds: ask idle client to trim more caps
- https://github.com/ceph/ceph/pull/30761
- 07:18 AM Backport #41861 (Rejected): nautilus: cephfs-shell: du must ignore non-directory files
- 07:17 AM Backport #41857 (Resolved): luminous: client: removing dir reports "not empty" issue due to clien...
- https://github.com/ceph/ceph/pull/33292
- 07:17 AM Backport #41856 (Resolved): mimic: client: removing dir reports "not empty" issue due to client s...
- https://github.com/ceph/ceph/pull/30443
- 07:17 AM Backport #41855 (Resolved): nautilus: client: removing dir reports "not empty" issue due to clien...
- https://github.com/ceph/ceph/pull/30442
- 07:15 AM Backport #41854 (Rejected): mimic: mds: reject sessionless messages
- https://github.com/ceph/ceph/pull/30908
- 07:15 AM Backport #41853 (Resolved): nautilus: mds: reject sessionless messages
- https://github.com/ceph/ceph/pull/30843
- 07:15 AM Backport #41852 (Resolved): mimic: mds: MDSIOContextBase instance leak
- https://github.com/ceph/ceph/pull/30417
- 07:15 AM Backport #41851 (Resolved): nautilus: mds: MDSIOContextBase instance leak
- https://github.com/ceph/ceph/pull/30418
- 07:15 AM Backport #41850 (Resolved): nautilus: mgr/volumes: drop unused size in fs volume create
- 06:23 AM Feature #41842 (Resolved): mgr/volumes: list FS subvolumes, subvolume groups, and their snapshots
- Add commands to list FS subvolume, subvolume groups, and their snapshots
- 01:08 AM Bug #41836 (Resolved): qa: "cluster [ERR] Error recovering journal 0x200: (2) No such file or d...
- From: /ceph/teuthology-archive/pdonnell-2019-09-15_06:11:06-fs-wip-pdonnell-testing-20190915.030958-distro-basic-smit...
- 12:55 AM Bug #41835: mds: cache drop command does not drive cap recall
- Backport of #22446 is only for nautilus.
- 12:54 AM Bug #41835 (Fix Under Review): mds: cache drop command does not drive cap recall
- 12:50 AM Bug #41835 (Resolved): mds: cache drop command does not drive cap recall
- ...
09/13/2019
- 07:40 PM Bug #39641 (Resolved): cephfs-shell: 'du' command produces incorrect results
- Will be backported via #40371.
- 07:40 PM Bug #40371 (Pending Backport): cephfs-shell: du must ignore non-directory files
- 07:08 PM Documentation #40689 (Fix Under Review): mgr/volumes: document mgr fs volumes CLI
- 07:02 PM Bug #41752 (Pending Backport): mgr/volumes: drop unused size in fs volume create
- 06:25 PM Documentation #41826 (Resolved): doc: update CephFS summary and introduction
- 06:24 PM Documentation #41451 (Fix Under Review): Document distributed metadata cache
- 06:22 PM Documentation #41470 (Fix Under Review): Document requirements for using cephfs
- 06:19 PM Documentation #41738 (In Progress): Add documentation for that 'client direct access to data pool'
- 06:17 PM Documentation #41825 (Resolved): CephFS Documentation Sprint 1
- 06:12 PM Feature #41824 (New): mds: aggregate subtree authorities for display in `fs top`
- Each MDS is only aware of subtrees that border its own authoritative subtrees. This also affects rank 0.
Have each...
- 03:37 PM Feature #36608 (Resolved): mds: answering all pending getattr/lookups targeting the same inode in...
- 03:35 PM Feature #22446 (Pending Backport): mds: ask idle client to trim more caps
- 03:34 PM Bug #40746 (Pending Backport): client: removing dir reports "not empty" issue due to client side ...
- 03:32 PM Bug #41329 (Pending Backport): mds: reject sessionless messages
- 03:30 PM Bug #41346 (Pending Backport): mds: MDSIOContextBase instance leak
09/12/2019
- 11:24 PM Cleanup #41185 (Resolved): mds: reorg FSMapUser header
- 11:23 PM Cleanup #41428 (Resolved): mds: reorg InoTable header
- 11:22 PM Cleanup #41607 (Resolved): mds: reorg Anchor header
- 11:20 PM Bug #41654 (Resolved): mds: reorg LocalLock header
- 11:19 PM Cleanup #41679 (Resolved): mds: reorg LogEvent header
- 08:05 PM Bug #41800 (Resolved): qa: logrotate should tolerate connection resets
- During kclient runs, we reboot nodes. The logrotate exception causes the test to fail:...
- 04:49 PM Bug #41799: client: FAILED assert(cap == in->auth_cap)
- The issue affects all releases, including master
- 04:49 PM Bug #41799 (Resolved): client: FAILED assert(cap == in->auth_cap)
- The log below explains the issue clearly: the auth_cap was set to NULL in a previous remove_caps, and when add_update_cap...
- 06:17 AM Documentation #41472 (In Progress): doc: add multiple active MDSs and Subtree Management in CephFS
- 06:05 AM Documentation #41783 (Resolved): doc: document MDSs journaling mechanism and metadata pool
- 04:51 AM Bug #41759: mgr/volumes: test_async_subvolume_rm fails since purge threads did not cleanup trash ...
- Patrick Donnelly wrote:
> [...]
>
> From the teuthology log.
yeh -- that masks the logging of the actual trace...
09/11/2019
- 10:03 PM Fix #41782 (Resolved): mds: allow stray directories to fragment and switch from 10 stray director...
- Stray directories can become too full which can result in unexpected ENOSPC errors. See for example, #41778.
Evalu...
- 05:54 PM Bug #41759: mgr/volumes: test_async_subvolume_rm fails since purge threads did not cleanup trash ...
- ...
- 01:03 PM Bug #41759: mgr/volumes: test_async_subvolume_rm fails since purge threads did not cleanup trash ...
- As seen from the MDS log, there are no filesystem ops after the rename ack to the client. This hints that the purge t...
- 11:01 AM Bug #41759 (Can't reproduce): mgr/volumes: test_async_subvolume_rm fails since purge threads did ...
- Patrick saw this recently here: http://qa-proxy.ceph.com/teuthology/pdonnell-2019-09-11_00:33:51-fs-wip-pdonnell-test...
- 03:07 PM Bug #41778 (New): 'No space left on device' due to snapshots
- When using snapshots, we are getting 'no space left on device' when num_strays is close to a million.
We only have l...
- 01:05 PM Feature #41763 (New): Support decommissioning of additional data pools
- Adding additional data pools via @ceph fs add_data_pool@ is very easy, but once a pool is in use, it is very hard to ...
- 09:12 AM Feature #40959 (In Progress): mgr/volumes: allow setting uid, gid of subvolume and subvolume grou...
- 12:58 AM Bug #41752 (Resolved): mgr/volumes: drop unused size in fs volume create
09/10/2019
- 09:09 PM Bug #39511 (Rejected): Cannot remove CephFS snapshot with leading underscore (_)
- 07:41 PM Bug #41140: mds: trim cache more regularly
- Jan Fajerski wrote:
> As this won't be backported to luminous and many of the mentioned mds options don't exist in l...
- 07:33 AM Documentation #41738 (Resolved): Add documentation for that 'client direct access to data pool'
- 05:59 AM Documentation #41725 (In Progress): Document on-disk format of inodes
- 03:34 AM Bug #41651: dbench: command not found
- Patrick Donnelly wrote:
> Can you link to the failure on pulpito?
Sorry, I can't link to the failure on pulpito he...
09/09/2019
- 08:38 PM Bug #41398 (Resolved): qa: KeyError: 'cluster' in ceph.stop
- 06:29 PM Bug #41728 (Can't reproduce): mds: hang during fragmentdir
- When doing a parallel cp, the active MDS on the CephFS hung on a fragmentdir op.
It might be this bug: http://lists....
- 02:58 PM Bug #41651 (Fix Under Review): dbench: command not found
- Can you link to the failure on pulpito?
- 01:39 PM Bug #41585 (Fix Under Review): mds: client evicted twice in one tick
- 01:00 PM Documentation #41725 (New): Document on-disk format of inodes
- Document on-disk format of inodes
- 12:17 PM Feature #41182 (Fix Under Review): mgr/volumes: add `fs subvolume extend/shrink` commands
- 12:16 PM Bug #41694 (Fix Under Review): qa/tasks: Fix raises that doesn't re-raise in test_volumes.py
- 08:46 AM Documentation #41470: Document requirements for using cephfs
- 03:41 AM Bug #41426: mds: wrongly signals directory is empty when dentry is damaged?
- The readdir is unexpected and the log does not include it. The client should just issue a lookup.
See https://github.co...
09/07/2019
- 09:21 AM Backport #41489 (In Progress): luminous: client: client should return EIO when it's unsafe reqs h...
- 09:19 AM Backport #41487 (In Progress): mimic: client: client should return EIO when it's unsafe reqs have...
- 09:18 AM Backport #41476 (Rejected): mimic: cephfs-data-scan scan_links FAILED ceph_assert(p->second >= be...
- this is fixing an issue in InoTable::force_consume_to(), a function that was added by feature PR https://github.com/c...
- 09:13 AM Backport #41466 (In Progress): mimic: mount.ceph: doesn't accept "strictatime"
- 09:01 AM Backport #40899 (In Progress): mimic: mds: only evict an unresponsive client when another client ...
- 08:55 AM Backport #40896 (In Progress): mimic: ceph_volume_client: fs_name must be converted to string bef...
- 08:49 AM Backport #40886 (Need More Info): mimic: ceph_volume_client: to_bytes converts NoneType object str
- 08:49 AM Backport #40856 (Need More Info): mimic: ceph_volume_client: python program embedded in test_volu...
- 08:45 AM Backport #40853 (In Progress): mimic: test_volume_client: test_put_object_versioned is unreliable
- 08:42 AM Backport #40844 (In Progress): mimic: MDSMonitor: use stringstream instead of dout for mds repaired
- 08:41 AM Backport #40444 (In Progress): mimic: mds: MDCache::cow_inode does not cleanup unneeded client_sn...
- 08:33 AM Backport #41114 (Need More Info): mimic: client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
- non-trivial conflicts
- 07:55 AM Backport #41129 (In Progress): mimic: qa: power off still resulted in client sending session close
- 03:01 AM Bug #41694 (Resolved): qa/tasks: Fix raises that doesn't re-raise in test_volumes.py
- Possible:
> TypeError: exceptions must be old-style classes or derived from BaseException, not NoneType
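A sketch of the anti-pattern behind that error (helper names are hypothetical, not the actual test_volumes.py code): a bare `raise` executed with no active exception tries to re-raise None, which on Python 2 produces exactly the TypeError quoted above (Python 3 raises a RuntimeError instead). The fix is to raise a real exception object:

```python
# Hypothetical sketch of a "this command must fail" test helper.
class CommandFailedError(Exception):
    pass

def expect_command_failure(fn):
    try:
        fn()
    except CommandFailedError:
        return True
    # Fixed version: raise a real exception. The buggy code had a bare
    # `raise` here, where there is no active exception to re-raise.
    raise AssertionError("command unexpectedly succeeded")
```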
09/06/2019
- 10:31 PM Backport #40841 (In Progress): mimic: ceph-fuse: mount does not support the fallocate()
- 04:08 PM Documentation #41688 (Resolved): doc: client config reference improvements
- The client config docs
https://docs.ceph.com/docs/master/cephfs/client-config-ref/
does not mention where these...
- 08:59 AM Cleanup #41679 (Fix Under Review): mds: reorg LogEvent header
- 07:45 AM Cleanup #41679 (Resolved): mds: reorg LogEvent header
- 07:49 AM Cleanup #41678 (Fix Under Review): mds: reorg LogSegment header
- 07:44 AM Cleanup #41678 (Resolved): mds: reorg LogSegment header
- 07:20 AM Bug #39511: Cannot remove CephFS snapshot with leading underscore (_)
- Sorry for my late reply. Yes this is correct. I improved my script and it works now.
Thanks.
You can close the i...
- 05:10 AM Bug #41147: mds: crash loop - Server.cc 6835: FAILED ceph_assert(in->first <= straydn->first)
- Happened again.
09/05/2019
- 06:43 AM Cleanup #41665 (Fix Under Review): mds: reorg Locker header
- 06:39 AM Cleanup #41665 (Resolved): mds: reorg Locker header
09/04/2019
- 01:56 PM Bug #41654 (Fix Under Review): mds: reorg LocalLock header
- 01:54 PM Bug #41654 (Resolved): mds: reorg LocalLock header
- 01:41 PM Bug #41651 (Closed): dbench: command not found
- ran teuthology: qa/suites/fs/verify/tasks/cfuse_workunit_suites_dbench.yaml
teuthology log:...
- 07:28 AM Bug #41140: mds: trim cache more regularly
- As this won't be backported to luminous and many of the mentioned mds options don't exist in luminous, is there a way...
09/03/2019
- 02:51 PM Backport #40494 (In Progress): mimic: test_volume_client: declare only one default for python ver...
- 02:47 PM Backport #40442 (In Progress): mimic: libcephfs: returns ESTALE to nfs-ganesha's FSAL_CEPH when o...
- 01:41 PM Bug #41617 (Fix Under Review): mgr/volumes: prevent negative subvolume size
- 01:40 PM Bug #41617 (Resolved): mgr/volumes: prevent negative subvolume size
- $ ./bin/ceph fs subvolume create myfs mysubvol --size -10 --group_name mygroup --pool_layout mycephfs_data --mode 777...
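A sketch of the validation the fix needs (names are hypothetical, not the actual mgr/volumes code): reject non-positive sizes before creating the subvolume instead of passing the raw value through to the layout:

```python
# Hypothetical sketch: validate a user-supplied subvolume size.
import errno

def parse_subvolume_size(size_arg):
    try:
        size = int(size_arg)
    except ValueError:
        return -errno.EINVAL     # not a number at all
    if size <= 0:
        return -errno.EINVAL     # e.g. the "--size -10" from the report
    return size
```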
- 06:09 AM Cleanup #41607 (Fix Under Review): mds: reorg Anchor header
- 06:06 AM Cleanup #41607 (Resolved): mds: reorg Anchor header
09/01/2019
- 02:32 PM Bug #40297 (Resolved): cephfs-shell: Produces TypeError on passing '*' pattern to ls, rm or rmdir
- Issue fixed by this PR: https://github.com/ceph/ceph/pull/29552
- 02:18 PM Backport #41269 (In Progress): nautilus: cephfs-shell: Convert files path type from string to bytes
- I have fixed the conflicts. Please review the PR: https://github.com/ceph/ceph/pull/30057
08/30/2019
- 02:46 PM Backport #41488 (In Progress): nautilus: client: client should return EIO when it's unsafe reqs h...
- 02:46 PM Backport #41488 (New): nautilus: client: client should return EIO when it's unsafe reqs have been...
- 02:45 PM Backport #41488 (In Progress): nautilus: client: client should return EIO when it's unsafe reqs h...
- 02:33 PM Backport #41477 (In Progress): nautilus: cephfs-data-scan scan_links FAILED ceph_assert(p->second...
- 12:47 PM Backport #41468 (Need More Info): mimic: mds: recall capabilities more regularly when under cache...
- non-trivial
- 12:45 PM Bug #41140 (Resolved): mds: trim cache more regularly
- Since #41141 is fixed by the same PR, we'll handle the backports there.
- 12:36 PM Backport #41467 (In Progress): nautilus: mds: recall capabilities more regularly when under cache...
- 12:30 PM Backport #41465 (In Progress): nautilus: mount.ceph: doesn't accept "strictatime"
- 12:28 PM Backport #41276 (In Progress): nautilus: qa: malformed job
- 09:40 AM Backport #41283 (In Progress): nautilus: cephfs-shell: No error message is printed on ls of inval...
- 09:31 AM Backport #41113 (In Progress): nautilus: client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
- 09:22 AM Backport #40900 (In Progress): nautilus: mds: only evict an unresponsive client when another clie...
- 09:01 AM Backport #40897 (In Progress): nautilus: ceph_volume_client: fs_name must be converted to string ...
- 09:00 AM Backport #40495 (In Progress): nautilus: test_volume_client: declare only one default for python ...
- 08:59 AM Backport #40857 (In Progress): nautilus: ceph_volume_client: python program embedded in test_volu...
- 08:58 AM Backport #40854 (In Progress): nautilus: test_volume_client: test_put_object_versioned is unreliable
- 08:57 AM Backport #40887 (In Progress): nautilus: ceph_volume_client: to_bytes converts NoneType object str
- 08:55 AM Bug #39510: test_volume_client: test_put_object_versioned is unreliable
- 27718 was replaced by https://github.com/ceph/ceph/pull/28692
- 08:50 AM Bug #39405: ceph_volume_client: python program embedded in test_volume_client.py use python2.7
- 27718 was replaced by https://github.com/ceph/ceph/pull/28692
- 08:45 AM Backport #41112 (In Progress): nautilus: cephfs-shell: cd with no args has no effect
- 08:38 AM Backport #41269 (Need More Info): nautilus: cephfs-shell: Convert files path type from string to ...
- conflicts
- 08:37 AM Backport #41268 (In Progress): nautilus: cephfs-shell: onecmd throws TypeError
- 08:34 AM Backport #41118 (In Progress): nautilus: cephfs-shell: add CI testing with flake8
- 08:12 AM Bug #41585 (Resolved): mds: client evicted twice in one tick
- 2019-08-09 14:41:39.292140 7fd33eba7700 0 log_channel(cluster) log [WRN] : client id 2646901 has not responded to ca...
- 08:12 AM Backport #41105 (In Progress): nautilus: cephfs-shell: flake8 blank line and indentation error
- 08:11 AM Backport #40898 (In Progress): nautilus: cephfs-shell: Error messages are printed to stdout
- 08:09 AM Backport #40895 (In Progress): nautilus: pybind: Add standard error message and fix print of path...
- 08:06 AM Backport #40131 (In Progress): nautilus: Document behaviour of fsync-after-close
- 07:47 AM Bug #41204: CephFS pool usage 3x above expected value and sparse journal dumps
- It's Bluestore on spinning disks. I don't really have an overview of the data distribution, it's very uneven. Perhaps...
- 05:40 AM Bug #41581 (In Progress): pybind/mgr: Fix subvolume options
- > $ ./bin/ceph fs subvolume create
> Invalid command: missing required parameter vol_name(<string>)
> fs subvolume ...
08/29/2019
- 11:38 AM Bug #41204: CephFS pool usage 3x above expected value and sparse journal dumps
- Janek Bevendorff wrote:
> Little status update: our data pool now uses up 186TiB while only storing 53TiB of actual ... - 09:30 AM Bug #41204: CephFS pool usage 3x above expected value and sparse journal dumps
- Little status update: our data pool now uses up 186TiB while only storing 53TiB of actual data with a replication fac...
- 05:42 AM Backport #41128 (In Progress): nautilus: qa: power off still resulted in client sending session c...
- https://github.com/ceph/ceph/pull/29983
08/28/2019
- 10:22 PM Feature #41566 (In Progress): mds: support rolling upgrades
- The MDS currently does not support rolling upgrades. Normally we recommend upgrading all MDS at the same time for thi...
- 10:13 PM Bug #41565 (Resolved): mds: detect MDS<->MDS messages that are not versioned
- Inter-MDS messages are now versioned. We should add a check that confirms that no current or new messages sent betwee...
- 10:07 PM Bug #14807 (Can't reproduce): MDS crashes repeatedly after upgrade to Infernalis from Hammer
- 10:07 PM Feature #15506 (Resolved): qa: run at least one upgrade test in the FS suite
- We've been doing testing of this since Mimic in fs:upgrade.
- 10:03 PM Feature #12107 (Resolved): mds: use versioned wire protocol; obviate CEPH_MDS_PROTOCOL
- 02:16 AM Backport #41108 (In Progress): mimic: mds: disallow setting ceph.dir.pin value exceeding max rank id
- https://github.com/ceph/ceph/pull/29940
- 12:50 AM Backport #41107 (In Progress): nautilus: mds: disallow setting ceph.dir.pin value exceeding max r...
- https://github.com/ceph/ceph/pull/29938
08/27/2019
- 10:38 PM Bug #41541 (Resolved): mgr/volumes: ephemerally pin volumes
- Apply export_ephemeral_distributed to volumes by default, and provide an option to fall back to the default balancer.
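As a hedged illustration of what applying distributed ephemeral pinning to volumes looks like: the policy is controlled by a directory vxattr, and setting it on a subvolume group directory spreads that group's child subvolumes across the active MDS ranks. The mount path below is an example, not part of the tracker entry:

```shell
# Enable distributed ephemeral pinning on a subvolume group directory.
setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs/volumes/_nogroup

# Setting the value back to 0 returns the subtree to the default balancer.
setfattr -n ceph.dir.pin.distributed -v 0 /mnt/cephfs/volumes/_nogroup
```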
- 04:42 PM Bug #41538 (Resolved): mds: wrong compat can cause MDS to be added daemon registry on mgr but not...
- See Kefu's excellent synopsis of the problem: https://tracker.ceph.com/issues/41525#note-3
- 01:12 PM Backport #40343: luminous: mds: fix corner case of replaying open sessions
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28536
m... - 01:12 PM Backport #40041: luminous: avoid trimming too many log segments after mds failover
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28543
m... - 01:12 PM Backport #40221: luminous: mds: reset heartbeat during long-running loops in recovery
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28544
m... - 10:58 AM Backport #38686: luminous: kcephfs TestClientLimits.test_client_pin fails with "client caps fell ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/27040
m... - 10:58 AM Backport #38445: luminous: mds: drop cache does not timeout as expected
- backport PR https://github.com/ceph/ceph/pull/27342
merge commit 5154062f2c4a1499ce74a518eb7bb54e9560aad5 (v12.2.12-... - 10:58 AM Backport #38340: luminous: mds: may leak gather during cache drop
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/27342
m... - 10:58 AM Backport #38877: luminous: mds: high debug logging with many subtrees is slow
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/27679
m... - 10:57 AM Backport #39191: luminous: mds: crash during mds restart
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/27737
m... - 10:57 AM Backport #39198: luminous: mds: we encountered "No space left on device" when moving huge number ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/27801
m... - 10:56 AM Backport #39208: luminous: mds: mds_cap_revoke_eviction_timeout is not used to initialize Server:...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/27840
m... - 10:55 AM Backport #39468: luminous: There is no punctuation mark or blank between tid and client_id in th...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/27848
m... - 10:55 AM Backport #39221: luminous: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28432
m... - 10:55 AM Backport #39231: luminous: kclient: nofail option not supported
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28436
m... - 10:55 AM Backport #40160: luminous: FSAL_CEPH assertion failed in Client::_lookup_name: "parent->is_dir()"
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28437
m... - 10:54 AM Backport #39213: luminous: mds: there is an assertion when calling Beacon::shutdown()
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28438
m... - 09:36 AM Feature #41182 (In Progress): mgr/volumes: add `fs subvolume extend/shrink` commands
- Look at OpenStack manila's cephfs driver extend_share and shrink_share method implementation,
https://github.com/o... - 09:09 AM Backport #41444 (In Progress): nautilus: mgr/volumes: handle incorrect pool_layout setting during...
- https://github.com/ceph/ceph/pull/29926
- 09:09 AM Backport #41437 (In Progress): nautilus: mgr/volumes: subvolume and subvolume group path exists e...
- https://github.com/ceph/ceph/pull/29926
- 08:51 AM Bug #23975 (Resolved): qa: TestVolumeClient.test_lifecycle needs updated for new eviction behavior
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:51 AM Bug #24133 (Resolved): mds: broadcast quota to relevant clients when quota is explicitly set
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:22 AM Backport #40222: mimic: mds: reset heartbeat during long-running loops in recovery
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28918
m... - 07:22 AM Backport #40875: mimic: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29187
m... - 07:22 AM Backport #39685: mimic: ceph-fuse: client hang because its bad session PipeConnection to mds
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29200
m... - 07:21 AM Backport #38099: mimic: mds: remove cache drop admin socket command
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29210
m... - 07:21 AM Backport #38687: mimic: kcephfs TestClientLimits.test_client_pin fails with "client caps fell bel...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29211
m... - 03:21 AM Backport #41100 (In Progress): mimic: tools/cephfs: memory leak in cephfs/Resetter.cc
- https://github.com/ceph/ceph/pull/29915
- 03:19 AM Backport #41106 (In Progress): nautilus: mds: add command that modify session metadata
- https://github.com/ceph/ceph/pull/29914
08/26/2019
- 08:26 PM Backport #39233: mimic: kclient: nofail option not supported
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28090
m... - 08:26 PM Backport #39472: mimic: mds: fail to resolve snapshot name contains '_'
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28186
m... - 08:25 PM Backport #39669: mimic: mds: output lock state in format dump
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28274
m... - 08:25 PM Backport #39679: mimic: pybind: add the lseek() function to pybind of cephfs
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28337
m... - 08:25 PM Backport #39689: mimic: mds: error "No space left on device" when create a large number of dirs
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28381
m... - 08:25 PM Backport #40168: mimic: client: ceph.dir.rctime xattr value incorrectly prefixes "09" to the nano...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28501
m... - 08:24 PM Backport #40342: mimic: mds: fix corner case of replaying open sessions
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28579
m... - 08:24 PM Backport #40042: mimic: avoid trimming too many log segments after mds failover
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28650
m... - 03:08 PM Backport #41508 (Resolved): nautilus: add information about active scrubs to "ceph -s" (and elsew...
- https://github.com/ceph/ceph/pull/30704
- 02:56 PM Bug #40489 (Resolved): cephfs-shell: name 'files' is not defined error in do_rm()
- 02:55 PM Bug #40679 (Resolved): cephfs-shell: TypeError in poutput
- 02:55 PM Backport #41495 (Resolved): nautilus: qa: 'ceph osd require-osd-release nautilus' fails
- https://github.com/ceph/ceph/pull/31040
- 02:51 PM Bug #40775 (Resolved): /src/include/xlist.h: 77: FAILED assert(_size == 0)
- 02:50 PM Backport #41489 (Resolved): luminous: client: client should return EIO when it's unsafe reqs have...
- https://github.com/ceph/ceph/pull/30242
- 02:50 PM Backport #41488 (Resolved): nautilus: client: client should return EIO when it's unsafe reqs have...
- https://github.com/ceph/ceph/pull/30043
- 02:50 PM Backport #41487 (Resolved): mimic: client: client should return EIO when it's unsafe reqs have be...
- https://github.com/ceph/ceph/pull/30241
- 02:49 PM Backport #41477 (Resolved): nautilus: cephfs-data-scan scan_links FAILED ceph_assert(p->second >=...
- https://github.com/ceph/ceph/pull/30041
- 02:49 PM Backport #41476 (Rejected): mimic: cephfs-data-scan scan_links FAILED ceph_assert(p->second >= be...
- 02:46 PM Documentation #41472 (Resolved): doc: add multiple active MDSs and Subtree Management in CephFS
- Give a technical description on how subtrees are handled by MDSs. Also do the same for multiple active MDSs.
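For context on the subtree-pinning entries elsewhere in this feed (e.g. "disallow setting ceph.dir.pin value exceeding max rank id"), a sketch of explicit subtree pinning; the path and rank values are illustrative only:

```shell
# Pin a directory subtree to MDS rank 1; the setting applies recursively
# until overridden lower in the tree.
setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/projects

# A value of -1 removes the pin, so the subtree inherits its parent's
# policy (or falls back to the balancer).
setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/projects
```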
- 02:45 PM Documentation #41470 (Resolved): Document requirements for using cephfs
- Communicate high-level requirements (e.g. need 1-2 MDS; at least 2 pools; key auth and distribution)
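The requirements above can be sketched as CLI steps, assuming a running cluster; pool names and PG counts are examples, not recommendations:

```shell
# At least two pools: one for data, one for metadata.
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 32

# Create the filesystem (metadata pool first); at least one MDS must be
# running for the fs to become active, with a second as standby.
ceph fs new cephfs cephfs_metadata cephfs_data

# Key auth and distribution: mint a client key scoped to this filesystem.
ceph fs authorize cephfs client.foo / rw
```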
- 02:44 PM Bug #41434 (Fix Under Review): mds: infinite loop in Locker::file_update_finish()
- 12:53 PM Bug #41434 (Resolved): mds: infinite loop in Locker::file_update_finish()
- ...
- 02:43 PM Backport #41468 (Rejected): mimic: mds: recall capabilities more regularly when under cache pressure
- 02:43 PM Backport #41467 (Resolved): nautilus: mds: recall capabilities more regularly when under cache pr...
- https://github.com/ceph/ceph/pull/30040
- 02:43 PM Backport #41466 (Resolved): mimic: mount.ceph: doesn't accept "strictatime"
- https://github.com/ceph/ceph/pull/30240
- 02:43 PM Backport #41465 (Resolved): nautilus: mount.ceph: doesn't accept "strictatime"
- https://github.com/ceph/ceph/pull/30039
- 02:30 PM Documentation #41451 (Resolved): Document distributed metadata cache
- Explain distributed metadata cache maintained by MDS/clients. This should touch on capabilities, cache management, an...
- 02:22 PM Backport #41444 (Resolved): nautilus: mgr/volumes: handle incorrect pool_layout setting during `f...
- 02:21 PM Backport #41437 (Resolved): nautilus: mgr/volumes: subvolume and subvolume group path exists even...
- 01:50 PM Bug #41419: mds: missing dirfrag damaged check before CDir::fetch
- 0> 2019-08-23 15:51:03.871241 7f990ee3e700 -1 /build/ceph-12.2.8/src/include/elist.h: In function 'elist<T>::~elist()...
- 09:04 AM Cleanup #41430 (Fix Under Review): mds: reorg JournalPointer header
- 09:00 AM Cleanup #41430 (Resolved): mds: reorg JournalPointer header
- 08:58 AM Backport #41002: nautilus: client: failed to drop dn and release caps causing mds stray stacking.
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29478
m... - 07:36 AM Cleanup #41428 (Fix Under Review): mds: reorg InoTable header
- 07:30 AM Cleanup #41428 (Resolved): mds: reorg InoTable header
- 03:50 AM Backport #41099 (In Progress): nautilus: tools/cephfs: memory leak in cephfs/Resetter.cc
- https://github.com/ceph/ceph/pull/29879
- 03:46 AM Backport #41096 (In Progress): nautilus: mds: map client_caps been inserted by mistake
- https://github.com/ceph/ceph/pull/29878