Activity
From 11/08/2017 to 12/07/2017
12/07/2017
- 03:09 AM Backport #22339: luminous: ceph-fuse: failure to remount in startup test does not handle client_d...
- https://github.com/ceph/ceph/pull/19370
- 03:01 AM Backport #22339 (Resolved): luminous: ceph-fuse: failure to remount in startup test does not hand...
- https://github.com/ceph/ceph/pull/19370
- 03:08 AM Bug #22338: mds: ceph mds stat json should use array output for info section
- Ji You wrote:
> When using `ceph mds stat -f json-pretty`, you would get output as below:
>
> [...]
>
> The proper ou...
- 02:58 AM Bug #22338 (Resolved): mds: ceph mds stat json should use array output for info section
- When using `ceph mds stat -f json-pretty`, you would get output as below:...
12/06/2017
- 09:12 PM Feature #20610: MDSMonitor: add new command to shrink the cluster in an automated way
- Hi Douglas, is this something that you're still planning on working on? If not, I'm willing to have a look at it.
- 02:51 PM Bug #22334 (New): client: throttle osd requests created by page-write
- If we create lots of small files in cephfs, page writeback may create hundreds of thousands of OSD requests. These many...
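For context on the kind of throttling Bug #22334 asks for, here is a minimal, self-contained C++ sketch of an in-flight request limiter; it is not Ceph's actual Client/Objecter code, and the RequestThrottle name and fixed max_in_flight budget are illustrative assumptions:

// Minimal sketch of an in-flight request throttle; not Ceph's actual
// Client/Objecter code.  A page-writeback path would call get() before
// issuing an OSD request and put() when the reply arrives.
#include <condition_variable>
#include <mutex>

class RequestThrottle {
public:
  explicit RequestThrottle(unsigned max_in_flight) : max_(max_in_flight) {}

  void get() {                       // block until a slot is free
    std::unique_lock<std::mutex> lk(m_);
    cv_.wait(lk, [this] { return in_flight_ < max_; });
    ++in_flight_;
  }

  void put() {                       // release a slot from the reply handler
    std::lock_guard<std::mutex> lk(m_);
    --in_flight_;
    cv_.notify_one();
  }

private:
  std::mutex m_;
  std::condition_variable cv_;
  unsigned in_flight_ = 0;
  unsigned max_;
};

With a limit like this, creating many small files still makes progress, but the client never has more than max_in_flight writeback requests outstanding at once.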
- 08:51 AM Bug #22219: mds: mds should ignore export_pin for deleted directory
- https://github.com/ceph/ceph/pull/19360
- 06:37 AM Bug #22219 (Pending Backport): mds: mds should ignore export_pin for deleted directory
- 06:37 AM Feature #22097 (Resolved): mds: change mds perf counters can statistics filesystem operations num...
- 06:36 AM Bug #22269 (Pending Backport): ceph-fuse: failure to remount in startup test does not handle clie...
12/05/2017
- 03:00 AM Bug #22263: client reconnect gather race
- https://github.com/ceph/ceph/pull/19326
- 02:54 AM Bug #22249: Need to restart MDS to release cephfs space
- It seems you have multiple clients mounting cephfs. Do you use the kernel client or ceph-fuse? Try executing "echo 3 >/proc/...
- 12:33 AM Bug #22051: tests: Health check failed: Reduced data availability: 5 pgs peering (PG_AVAILABILITY...
- Please make sure this isn't a misconfigured run or a missing log whitelist; you can kick it to RADOS if not. :)
12/04/2017
- 06:54 PM Feature #18490 (Pending Backport): client: implement delegation support in userland cephfs
- Thanks for remembering to update this ticket, Jeff. We need to backport this for Luminous as this is needed for 3.0.
...
- 02:52 PM Bug #22292: mds: scrub may mark repaired directory with lost dentries and not flush backtrace
- Greg also brought up some good points: we should also mark the directory as damaged (especially in a persistent w...
- 02:45 PM Bug #22292: mds: scrub may mark repaired directory with lost dentries and not flush backtrace
- Consensus during scrub is that this can be resolved by adding an appropriate warning to scrub output that the directo...
12/03/2017
- 10:55 AM Feature #18490 (Resolved): client: implement delegation support in userland cephfs
- Patches merged into both ceph and ganesha for this.
12/01/2017
- 10:03 PM Bug #22256 (Resolved): nfs-ganesha: crashes in free_delegrecall_context
- This was fixed by commit f332c172a2884c04a0d4e743c8858ff3e7f957a1 in ganesha (and the associated ntirpc changes).
- 09:39 PM Bug #22256 (In Progress): nfs-ganesha: crashes in free_delegrecall_context
- 09:50 PM Bug #21777: src/mds/MDCache.cc: 4332: FAILED assert(mds->is_rejoin())
- Reducing priority since we can't seem to get this reproduced.
- 02:49 AM Bug #22293: client may fail to trim as many caps as MDS asked for
- kernel patch https://github.com/ceph/ceph-client/commit/4f9b2bc31681f41fe73ddbabc6e9b9fd047af126
- 02:44 AM Bug #22293 (Fix Under Review): client may fail to trim as many caps as MDS asked for
- https://github.com/ceph/ceph/pull/19271
- 02:22 AM Bug #22293 (Resolved): client may fail to trim as many caps as MDS asked for
- Client::trim_caps() can't trim inode if it has null child dentries. If config option client_cache_size is large, Clie...
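To illustrate why the trim described in Bug #22293 can fall short of what the MDS asked for, here is a standalone sketch; the Inode struct and pinning rule below are simplified assumptions, not the real Client::trim_caps() logic:

// Standalone sketch: walk an LRU list and release caps, skipping inodes
// that are still pinned (e.g. by null child dentries).  Simplified
// stand-in types; not Ceph's actual Client code.
#include <cstddef>
#include <list>

struct Inode {
  bool pinned_by_null_dentries = false;  // pins the inode in this sketch
  bool has_cap = true;
};

// Try to release up to 'target' caps; returns how many were trimmed.
// When many inodes are pinned, the result stays well below 'target',
// which is the shortfall the bug describes.
std::size_t trim_caps(std::list<Inode*>& lru, std::size_t target) {
  std::size_t trimmed = 0;
  for (auto it = lru.begin(); it != lru.end() && trimmed < target; ) {
    Inode* in = *it;
    if (in->pinned_by_null_dentries || !in->has_cap) {
      ++it;                 // cannot trim this one, keep scanning
      continue;
    }
    in->has_cap = false;    // release the cap
    it = lru.erase(it);
    ++trimmed;
  }
  return trimmed;
}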
11/30/2017
- 11:18 PM Bug #22292 (New): mds: scrub may mark repaired directory with lost dentries and not flush backtrace
- Simple reproducer (with selected output):...
- 07:15 PM Tasks #22291 (New): add metadata thrasher to qa suite
- Add a tool to qa suite, possibly based on smallfile (https://github.com/bengland2/smallfile) to run filesystem operat...
- 06:56 PM Bug #22288 (Fix Under Review): mds: assert when inode moves during scrub
- https://github.com/ceph/ceph/pull/19263
- 04:03 PM Bug #22288 (In Progress): mds: assert when inode moves during scrub
- 03:40 PM Bug #22288 (Resolved): mds: assert when inode moves during scrub
- If an inode moves while on the scrub stack, it can be enqueued a second time and hit:
mds/CInode.cc: 4153: FAILED ...
- 06:51 PM Bug #22221 (Pending Backport): qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), actual:...
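Regarding Bug #22288 above: the assert fires when an inode lands on the scrub stack twice. Below is a minimal standalone sketch of a guard that makes enqueueing idempotent; the ScrubItem type and on_stack flag are illustrative assumptions, not the MDS's actual ScrubStack code:

// Standalone sketch of a scrub stack that refuses to enqueue the same
// item twice.  'ScrubItem' and the 'on_stack' flag are illustrative;
// this is not the MDS's actual ScrubStack code.
#include <vector>

struct ScrubItem {
  bool on_stack = false;
};

class ScrubStack {
public:
  void enqueue(ScrubItem* item) {
    if (item->on_stack)
      return;                 // already queued: ignore instead of asserting
    item->on_stack = true;
    stack_.push_back(item);
  }

  ScrubItem* pop() {
    if (stack_.empty())
      return nullptr;
    ScrubItem* item = stack_.back();
    stack_.pop_back();
    item->on_stack = false;
    return item;
  }

private:
  std::vector<ScrubItem*> stack_;
};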
- 06:50 PM Bug #22254 (Pending Backport): client: give more descriptive error message for remount failures
- 06:49 PM Bug #22263 (Pending Backport): client reconnect gather race
- 09:58 AM Bug #22249: Need to restart MDS to release cephfs space
- Zheng Yan wrote:
> Still no clue in the log. Do you still have this issue after restarting the mds?
Hi Zheng Yan:
...
- 09:01 AM Bug #21539 (Pending Backport): man: missing man page for mount.fuse.ceph
- 09:01 AM Bug #21991 (Pending Backport): mds: tell session ls returns vanila EINVAL when MDS is not active
11/29/2017
- 02:25 PM Bug #22249: Need to restart MDS to release cephfs space
- Still no clue in the log. Do you still have this issue after restarting the mds?
- 01:59 PM Bug #22249: Need to restart MDS to release cephfs space
- Zheng Yan wrote:
> Can't find any clue in the log. Next time it happens, please set debug_mds=10 and capture some logs ...
11/28/2017
- 11:05 PM Bug #22269 (Fix Under Review): ceph-fuse: failure to remount in startup test does not handle clie...
- https://github.com/ceph/ceph/pull/19218
- 08:41 PM Bug #22269 (Resolved): ceph-fuse: failure to remount in startup test does not handle client_die_o...
- https://github.com/ceph/ceph/blob/38f051c22af1def4a06427876ee2e5000046fd03/src/client/Client.cc#L10063-L10066
The ...
- 01:42 PM Bug #22249: Need to restart MDS to release cephfs space
- Can't find any clue in the log. Next time it happens, please set debug_mds=10 and capture some logs before the mds restart.
- 03:08 AM Bug #22249: Need to restart MDS to release cephfs space
- Zheng Yan wrote:
> It seems the log was generated by a pre-luminous mds. Which version of ceph do you use?
OS Ver...
- 09:09 AM Bug #22263 (Fix Under Review): client reconnect gather race
- https://github.com/ceph/ceph/pull/19207
- 09:06 AM Bug #22263 (Resolved): client reconnect gather race
- #0 raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1 0x0000555555ae8bde in reraise_fatal (signum...
11/27/2017
- 10:04 PM Bug #22249 (Need More Info): Need to restart MDS to release cephfs space
- 02:45 PM Bug #22249: Need to restart MDS to release cephfs space
- It seems the log was generated by a pre-luminous mds. Which version of ceph do you use?
- 07:52 AM Bug #22249 (Can't reproduce): Need to restart MDS to release cephfs space
- I used 'ceph df', which showed the usage of the cluster was 238TB (2 copies); however, the result of using 'du -sh' into th...
- 09:21 PM Bug #22003 (Need More Info): [CephFS-Ganesha]MDS migrate will affect Ganesha service?
- Please retry with Ganesha -next.
- 07:37 PM Bug #22256: nfs-ganesha: crashes in free_delegrecall_context
- Here's my ganesha.conf as well. I bisected the change down to 46a5e8535f978b1e12dcb15cbdcbf6d5e757d24e (nfs_rpc_call)...
- 07:34 PM Bug #22256 (Resolved): nfs-ganesha: crashes in free_delegrecall_context
- I've been working on delegation support in cephfs for ganesha. The ceph pieces were recently merged, so I rebased my ...
- 06:49 PM Bug #22254 (Fix Under Review): client: give more descriptive error message for remount failures
- https://github.com/ceph/ceph/pull/19181
- 05:53 PM Bug #22254 (Resolved): client: give more descriptive error message for remount failures
- During remount failures:
https://github.com/ceph/ceph/blob/54e51fd3c39a38e72ed989f862e6e21515f41d3b/src/client/Cli...
- 01:11 PM Bug #21539 (Fix Under Review): man: missing man page for mount.fuse.ceph
- https://github.com/ceph/ceph/pull/19172
- 02:25 AM Backport #22237 (In Progress): luminous: request that is "!mdr->is_replay() && mdr->is_queued_for...
- https://github.com/ceph/ceph/pull/19157
11/25/2017
- 12:32 AM Backport #22241: jewel: Processes stuck waiting for write with ceph-fuse
- https://github.com/ceph/ceph/pull/19141
- 12:21 AM Backport #22240: luminous: Processes stuck waiting for write with ceph-fuse
- https://github.com/ceph/ceph/pull/19137
- 12:16 AM Backport #22242: luminous: mds: limit size of subtree migration
- https://github.com/ceph/ceph/pull/19136
11/24/2017
- 09:57 PM Backport #22242 (Resolved): luminous: mds: limit size of subtree migration
- https://github.com/ceph/ceph/pull/20339
- 09:57 PM Backport #22241 (Resolved): jewel: Processes stuck waiting for write with ceph-fuse
- 09:57 PM Backport #22240 (Resolved): luminous: Processes stuck waiting for write with ceph-fuse
- https://github.com/ceph/ceph/pull/20340
- 09:56 PM Backport #22239 (Rejected): luminous: provide a way to look up snapshotted inodes by vinodeno_t
- 09:56 PM Backport #22237 (Resolved): luminous: request that is "!mdr->is_replay() && mdr->is_queued_for_re...
- https://github.com/ceph/ceph/pull/19157
- 01:31 PM Bug #21539 (In Progress): man: missing man page for mount.fuse.ceph
11/22/2017
- 09:52 PM Backport #22228: luminous: client: trim_caps may remove cap iterator points to
- https://github.com/ceph/ceph/pull/19105
- 09:49 PM Backport #22228 (In Progress): luminous: client: trim_caps may remove cap iterator points to
- 09:48 PM Backport #22228 (Resolved): luminous: client: trim_caps may remove cap iterator points to
- https://github.com/ceph/ceph/pull/19105
- 09:47 PM Bug #22157 (Pending Backport): client: trim_caps may remove cap iterator points to
- 09:47 PM Bug #22163 (Pending Backport): request that is "!mdr->is_replay() && mdr->is_queued_for_replay()"...
- 09:46 PM Bug #22058 (Resolved): mds: admin socket wait for scrub completion is racy
- 09:45 PM Feature #22105 (Pending Backport): provide a way to look up snapshotted inodes by vinodeno_t
- 09:44 PM Bug #22008 (Pending Backport): Processes stuck waiting for write with ceph-fuse
- 09:43 PM Bug #21892 (Pending Backport): limit size of subtree migration
- 05:36 AM Bug #22221 (Fix Under Review): qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), actual:...
- https://github.com/ceph/ceph/pull/19095
- 05:32 AM Bug #22221 (Resolved): qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), actual: -34 vs 0
- ...
- 04:45 AM Bug #22219 (Fix Under Review): mds: mds should ignore export_pin for deleted directory
- https://github.com/ceph/ceph/pull/19092
- 03:53 AM Bug #22219 (Resolved): mds: mds should ignore export_pin for deleted directory
- Otherwise, the subtree dirfrag may prevent the stray inode from getting purged.
11/21/2017
- 04:25 PM Bug #21991: mds: tell session ls returns vanila EINVAL when MDS is not active
- Based on the latest findings, a new PR has been created: https://github.com/ceph/ceph/pull/19078
- 05:04 AM Bug #22157 (Fix Under Review): client: trim_caps may remove cap iterator points to
- https://github.com/ceph/ceph/pull/19060
11/20/2017
- 08:18 PM Backport #22192: luminous: MDSMonitor: monitor gives constant "is now active in filesystem cephfs...
- https://github.com/ceph/ceph/pull/19055
- 11:06 AM Backport #22192 (Resolved): luminous: MDSMonitor: monitor gives constant "is now active in filesy...
- https://github.com/ceph/ceph/pull/19055
- 08:00 PM Documentation #22204 (Resolved): doc: scrub_path is missing in the docs
- Should go here: http://docs.ceph.com/docs/master/cephfs/disaster-recovery/
- 06:13 PM Bug #21765 (Fix Under Review): auth|doc: fs authorize error for existing credentials confusing/un...
- Added to https://github.com/ceph/ceph/pull/17678
- 02:41 PM Bug #22003: [CephFS-Ganesha]MDS migrate will affect Ganesha service?
- I recently added a patch to ganesha:...
- 02:37 PM Bug #21991: mds: tell session ls returns vanila EINVAL when MDS is not active
- See also: https://github.com/ceph/ceph/blob/master/src/mds/MDSDaemon.cc#L795
- 12:42 AM Bug #22163 (Fix Under Review): request that is "!mdr->is_replay() && mdr->is_queued_for_replay()"...
- https://github.com/ceph/ceph/pull/19018
- 12:28 AM Bug #22163 (Resolved): request that is "!mdr->is_replay() && mdr->is_queued_for_replay()" may han...
11/19/2017
- 12:29 PM Bug #22058 (Fix Under Review): mds: admin socket wait for scrub completion is racy
- https://github.com/ceph/ceph/pull/19014
- 03:01 AM Bug #22157 (In Progress): client: trim_caps may remove cap iterator points to
11/18/2017
- 05:02 AM Bug #21991: mds: tell session ls returns vanila EINVAL when MDS is not active
- It doesn't matter whether the mds is active or inactive. The actual issue is that the tell mds command doesn't print the outp...
11/17/2017
- 07:35 PM Bug #22157 (Resolved): client: trim_caps may remove cap iterator points to
- ...
- 04:07 PM Bug #21959 (Pending Backport): MDSMonitor: monitor gives constant "is now active in filesystem ce...
- 09:01 AM Feature #22105: provide a way to look up snapshotted inodes by vinodeno_t
- 09:00 AM Feature #22105: provide a way to look up snapshotted inodes by vinodeno_t
- patch for mds:
https://github.com/ceph/ceph/pull/18942
patch for kernel:
commit "ceph: snapshot nfs re-export"
...
11/15/2017
- 05:52 PM Bug #22091 (Duplicate): statfs get wrong fs size
- 08:13 AM Backport #22077 (In Progress): luminous: src/mds/MDCache.cc: 7421: FAILED assert(CInode::count() ...
- 04:15 AM Backport #21948: luminous: MDSMonitor: mons should reject misconfigured mds_blacklist_interval
- -https://github.com/ceph/ceph/pull/18928-
https://github.com/ceph/ceph/pull/18933
- 03:11 AM Backport #21948: luminous: MDSMonitor: mons should reject misconfigured mds_blacklist_interval
- -https://github.com/ceph/ceph/pull/18928-
- 03:30 AM Backport #21870: luminous: Assertion in EImportStart::replay should be a damaged()
- https://github.com/ceph/ceph/pull/18930
- 03:27 AM Backport #21874: luminous: qa: libcephfs_interface_tests: shutdown race failures
- -https://github.com/ceph/ceph/pull/18929-
11/14/2017
- 05:42 PM Bug #17563: extremely slow ceph_fsync calls
- Ah thanks, I read that as 4.1 for some reason, my bad!
- 12:22 PM Bug #17563: extremely slow ceph_fsync calls
- As Zheng says, the kernel you're using doesn't have the fix for this. You need v4.10 or above (or backport the series...
- 01:37 AM Bug #17563: extremely slow ceph_fsync calls
- Your kernel does not include the fix; try ceph-fuse.
- 01:00 AM Bug #17563: extremely slow ceph_fsync calls
- I've created a test cluster (3 nodes, 9 osds spread across, mounting CephFS on 4th machine), and upgraded it to lumin...
- 02:51 PM Feature #22105: provide a way to look up snapshotted inodes by vinodeno_t
- FYI: here is my work-in-progress kernel patch for exporting snapped inodes.
- 03:17 AM Feature #22105 (In Progress): provide a way to look up snapshotted inodes by vinodeno_t
- 01:16 AM Feature #22105: provide a way to look up snapshotted inodes by vinodeno_t
- Jeff Layton wrote:
> Is it possible though that a file like that could be renamed into a different directory? If s...
- 06:07 AM Backport #22077: luminous: src/mds/MDCache.cc: 7421: FAILED assert(CInode::count() == inode_map.s...
- https://github.com/ceph/ceph/pull/18912
11/13/2017
- 04:24 PM Backport #21955: luminous: qa: add EC data pool to testing
- The master PR has two commits, but only one got cherry-picked in the original backport.
The second commit was back...
- 02:57 PM Feature #22105: provide a way to look up snapshotted inodes by vinodeno_t
- Zheng Yan wrote:
> Is it possible to make nfs client remember snapped inode's parent directory (and hash of corres...
- 02:49 PM Feature #22105: provide a way to look up snapshotted inodes by vinodeno_t
- For directory inodes, snapped inodes are always stored together with the head inode. The hard part is non-directory inodes, ...
- 02:28 PM Bug #21393 (Fix Under Review): MDSMonitor: inconsistent role/who usage in command help
- 08:58 AM Bug #3370: All nfsd hung trying to lock page(s) on export of kclient ceph
- https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=8884d53dd63b1d9315b343564fcbe1ede004a99...
- 03:40 AM Bug #3370: All nfsd hung trying to lock page(s) on export of kclient ceph
- David Zafman wrote:
> commit: 2978257c56935878f8a756c6cb169b569e99bb91
I can't find this commit. Can somebody gi...
11/10/2017
- 09:39 PM Bug #22008: Processes stuck waiting for write with ceph-fuse
- I've applied this patch to the latest luminous branch, rebuilt the MDS and tested it in a test environment with the c...
- 02:47 PM Feature #22105 (Resolved): provide a way to look up snapshotted inodes by vinodeno_t
- An NFS client could conceivably present a filehandle that refers to a snapshot inode after ganesha has been stopped a...
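As a rough illustration of what a vinodeno_t-style lookup amounts to, here is a standalone sketch where inodes are keyed by an (inode number, snapshot id) pair so a filehandle decoded after a restart can still be resolved; the types below are simplified stand-ins, not the actual Ceph structures:

// Standalone sketch: lookup of (possibly snapshotted) inodes by an
// (ino, snapid) pair, in the spirit of vinodeno_t.  Simplified types;
// not the actual Ceph client/MDS code.
#include <cstdint>
#include <map>
#include <memory>

using inodeno_t = std::uint64_t;
using snapid_t  = std::uint64_t;

// Sketch-local sentinel for the live ("head") version of an inode.
constexpr snapid_t HEAD_SNAPID = ~0ull;

struct VInodeNo {
  inodeno_t ino;
  snapid_t snapid;
  bool operator<(const VInodeNo& o) const {
    return ino != o.ino ? ino < o.ino : snapid < o.snapid;
  }
};

struct Inode { /* metadata elided */ };

class InodeTable {
public:
  void add(inodeno_t ino, snapid_t snap, std::shared_ptr<Inode> in) {
    inodes_[VInodeNo{ino, snap}] = std::move(in);
  }

  // A filehandle decoded after a restart yields (ino, snapid); the head
  // inode and each snapshotted version are distinct entries.
  std::shared_ptr<Inode> lookup(inodeno_t ino, snapid_t snap) const {
    auto it = inodes_.find(VInodeNo{ino, snap});
    return it == inodes_.end() ? nullptr : it->second;
  }

private:
  std::map<VInodeNo, std::shared_ptr<Inode>> inodes_;
};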
- 07:22 AM Backport #21947: luminous: mds: preserve order of requests during recovery of multimds cluster
- https://github.com/ceph/ceph/pull/18871
- 07:17 AM Backport #21952: luminous: mds: no assertion on inode being purging in find_ino_peers()
- https://github.com/ceph/ceph/pull/18869
- 04:44 AM Backport #22077: luminous: src/mds/MDCache.cc: 7421: FAILED assert(CInode::count() == inode_map.s...
- ** IN PROGRESS **
- 02:21 AM Feature #22097: mds: change mds perf counters can statistics filesystem operations number and lat...
- PR https://github.com/ceph/ceph/pull/18849 is for this feature
- 12:17 AM Backport #22078: luminous: ceph.in: tell mds does not understand --cluster
- -https://github.com/ceph/ceph/pull/18859-
- 12:15 AM Backport #22089: luminous: Scrub considers dirty backtraces to be damaged, puts in damage table e...
- https://github.com/ceph/ceph/pull/18858
11/09/2017
- 12:05 PM Feature #22097 (Resolved): mds: change mds perf counters can statistics filesystem operations num...
- The perf counters of the mds daemon currently only provide the total filesystem op count and the overall reply latency. Sometimes we n...
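As a rough sketch of the per-operation statistics this feature adds, here is a standalone C++ example that keeps a count and cumulative latency per filesystem op so an average can be derived; OpStats and its members are illustrative assumptions, not the MDS's actual PerfCounters code:

// Standalone sketch: per-operation count and cumulative latency, so an
// average latency per filesystem op can be reported.  Simplified; not
// the actual MDS PerfCounters implementation.
#include <chrono>
#include <cstdint>
#include <map>
#include <string>

class OpStats {
public:
  void record(const std::string& op, std::chrono::nanoseconds latency) {
    Entry& e = stats_[op];
    e.count += 1;
    e.total_latency += latency;
  }

  // Average latency in nanoseconds for one op type (0 if never recorded).
  double avg_latency_ns(const std::string& op) const {
    auto it = stats_.find(op);
    if (it == stats_.end() || it->second.count == 0)
      return 0.0;
    return static_cast<double>(it->second.total_latency.count()) /
           static_cast<double>(it->second.count);
  }

private:
  struct Entry {
    std::uint64_t count = 0;
    std::chrono::nanoseconds total_latency{0};
  };
  std::map<std::string, Entry> stats_;
};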
- 06:07 AM Backport #22074: luminous: don't check gid when none specified in auth caps
- https://github.com/ceph/ceph/pull/18835
- 05:23 AM Backport #22076: luminous: 'ceph tell mds' commands result in 'File exists' errors on client admi...
- https://github.com/ceph/ceph/pull/18831
- 03:05 AM Bug #22091 (Duplicate): statfs get wrong fs size
- For the fs size in statfs, the right value should be the size of the data_pool plus the size of the metadata_pool; however, it return...
11/08/2017
- 10:45 PM Backport #22089 (Resolved): luminous: Scrub considers dirty backtraces to be damaged, puts in dam...
- https://github.com/ceph/ceph/pull/20341
- 10:28 AM Backport #22078 (Resolved): luminous: ceph.in: tell mds does not understand --cluster
- https://github.com/ceph/ceph/pull/18831
- 10:26 AM Backport #22077 (Resolved): luminous: src/mds/MDCache.cc: 7421: FAILED assert(CInode::count() == ...
- https://github.com/ceph/ceph/pull/18912
- 10:25 AM Backport #22076 (Resolved): luminous: 'ceph tell mds' commands result in 'File exists' errors on ...
- https://github.com/ceph/ceph/pull/18831
- 10:25 AM Backport #22074 (Resolved): luminous: don't check gid when none specified in auth caps
- https://github.com/ceph/ceph/pull/18835
- 01:13 AM Bug #22058 (Need More Info): mds: admin socket wait for scrub completion is racy
- No log; wait for it to happen again.