Activity

From 08/07/2017 to 09/05/2017

09/05/2017

09:48 PM Bug #21252: mds: asok command error merged with partial Formatter output
I should note: the error itself is very concerning because the only way for dump_cache to fail is if it's operating o... Patrick Donnelly
09:46 PM Bug #21252 (Fix Under Review): mds: asok command error merged with partial Formatter output
https://github.com/ceph/ceph/pull/17506 Patrick Donnelly
08:20 PM Bug #21252 (Resolved): mds: asok command error merged with partial Formatter output
... Patrick Donnelly
09:35 PM Bug #21222 (Fix Under Review): MDS: standby-replay mds should avoid initiating subtree export
Patrick Donnelly
03:39 PM Bug #16709 (Resolved): No output for "ceph mds rmfailed 0 --yes-i-really-mean-it" command
Nathan Cutler
03:38 PM Bug #18660 (Resolved): fragment space check can cause replayed request fail
Nathan Cutler
03:38 PM Bug #18661 (Resolved): Test failure: test_open_inode
Nathan Cutler
03:38 PM Bug #18877 (Resolved): mds/StrayManager: avoid reusing deleted inode in StrayManager::_purge_stra...
Nathan Cutler
03:38 PM Bug #18941 (Resolved): buffer overflow in test LibCephFS.DirLs
Nathan Cutler
03:38 PM Bug #19118 (Resolved): MDS heartbeat timeout during rejoin, when working with large amount of cap...
Nathan Cutler
03:37 PM Bug #19406 (Resolved): MDS server crashes due to inconsistent metadata.
Nathan Cutler
03:32 PM Bug #19955 (Resolved): Too many stat ops when MDS trying to probe a large file
Nathan Cutler
03:32 PM Backport #20149 (Rejected): kraken: Too many stat ops when MDS trying to probe a large file
Kraken is EOL. Nathan Cutler
03:32 PM Bug #20055 (Resolved): Journaler may execute on_safe contexts prematurely
Nathan Cutler
03:31 PM Backport #20141 (Rejected): kraken: Journaler may execute on_safe contexts prematurely
Kraken is EOL. Nathan Cutler
11:07 AM Bug #20988: client: dual client segfault with racing ceph_shutdown
Hmm. I'm not sure that really helps. Here's the doc comment over ceph_create_with_context:... Jeff Layton
09:46 AM Bug #20988: client: dual client segfault with racing ceph_shutdown
I found a workaround. We can create single CephContext for multiple ceph_mount.... Zheng Yan
09:35 AM Backport #21114 (In Progress): luminous: qa: FS_DEGRADED spurious health warnings in some sub-suites
Nathan Cutler
09:33 AM Backport #21112 (In Progress): luminous: get_quota_root sends lookupname op for every buffered write
Nathan Cutler
09:28 AM Backport #21107 (In Progress): luminous: fs: client/mds has wrong check to clear S_ISGID on chown
Nathan Cutler
09:22 AM Backport #21103 (In Progress): luminous: client: missing space in some client debug log messages
Nathan Cutler
08:45 AM Bug #21193 (Duplicate): ceph.in: `ceph tell mds.* injectargs` does not update standbys
http://tracker.ceph.com/issues/21230 Chang Liu
08:40 AM Bug #21230 (Fix Under Review): the standbys are not updated via "ceph tell mds.* command"
https://github.com/ceph/ceph/pull/17463 Kefu Chai
07:59 AM Bug #21230 (Resolved): the standbys are not updated via "ceph tell mds.* command"
Chang Liu

09/04/2017

07:11 PM Bug #20178 (Resolved): df reports negative disk "used" value when quota exceeded
Nathan Cutler
07:11 PM Backport #20350 (Rejected): kraken: df reports negative disk "used" value when quota exceeded
Kraken is EOL. Nathan Cutler
07:11 PM Backport #20349 (Resolved): jewel: df reports negative disk "used" value when quota exceeded
Nathan Cutler
07:10 PM Bug #20340 (Resolved): cephfs permission denied until second client accesses file
Nathan Cutler
07:10 PM Backport #20404 (Rejected): kraken: cephfs permission denied until second client accesses file
Kraken is EOL. Nathan Cutler
07:10 PM Backport #20403 (Resolved): jewel: cephfs permission denied until second client accesses file
Nathan Cutler
06:43 PM Bug #21221 (Fix Under Review): MDCache::try_subtree_merge() may print N^2 lines of debug message
https://github.com/ceph/ceph/pull/17456 Patrick Donnelly
09:11 AM Bug #21221 (Resolved): MDCache::try_subtree_merge() may print N^2 lines of debug message
MDCache::try_subtree_merge(dirfrag) calls MDCache::try_subtree_merge_at() for each subtree in the dirfrag. try_subtre... Zheng Yan
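To make the N^2 claim in the preceding entry concrete, here is a toy, hypothetical C sketch (not Ceph code): if the per-subtree helper logs the entire subtree list on every call, a merge pass over N subtrees emits N*N debug lines.

#include <stdio.h>

#define NSUBTREES 4

/* stand-in for try_subtree_merge_at(): dumps every subtree on each call */
static void try_merge_at(int merging)
{
    for (int i = 0; i < NSUBTREES; i++)
        printf("debug: subtree %d state (while merging subtree %d)\n", i, merging);
}

int main(void)
{
    /* stand-in for try_subtree_merge(): one helper call per subtree */
    for (int s = 0; s < NSUBTREES; s++)
        try_merge_at(s);
    return 0; /* total output: NSUBTREES * NSUBTREES = 16 lines */
}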
11:14 AM Bug #21222: MDS: standby-replay mds should avoid initiating subtree export
Here is a merge request for this bug fix: https://github.com/ceph/ceph/pull/17452; could you review it? @Patrick Jianyu Li
11:11 AM Bug #21222: MDS: standby-replay mds should avoid initiating subtree export
Although with the latest code in the master branch, this issue can be avoided by the destination check in export_dir:
...
Jianyu Li
10:24 AM Bug #21222 (Resolved): MDS: standby-replay mds should avoid initiating subtree export
For the jewel 10.2.7 version, using two active MDS and two corresponding standby-replay MDS.
When the standby-replay replays the m...
Jianyu Li

08/31/2017

01:06 PM Feature #18490: client: implement delegation support in userland cephfs
Here's a capture showing the delegation grant and recall (what can I say, I'm a proud parent). The delegation was rev... Jeff Layton
12:47 PM Feature #18490: client: implement delegation support in userland cephfs
I was able to get ganesha to hand out a v4.0 delegation today and recall it properly. So, PoC is successful!
There s...
Jeff Layton
11:12 AM Backport #21113 (In Progress): jewel: get_quota_root sends lookupname op for every buffered write
Nathan Cutler

08/30/2017

11:17 PM Bug #21193 (Duplicate): ceph.in: `ceph tell mds.* injectargs` does not update standbys
... Patrick Donnelly
10:54 PM Bug #21191 (Fix Under Review): ceph: tell mds.* results in warning
https://github.com/ceph/ceph/pull/17384 Patrick Donnelly
10:19 PM Bug #21191: ceph: tell mds.* results in warning
John believes 9753a0065db8bfb03d86a7185bc636c7aa4c7af7 may be the cause. Patrick Donnelly
10:15 PM Bug #21191 (Resolved): ceph: tell mds.* results in warning
... Patrick Donnelly

08/29/2017

03:31 PM Documentation #21172 (Duplicate): doc: Export over NFS
Create a document similar to RGW's NFS support, https://github.com/ceph/ceph/blob/master/doc/radosgw/nfs.rst
to help...
Ramana Raja
10:08 AM Bug #21168 (Fix Under Review): cap import/export message ordering issue
https://github.com/ceph/ceph/pull/17340 Zheng Yan
09:59 AM Bug #21168 (Resolved): cap import/export message ordering issue
There is a cap import/export message ordering issue.
Symptoms are:
the kernel prints the error "handle_cap_import: mismat...
Zheng Yan

08/28/2017

05:54 PM Feature #18490: client: implement delegation support in userland cephfs
I made some progress today. I got ganesha over ceph to hand out a read delegation. Once I tried to force a recall (by... Jeff Layton
01:51 PM Feature #21156 (Resolved): mds: speed up recovery with many open inodes
Opening inodes during the rejoin stage is slow when clients have a large number of caps.
Currently the MDS journals open inode...
Zheng Yan
01:42 PM Bug #21083: client: clean up header to isolate real public methods and entry points for client_lock
Should be reorganized with an eye toward finer grained locks, along with a client_lock audit. -Jeff Patrick Donnelly
01:36 PM Bug #21058: mds: remove UNIX file permissions binary dependency
May not be necessary as bits are defined by POSIX. Should still look for other dependencies which may vary. Patrick Donnelly
12:54 PM Bug #21153 (Fix Under Review): Incorrect grammar in FS message "1 filesystem is have a failed mds...
https://github.com/ceph/ceph/pull/17301 John Spray
12:52 PM Bug #21153 (Resolved): Incorrect grammar in FS message "1 filesystem is have a failed mds daemon"
John Spray
06:53 AM Bug #21149: SubsystemMap.h: 62: FAILED assert(sub < m_subsys.size())
I think the exception was triggered by writing the debug message before reading the ceph config.
*PR* https://github.com...
shangzhong zhu
06:49 AM Bug #21149 (Rejected): SubsystemMap.h: 62: FAILED assert(sub < m_subsys.size())
When I run the Hadoop write test, the following exception occurs (not 100% of the time):
/clove/vm/renhw/ceph/rpmbuild/BUILD/ce...
shangzhong zhu
03:34 AM Bug #21070 (Fix Under Review): MDS: MDS is laggy or crashed When deleting a large number of files
https://github.com/ceph/ceph/pull/17291 Zheng Yan
03:17 AM Bug #21070: MDS: MDS is laggy or crashed When deleting a large number of files
For a seeky readdir on a large directory (a runnable sketch follows this entry):
int fd = open("/mnt/ceph", O_RDONLY | O_DIRECTORY)
lseek(fd, xxxxxx, SEEK_SE...
Zheng Yan
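For reference, a minimal runnable sketch of the seeky-readdir pattern Zheng describes above; his snippet seeks on the directory fd with open()/lseek(), while this version uses the portable telldir()/seekdir() equivalent ("/mnt/ceph" is an assumed CephFS mount point).

#include <dirent.h>
#include <stdio.h>

int main(void)
{
    DIR *dir = opendir("/mnt/ceph");
    if (!dir)
        return 1;

    readdir(dir);            /* consume one entry ...               */
    long off = telldir(dir); /* ... and record the directory offset */
    rewinddir(dir);

    seekdir(dir, off);       /* jump to a non-zero offset: the "seeky" readdir */
    struct dirent *de = readdir(dir);
    if (de)
        printf("entry after seek: %s\n", de->d_name);

    closedir(dir);
    return 0;
}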

08/26/2017

02:29 AM Bug #21070: MDS: MDS is laggy or crashed When deleting a large number of files
Your patch solves this problem; I tested it twice today. :-)
My modification was just to verify that the dirfrag offset cau...
huanwen ren

08/25/2017

09:29 PM Bug #20535 (Resolved): mds segmentation fault ceph_lock_state_t::get_overlapping_locks
Webert, the backport is merged so I'm marking this as resolved. If you experience this particular issue again, please... Patrick Donnelly
09:27 PM Backport #20564 (Resolved): jewel: mds segmentation fault ceph_lock_state_t::get_overlapping_locks
Patrick Donnelly
09:22 PM Bug #20596 (Fix Under Review): MDSMonitor: obsolete `mds dump` and other deprecated mds commands
https://github.com/ceph/ceph/pull/17266 Patrick Donnelly
01:33 PM Bug #21070: MDS: MDS is laggy or crashed When deleting a large number of files
Does my patch alone not work? Your change will make seeky readdir on directories inefficient. Zheng Yan
08:28 AM Bug #21070: MDS: MDS is laggy or crashed When deleting a large number of files
In addition, I set "offset_hash" to 0 when offset_str is empty, which solves this problem.
Modify the code as fol...
huanwen ren
08:20 AM Bug #21070: MDS: MDS is laggy or crashed When deleting a large number of files
OK, I will test it.
huanwen ren
02:02 AM Bug #21091: StrayManager::truncate is broken
yes Zheng Yan

08/24/2017

09:52 PM Bug #20990 (Resolved): mds,mgr: add 'is_valid=false' when failed to parse caps
Patrick Donnelly
09:52 PM Backport #21047 (Resolved): luminous: mds,mgr: add 'is_valid=false' when failed to parse caps
Patrick Donnelly
04:58 PM Backport #21047: luminous: mds,mgr: add 'is_valid=false' when failed to parse caps
https://github.com/ceph/ceph/pull/17240 Patrick Donnelly
09:52 PM Bug #21064 (Resolved): FSCommands: missing wait for osdmap writeable + propose
Patrick Donnelly
09:52 PM Backport #21101 (Resolved): luminous: FSCommands: missing wait for osdmap writeable + propose
Patrick Donnelly
04:51 PM Backport #21101: luminous: FSCommands: missing wait for osdmap writeable + propose
https://github.com/ceph/ceph/pull/17238 Patrick Donnelly
04:48 PM Backport #21101 (Resolved): luminous: FSCommands: missing wait for osdmap writeable + propose
Patrick Donnelly
09:51 PM Bug #21065 (Resolved): client: UserPerm delete with supp. groups allocated by malloc generates va...
Patrick Donnelly
03:49 AM Bug #21065 (Pending Backport): client: UserPerm delete with supp. groups allocated by malloc gene...
Patrick Donnelly
09:51 PM Backport #21100 (Resolved): luminous: client: UserPerm delete with supp. groups allocated by mall...
Patrick Donnelly
04:51 PM Backport #21100: luminous: client: UserPerm delete with supp. groups allocated by malloc generate...
https://github.com/ceph/ceph/pull/17237 Patrick Donnelly
04:46 PM Backport #21100 (Resolved): luminous: client: UserPerm delete with supp. groups allocated by mall...
Patrick Donnelly
09:51 PM Bug #21078 (Resolved): df hangs in ceph-fuse
Patrick Donnelly
03:49 AM Bug #21078 (Pending Backport): df hangs in ceph-fuse
Patrick Donnelly
09:51 PM Backport #21099 (Resolved): luminous: client: df hangs in ceph-fuse
Patrick Donnelly
04:45 PM Backport #21099: luminous: client: df hangs in ceph-fuse
https://github.com/ceph/ceph/pull/17236 Patrick Donnelly
04:40 PM Backport #21099 (Resolved): luminous: client: df hangs in ceph-fuse
Patrick Donnelly
09:51 PM Bug #21082 (Resolved): client: the client_lock is not taken for Client::getcwd
Patrick Donnelly
03:50 AM Bug #21082 (Pending Backport): client: the client_lock is not taken for Client::getcwd
Patrick Donnelly
09:50 PM Backport #21098 (Resolved): luminous: client: the client_lock is not taken for Client::getcwd
Patrick Donnelly
04:43 PM Backport #21098: luminous: client: the client_lock is not taken for Client::getcwd
https://github.com/ceph/ceph/pull/17235 Patrick Donnelly
04:37 PM Backport #21098 (Resolved): luminous: client: the client_lock is not taken for Client::getcwd
Patrick Donnelly
07:05 PM Bug #21091: StrayManager::truncate is broken
This only affects deletions of snapshotted files right? Patrick Donnelly
09:10 AM Bug #21091 (Fix Under Review): StrayManager::truncate is broken
https://github.com/ceph/ceph/pull/17219 Zheng Yan
08:56 AM Bug #21091 (Resolved): StrayManager::truncate is broken
Zheng Yan
05:23 PM Backport #21114 (Resolved): luminous: qa: FS_DEGRADED spurious health warnings in some sub-suites
https://github.com/ceph/ceph/pull/17474 Nathan Cutler
05:23 PM Backport #21113 (Resolved): jewel: get_quota_root sends lookupname op for every buffered write
https://github.com/ceph/ceph/pull/17396 Nathan Cutler
05:23 PM Backport #21112 (Resolved): luminous: get_quota_root sends lookupname op for every buffered write
https://github.com/ceph/ceph/pull/17473 Nathan Cutler
05:23 PM Backport #21107 (Resolved): luminous: fs: client/mds has wrong check to clear S_ISGID on chown
https://github.com/ceph/ceph/pull/17471 Nathan Cutler
05:22 PM Backport #21103 (Resolved): luminous: client: missing space in some client debug log messages
https://github.com/ceph/ceph/pull/17469 Nathan Cutler
10:49 AM Bug #21070: MDS: MDS is laggy or crashed When deleting a large number of files
please try the attached patch Zheng Yan
08:45 AM Bug #21070: MDS: MDS is laggy or crashed When deleting a large number of files
The address of *dn (mds.0.server dn1-10x600000000000099) is overflowed,
but I have not found the reason.
huanwen ren
08:42 AM Bug #21070: MDS: MDS is laggy or crashed When deleting a large number of files
I added some debug print statements, and the issue can be reproduced:
1. The right to print...
huanwen ren

08/23/2017

08:52 PM Bug #20535: mds segmentation fault ceph_lock_state_t::get_overlapping_locks
Webert Lima wrote:
> My active MDS had committed suicide due to "dne in mds map" (this is happening a lot but I don'...
Patrick Donnelly
08:18 PM Bug #21065 (Fix Under Review): client: UserPerm delete with supp. groups allocated by malloc gene...
https://github.com/ceph/ceph/pull/17204 Patrick Donnelly
07:43 PM Bug #21082 (Fix Under Review): client: the client_lock is not taken for Client::getcwd
https://github.com/ceph/ceph/pull/17205 Patrick Donnelly
03:57 PM Bug #21082 (Resolved): client: the client_lock is not taken for Client::getcwd
https://github.com/ceph/ceph/blob/db16d50cc56f5221d7bcdb28a29d5e0a456cba94/src/client/Client.cc#L9387-L9425
We als...
Patrick Donnelly
06:13 PM Feature #16016 (Resolved): Populate DamageTable from forward scrub
Nathan Cutler
06:12 PM Backport #20294 (Resolved): jewel: Populate DamageTable from forward scrub
Nathan Cutler
06:12 PM Feature #18509 (Resolved): MDS: damage reporting by ino number is useless
Nathan Cutler
06:12 PM Backport #19679 (Resolved): jewel: MDS: damage reporting by ino number is useless
Nathan Cutler
06:10 PM Bug #19291 (Resolved): mds: log rotation doesn't work if mds has respawned
Nathan Cutler
06:10 PM Backport #19466 (Resolved): jewel: mds: log rotation doesn't work if mds has respawned
Nathan Cutler
05:57 PM Cleanup #21069 (Pending Backport): client: missing space in some client debug log messages
Patrick Donnelly
02:51 AM Cleanup #21069: client: missing space in some client debug log messages
*PR*: https://github.com/ceph/ceph/pull/17175 shangzhong zhu
02:45 AM Cleanup #21069 (Resolved): client: missing space in some client debug log messages
2017-08-11 19:05:17.344361 7fb87b1eb700 20 client.15557 may_delete0x10000000522.head(faked_ino=0 ref=3 ll_ref=0 cap_r... shangzhong zhu
04:52 PM Bug #21064 (Pending Backport): FSCommands: missing wait for osdmap writeable + propose
Patrick Donnelly
04:06 PM Bug #21083 (New): client: clean up header to isolate real public methods and entry points for cli...
With the recent revelation that the client_lock was not locked for Client::getcwd [1] and other history of missing lo... Patrick Donnelly
02:10 PM Bug #21081 (Duplicate): mon: get writeable osdmap for added data pool
Patrick Donnelly
02:08 PM Bug #21081 (Duplicate): mon: get writeable osdmap for added data pool
https://github.com/ceph/ceph/pull/17163 Abhishek Lekshmanan
02:08 PM Bug #20945 (Pending Backport): get_quota_root sends lookupname op for every buffered write
Patrick Donnelly
02:06 PM Bug #21004 (Pending Backport): fs: client/mds has wrong check to clear S_ISGID on chown
Patrick Donnelly
01:55 PM Bug #21078 (Fix Under Review): df hangs in ceph-fuse
https://github.com/ceph/ceph/pull/17199 John Spray
01:49 PM Bug #21078: df hangs in ceph-fuse
yep. mon says:... John Spray
01:48 PM Bug #21078: df hangs in ceph-fuse
Loops like this:... John Spray
01:42 PM Bug #21078 (Resolved): df hangs in ceph-fuse
See "[ceph-users] ceph-fuse hanging on df with ceph luminous >= 12.1.3".
The filesystem works normally, except for...
John Spray
01:51 PM Bug #20892 (Pending Backport): qa: FS_DEGRADED spurious health warnings in some sub-suites
Patrick Donnelly
11:12 AM Backport #21067: jewel: MDS integer overflow fix
OK, backport staged (see description) Nathan Cutler
11:11 AM Backport #21067 (In Progress): jewel: MDS integer overflow fix
Nathan Cutler
11:09 AM Backport #21067: jewel: MDS integer overflow fix
h3. description
Please backport commit 0d74334332fb70212fc71f1130e886952920038d (mds: use client_t instead of int ...
Nathan Cutler
06:47 AM Bug #19755 (Resolved): MDS became unresponsive when truncating a very large file
Nathan Cutler
06:42 AM Backport #20025 (Resolved): jewel: MDS became unresponsive when truncating a very large file
Nathan Cutler
04:21 AM Bug #21071: qa: test_misc creates metadata pool with dummy object resulting in WRN: POOL_APP_NOT_...
Doug, please take this one. Patrick Donnelly
04:20 AM Bug #21071 (Resolved): qa: test_misc creates metadata pool with dummy object resulting in WRN: PO...
... Patrick Donnelly
03:32 AM Bug #21070: MDS: MDS is laggy or crashed When deleting a large number of files
I enabled the log to investigate the problem; the log information is as follows:... huanwen ren
03:06 AM Bug #21070 (Resolved): MDS: MDS is laggy or crashed When deleting a large number of files
We plan to use mdtest to create on the order of 1,000,000 ("100w") files in a ceph-fuse mounted directory; the command is as follo... huanwen ren

08/22/2017

10:02 PM Backport #21067 (Resolved): jewel: MDS integer overflow fix
https://github.com/ceph/ceph/pull/17188 Thorvald Natvig
09:44 PM Bug #21066 (New): qa: racy test_export_pin check for export_targets
... Patrick Donnelly
08:59 PM Bug #21065: client: UserPerm delete with supp. groups allocated by malloc generates valgrind error
We'll need to convert the UserPerm constructor and such to use malloc/free. ceph_userperm_new can be called from C co... Jeff Layton
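For context, a minimal sketch of the C call path Jeff refers to, assuming the libcephfs prototypes struct UserPerm *ceph_userperm_new(uid_t, gid_t, int, gid_t *) and void ceph_userperm_destroy(struct UserPerm *): the C caller allocates the supplementary-group list with malloc(), so releasing it with C++ delete[] on the library side produces the valgrind mismatch this bug describes.

#include <cephfs/libcephfs.h>
#include <stdlib.h>

int main(void)
{
    /* gid list allocated with malloc(), as a C caller would do ... */
    gid_t *gids = malloc(2 * sizeof(gid_t));
    if (!gids)
        return 1;
    gids[0] = 100;
    gids[1] = 101;

    struct UserPerm *perm = ceph_userperm_new(1000, 100, 2, gids);
    /* ... use perm with ceph_ll_* calls ... */
    ceph_userperm_destroy(perm); /* ... so teardown must free(), not delete[] */
    return 0;
}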
08:30 PM Bug #21065 (Resolved): client: UserPerm delete with supp. groups allocated by malloc generates va...
... Patrick Donnelly
07:13 PM Bug #21064 (Fix Under Review): FSCommands: missing wait for osdmap writeable + propose
https://github.com/ceph/ceph/pull/17163 Patrick Donnelly
07:09 PM Bug #21064 (Resolved): FSCommands: missing wait for osdmap writeable + propose
... Patrick Donnelly
04:04 PM Backport #21047: luminous: mds,mgr: add 'is_valid=false' when failed to parse caps
Nathan Cutler wrote:
> Patrick, do you mean that the following three PRs should be backported in a single PR targeti...
Patrick Donnelly
07:09 AM Backport #21047: luminous: mds,mgr: add 'is_valid=false' when failed to parse caps
Patrick, do you mean that the following three PRs should be backported in a single PR targeting luminous?
* https:...
Nathan Cutler
12:36 PM Bug #16881: RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
Note: the failure is transient (occurred in 2 out of 5 runs so far). Nathan Cutler
11:37 AM Bug #16881: RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
Hello CephFS developers, I am reproducing this bug in the latest jewel integration branch. Here are the prime suspect... Nathan Cutler

08/21/2017

09:41 PM Backport #21047: luminous: mds,mgr: add 'is_valid=false' when failed to parse caps
Nathan, the fix for http://tracker.ceph.com/issues/21027 should also make it into Luminous with this backport. I'm go... Patrick Donnelly
04:13 PM Backport #21047 (Resolved): luminous: mds,mgr: add 'is_valid=false' when failed to parse caps
Nathan Cutler
08:53 PM Bug #21058 (New): mds: remove UNIX file permissions binary dependency
The MDS has various file permission/type bits pulled from UNIX headers. These could be different depending on what sy... Patrick Donnelly
07:46 PM Bug #20988: client: dual client segfault with racing ceph_shutdown
Moving this to main "Ceph" project as it looks more like a problem in the AdminSocket code. The thing seems to mainly... Jeff Layton
06:13 PM Bug #20988: client: dual client segfault with racing ceph_shutdown
Here's a testcase that seems to trigger it fairly reliably. You may have to run it a few times to get it to crash but... Jeff Layton
05:03 PM Bug #20988: client: dual client segfault with racing ceph_shutdown
Correct. I'll see if I can roll up a testcase for this when I get a few mins. Jeff Layton
04:50 PM Bug #20988: client: dual client segfault with racing ceph_shutdown
Jeff, just confirming this bug is with two client instances and not one instance with two threads? Patrick Donnelly
02:59 PM Bug #21007 (Resolved): The ceph fs set mds_max command must be updated
Patch merged: https://github.com/ceph/ceph/pull/17044 Bara Ancincova
01:49 PM Feature #18490: client: implement delegation support in userland cephfs
The latest set has timeout support that basically does a client->unmount() on the thing. With the patches for this bu... Jeff Layton
11:01 AM Feature #18490: client: implement delegation support in userland cephfs
For the clean-ish shutdown case, it would be neat to have a common code path with the -EBLACKLISTED handling (see Cli... John Spray
01:43 PM Bug #21025: racy is_mounted() checks in libcephfs
PR is here:
https://github.com/ceph/ceph/pull/17095
Jeff Layton
01:40 PM Bug #21004 (Fix Under Review): fs: client/mds has wrong check to clear S_ISGID on chown
Patrick Donnelly
11:41 AM Bug #20535: mds segmentation fault ceph_lock_state_t::get_overlapping_locks
Reporting in, I've had the first incident after the version upgrade.
My active MDS had committed suicide due to "d...
Webert Lima
09:03 AM Bug #20892: qa: FS_DEGRADED spurious health warnings in some sub-suites
The kcephfs suite has a similar issue:
http://pulpito.ceph.com/teuthology-2017-08-19_05:20:01-kcephfs-luminous-testing-bas...
Zheng Yan

08/17/2017

06:41 PM Feature #19109 (Resolved): Use data pool's 'df' for statfs instead of global stats, if there is o...
Oh, oops. I forgot I merged this into luminous. Thanks Doug. Patrick Donnelly
06:22 PM Feature #19109: Use data pool's 'df' for statfs instead of global stats, if there is only one dat...
There's no need to wait for the kernel client since the message encoding is versioned. This has already been merged i... Douglas Fuller
06:14 PM Feature #19109 (Pending Backport): Use data pool's 'df' for statfs instead of global stats, if th...
Waiting for
https://github.com/ceph/ceph-client/commit/b7f94d6a95dfe2399476de1e0d0a7c15c01611d0
to be merged up...
Patrick Donnelly
03:15 PM Bug #21025 (Resolved): racy is_mounted() checks in libcephfs
libcephfs.cc has a bunch of is_mounted checks like this in it:... Jeff Layton
03:02 PM Feature #18490: client: implement delegation support in userland cephfs
Patrick Donnelly wrote:
> here "client" means Ganesha. What about how does Ganesha handle its client not releasing...
Jeff Layton

08/16/2017

11:08 PM Feature #18490: client: implement delegation support in userland cephfs
Jeff Layton wrote:
> The main work to be done at this point is handling clients that don't return the delegation in ...
Patrick Donnelly
12:57 PM Feature #18490: client: implement delegation support in userland cephfs
I've been working on this for the last week or so, so this is a good place to pause and provide an update:
I have ...
Jeff Layton
09:47 PM Bug #20990 (Pending Backport): mds,mgr: add 'is_valid=false' when failed to parse caps
Patrick Donnelly
06:48 PM Bug #21014 (Resolved): fs: reduce number of helper debug messages at level 5 for client
I think we want just the initial log message for each ll_ operation and not the helpers (e.g. _rmdir).
See: http://...
Patrick Donnelly
06:23 PM Bug #21004: fs: client/mds has wrong check to clear S_ISGID on chown
https://github.com/ceph/ceph/pull/17053 Patrick Donnelly
02:58 AM Bug #21004: fs: client/mds has wrong check to clear S_ISGID on chown
The logic is from kernel_src/fs/attr.c... Zheng Yan
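For readers without the kernel source at hand, a self-contained paraphrase of the fs/attr.c behavior Zheng cites: on chown, setuid is always cleared, but setgid is cleared only when the group-execute bit is also set, since S_ISGID without S_IXGRP marks mandatory locking and must survive.

#include <stdio.h>
#include <sys/stat.h>

/* paraphrase of the kernel's kill-suid/sgid-on-chown check */
static mode_t mode_after_chown(mode_t mode)
{
    mode &= ~(mode_t)S_ISUID; /* suid: always cleared */
    if ((mode & (S_ISGID | S_IXGRP)) == (S_ISGID | S_IXGRP))
        mode &= ~(mode_t)S_ISGID; /* sgid: cleared only with group-exec */
    return mode;
}

int main(void)
{
    printf("%o\n", (unsigned)mode_after_chown(S_ISGID | S_IXGRP | 0644)); /* 654: sgid dropped */
    printf("%o\n", (unsigned)mode_after_chown(S_ISGID | 0644));           /* 2644: sgid kept */
    return 0;
}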
09:54 AM Bug #21007 (Fix Under Review): The ceph fs set mds_max command must be updated
Created this PR: https://github.com/ceph/ceph/pull/17044 Bara Ancincova
08:14 AM Bug #21007 (Resolved): The ceph fs set mds_max command must be updated
Copied from Bugzilla:
Ramakrishnan Periyasamy 2017-08-16 09:14:21 CEST
Description of problem:
Upstream docume...
Bara Ancincova
01:20 AM Bug #21002 (Closed): set option "mon_pool_quota_warn_threshold && mon_pool_quota_crit_threshold" ...
xie xingguo

08/15/2017

09:58 PM Bug #21004: fs: client/mds has wrong check to clear S_ISGID on chown
Well actually the test above fails because the chown was a no-op due to an earlier chown failure. In any case I've fo... Patrick Donnelly
09:41 PM Bug #21004 (In Progress): fs: client/mds has wrong check to clear S_ISGID on chown
Patrick Donnelly
09:41 PM Bug #21004 (Resolved): fs: client/mds has wrong check to clear S_ISGID on chown
Reported in: https://bugzilla.redhat.com/show_bug.cgi?id=1480182
This causes the failure in test 88 from https://b...
Patrick Donnelly
09:12 AM Bug #21002: set option "mon_pool_quota_warn_threshold && mon_pool_quota_crit_threshold" fail
Please close it; there is no error.
huanwen ren
08:59 AM Bug #21002: set option "mon_pool_quota_warn_threshold && mon_pool_quota_crit_threshold" fail
... huanwen ren
08:58 AM Bug #21002 (Closed): set option "mon_pool_quota_warn_threshold && mon_pool_quota_crit_threshold" ...
err:... huanwen ren

08/13/2017

03:28 AM Bug #20990 (Resolved): mds,mgr: add 'is_valid=false' when failed to parse caps
mds,mgr: add 'is_valid=false' when failed to parse caps.
Backport needed for the PRs:
https://github.com/ceph/cep...
Jos Collin

08/12/2017

01:44 PM Bug #20988 (Resolved): client: dual client segfault with racing ceph_shutdown
I have a testcase that I'm working on that has two threads, each with their own ceph_mount_info. If those threads end... Jeff Layton
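The actual testcase is not reproduced here, but a minimal sketch of the shape Jeff describes might look like this: two threads, each with its own ceph_mount_info, racing through mount and ceph_shutdown (config discovery is left to the default search path).

#include <cephfs/libcephfs.h>
#include <pthread.h>

static void *mount_and_shutdown(void *arg)
{
    struct ceph_mount_info *cmount;
    (void)arg;

    if (ceph_create(&cmount, NULL) != 0) /* independent client instance */
        return NULL;
    ceph_conf_read_file(cmount, NULL);   /* default ceph.conf search path */
    if (ceph_mount(cmount, "/") == 0)
        ceph_unmount(cmount);
    ceph_shutdown(cmount);               /* the racing teardown */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, mount_and_shutdown, NULL);
    pthread_create(&t2, NULL, mount_and_shutdown, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}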

08/11/2017

03:18 AM Bug #20938: CephFS: concurrent access to file from multiple nodes blocks for seconds
... Zheng Yan

08/10/2017

02:27 PM Bug #20938: CephFS: concurrent access to file from multiple nodes blocks for seconds
Ahh yeah, I remember seeing that in there a while back. I guess the danger is that we can end up instantiating an ino... Jeff Layton
02:29 AM Bug #20938: CephFS: concurrent access to file from multiple nodes blocks for seconds
What worries me is the comment in fuse_lowlevel.h... Zheng Yan
12:27 PM Backport #20972: jewel ceph-fuse segfaults at mount time, assert in ceph::log::Log::stop
Thanks, Dan! Jewel backport staged: https://github.com/ceph/ceph/pull/16963 Nathan Cutler
12:26 PM Backport #20972 (In Progress): jewel ceph-fuse segfaults at mount time, assert in ceph::log::Log:...
Nathan Cutler
12:25 PM Backport #20972: jewel ceph-fuse segfaults at mount time, assert in ceph::log::Log::stop
h3. description
10.2.9 introduces a regression where ceph-fuse will segfault at mount time because of an attempt ...
Nathan Cutler
11:52 AM Backport #20972: jewel ceph-fuse segfaults at mount time, assert in ceph::log::Log::stop
Confirmed that 10.2.9 plus cbf18b1d80d214e4203e88637acf4b0a0a201ee7 does not segfault. Dan van der Ster
09:04 AM Backport #20972 (Resolved): jewel ceph-fuse segfaults at mount time, assert in ceph::log::Log::stop
https://github.com/ceph/ceph/pull/16963 Dan van der Ster
12:24 PM Bug #18157 (Pending Backport): ceph-fuse segfaults on daemonize
Nathan Cutler
09:42 AM Bug #20945: get_quota_root sends lookupname op for every buffered write
Could you also please add the luminous backport tag for this? Dan van der Ster
09:23 AM Bug #20945: get_quota_root sends lookupname op for every buffered write
https://github.com/ceph/ceph/pull/16959 Dan van der Ster
02:08 AM Bug #20945: get_quota_root sends lookupname op for every buffered write
lookupname is for the following case:
directories /a and /b have non-default quotas;
client A is writing /a/file;
client ...
Zheng Yan

08/09/2017

04:39 PM Bug #20945: get_quota_root sends lookupname op for every buffered write
This seems to work... Dan van der Ster
10:45 AM Bug #20945: get_quota_root sends lookupname op for every buffered write
Thanks. You're right. Here's the trivial reproducer:... Dan van der Ster
08:46 AM Bug #20945: get_quota_root sends lookupname op for every buffered write
Enabling quota and writing to an unlinked file can reproduce this easily (a sketch follows this entry). get_quota_root() uses the dentry in dn_set if it has... Zheng Yan
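A minimal sketch of the trigger as described (the mount path and quota directory are assumptions): create a file under a quota-enabled directory, unlink it, then keep writing, so the inode has no linked dentry and the quota check on each buffered write must resort to lookupname.

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[4096] = {0};
    int fd = open("/mnt/ceph/quota_dir/file", O_CREAT | O_WRONLY, 0644);

    if (fd < 0)
        return 1;
    unlink("/mnt/ceph/quota_dir/file"); /* drop the file's dentry */
    for (int i = 0; i < 1000; i++)      /* each buffered write may trigger a lookupname op */
        if (write(fd, buf, sizeof(buf)) < 0)
            break;
    close(fd);
    return 0;
}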
01:57 PM Bug #20938: CephFS: concurrent access to file from multiple nodes blocks for seconds
FUSE is the only caller of ->ll_lookup so a simpler fix might be to just change the mask field to 0 in the _lookup ca... Jeff Layton
10:39 AM Bug #20938: CephFS: concurrent access to file from multiple nodes blocks for seconds
this slowness is due to limitation of fuse API. The attached patch is a workaround. (not 100% sure it doesn't break a... Zheng Yan

08/08/2017

05:35 PM Feature #19109: Use data pool's 'df' for statfs instead of global stats, if there is only one dat...
Partially resolved by: https://github.com/ceph/ceph/commit/eabe6626141df3f1b253c880aa6cb852c8b7ac1d Patrick Donnelly
02:25 PM Bug #20938: CephFS: concurrent access to file from multiple nodes blocks for seconds
I'm only running the fuse client. I see the problem both on Jewel (10.2.9 servers + fuse client) and on Luminous RC ... Andras Pataki
02:22 PM Bug #20938: CephFS: concurrent access to file from multiple nodes blocks for seconds
I tried on the latest Luminous RC + 4.12 kernel client. I got about 7,000 opens/second in the two-node read-write case.
Did ...
Zheng Yan
02:00 PM Bug #20945: get_quota_root sends lookupname op for every buffered write
Our user confirmed that without client-quota their job finishes quickly:... Dan van der Ster
01:46 PM Bug #20945 (Resolved): get_quota_root sends lookupname op for every buffered write
We have a CAD use-case (hspice) which sees very slow buffered writes, apparently due to the quota code. (We haven't y... Dan van der Ster

08/07/2017

05:15 PM Feature #20885: add syntax for generating OSD/MDS auth caps for cephfs
PR to master was https://github.com/ceph/ceph/pull/16761 Ken Dreyer
03:33 PM Bug #20938 (New): CephFS: concurrent access to file from multiple nodes blocks for seconds
When accessing the same file opened for read/write on multiple nodes via ceph-fuse, performance drops by about 3 orde... Andras Pataki
 
