Activity
From 08/06/2017 to 09/04/2017
09/04/2017
- 07:11 PM Bug #20178 (Resolved): df reports negative disk "used" value when quota exceeded
- 07:11 PM Backport #20350 (Rejected): kraken: df reports negative disk "used" value when quota exceeded
- Kraken is EOL.
- 07:11 PM Backport #20349 (Resolved): jewel: df reports negative disk "used" value when quota exceeded
- 07:10 PM Bug #20340 (Resolved): cephfs permission denied until second client accesses file
- 07:10 PM Backport #20404 (Rejected): kraken: cephfs permission denied until second client accesses file
- Kraken is EOL.
- 07:10 PM Backport #20403 (Resolved): jewel: cephfs permission denied until second client accesses file
- 06:43 PM Bug #21221 (Fix Under Review): MDCache::try_subtree_merge() may print N^2 lines of debug message
- https://github.com/ceph/ceph/pull/17456
- 09:11 AM Bug #21221 (Resolved): MDCache::try_subtree_merge() may print N^2 lines of debug message
- MDCache::try_subtree_merge(dirfrag) calls MDCache::try_subtree_merge_at() for each subtree in the dirfrag. try_subtre...
- 11:14 AM Bug #21222: MDS: standby-replay mds should avoid initiating subtree export
- Here is a pull request for this bug fix: https://github.com/ceph/ceph/pull/17452. Could you review it? @Patrick
- 11:11 AM Bug #21222: MDS: standby-replay mds should avoid initiating subtree export
- Although with the latest code in the master branch, this issue can be avoided by the destination check in export_dir:
...
- 10:24 AM Bug #21222 (Resolved): MDS: standby-replay mds should avoid initiating subtree export
- For jewel 10.2.7, use two active MDSs and two corresponding standby-replay MDSs.
When standby-replay replays the m...
08/31/2017
- 01:06 PM Feature #18490: client: implement delegation support in userland cephfs
- Here's a capture showing the delegation grant and recall (what can I say, I'm a proud parent). The delegation was rev...
- 12:47 PM Feature #18490: client: implement delegation support in userland cephfs
- I was able to get ganesha to hand out a v4.0 delegation today and recall it properly. So, PoC is successful!
There s...
- 11:12 AM Backport #21113 (In Progress): jewel: get_quota_root sends lookupname op for every buffered write
08/30/2017
- 11:17 PM Bug #21193 (Duplicate): ceph.in: `ceph tell mds.* injectargs` does not update standbys
- ...
- 10:54 PM Bug #21191 (Fix Under Review): ceph: tell mds.* results in warning
- https://github.com/ceph/ceph/pull/17384
- 10:19 PM Bug #21191: ceph: tell mds.* results in warning
- John believes 9753a0065db8bfb03d86a7185bc636c7aa4c7af7 may be the cause.
- 10:15 PM Bug #21191 (Resolved): ceph: tell mds.* results in warning
- ...
08/29/2017
- 03:31 PM Documentation #21172 (Duplicate): doc: Export over NFS
- Create a document similar to RGW's NFS support, https://github.com/ceph/ceph/blob/master/doc/radosgw/nfs.rst
to help...
- 10:08 AM Bug #21168 (Fix Under Review): cap import/export message ordering issue
- https://github.com/ceph/ceph/pull/17340
- 09:59 AM Bug #21168 (Resolved): cap import/export message ordering issue
- There is a cap import/export message ordering issue.
Symptoms are:
kernel prints error "handle_cap_import: mismat...
08/28/2017
- 05:54 PM Feature #18490: client: implement delegation support in userland cephfs
- I made some progress today. I got ganesha over ceph to hand out a read delegation. Once I tried to force a recall (by...
- 01:51 PM Feature #21156 (Resolved): mds: speed up recovery with many open inodes
- Opening inodes during the rejoin stage is slow when clients have a large number of caps.
Currently mds journal open inode...
- 01:42 PM Bug #21083: client: clean up header to isolate real public methods and entry points for client_lock
- Should be reorganized with an eye toward finer grained locks, along with a client_lock audit. -Jeff
- 01:36 PM Bug #21058: mds: remove UNIX file permissions binary dependency
- May not be necessary as bits are defined by POSIX. Should still look for other dependencies which may vary.
- 12:54 PM Bug #21153 (Fix Under Review): Incorrect grammar in FS message "1 filesystem is have a failed mds...
- https://github.com/ceph/ceph/pull/17301
- 12:52 PM Bug #21153 (Resolved): Incorrect grammar in FS message "1 filesystem is have a failed mds daemon"
- 06:53 AM Bug #21149: SubsystemMap.h: 62: FAILED assert(sub < m_subsys.size())
- I think the exception was triggered by writing the debug message before reading ceph config.
*PR* https://github.com...
- 06:49 AM Bug #21149 (Rejected): SubsystemMap.h: 62: FAILED assert(sub < m_subsys.size())
- When I run the Hadoop write test, the following exception occurs (not 100% of the time):
/clove/vm/renhw/ceph/rpmbuild/BUILD/ce...
- 03:34 AM Bug #21070 (Fix Under Review): MDS: MDS is laggy or crashed when deleting a large number of files
- https://github.com/ceph/ceph/pull/17291
- 03:17 AM Bug #21070: MDS: MDS is laggy or crashed when deleting a large number of files
- For a seeky readdir on a large directory:
int fd = open("/mnt/ceph", O_RDONLY | O_DIRECTORY)
lseek(fd, xxxxxx, SEEK_SE...
08/26/2017
- 02:29 AM Bug #21070: MDS: MDS is laggy or crashed when deleting a large number of files
- Your patch can solve this problem; I tested it twice today. :-)
My modification is just to verify the dirfrag offset cau...
08/25/2017
- 09:29 PM Bug #20535 (Resolved): mds segmentation fault ceph_lock_state_t::get_overlapping_locks
- Webert, the backport is merged so I'm marking this as resolved. If you experience this particular issue again, please...
- 09:27 PM Backport #20564 (Resolved): jewel: mds segmentation fault ceph_lock_state_t::get_overlapping_locks
- 09:22 PM Bug #20596 (Fix Under Review): MDSMonitor: obsolete `mds dump` and other deprecated mds commands
- https://github.com/ceph/ceph/pull/17266
- 01:33 PM Bug #21070: MDS: MDS is laggy or crashed when deleting a large number of files
- Does my patch alone not work? Your change will make seeky readdir on a directory inefficient.
- 08:28 AM Bug #21070: MDS: MDS is laggy or crashed when deleting a large number of files
- In addition, I set "offset_hash" to 0 when offset_str is empty, which can solve this problem.
Modify the code as fol...
- 08:20 AM Bug #21070: MDS: MDS is laggy or crashed when deleting a large number of files
- OK, I will test it.
- 02:02 AM Bug #21091: StrayManager::truncate is broken
- yes
08/24/2017
- 09:52 PM Bug #20990 (Resolved): mds,mgr: add 'is_valid=false' when failed to parse caps
- 09:52 PM Backport #21047 (Resolved): luminous: mds,mgr: add 'is_valid=false' when failed to parse caps
- 04:58 PM Backport #21047: luminous: mds,mgr: add 'is_valid=false' when failed to parse caps
- https://github.com/ceph/ceph/pull/17240
- 09:52 PM Bug #21064 (Resolved): FSCommands: missing wait for osdmap writeable + propose
- 09:52 PM Backport #21101 (Resolved): luminous: FSCommands: missing wait for osdmap writeable + propose
- 04:51 PM Backport #21101: luminous: FSCommands: missing wait for osdmap writeable + propose
- https://github.com/ceph/ceph/pull/17238
- 04:48 PM Backport #21101 (Resolved): luminous: FSCommands: missing wait for osdmap writeable + propose
- 09:51 PM Bug #21065 (Resolved): client: UserPerm delete with supp. groups allocated by malloc generates va...
- 03:49 AM Bug #21065 (Pending Backport): client: UserPerm delete with supp. groups allocated by malloc gene...
- 09:51 PM Backport #21100 (Resolved): luminous: client: UserPerm delete with supp. groups allocated by mall...
- 04:51 PM Backport #21100: luminous: client: UserPerm delete with supp. groups allocated by malloc generate...
- https://github.com/ceph/ceph/pull/17237
- 04:46 PM Backport #21100 (Resolved): luminous: client: UserPerm delete with supp. groups allocated by mall...
- 09:51 PM Bug #21078 (Resolved): df hangs in ceph-fuse
- 03:49 AM Bug #21078 (Pending Backport): df hangs in ceph-fuse
- 09:51 PM Backport #21099 (Resolved): luminous: client: df hangs in ceph-fuse
- 04:45 PM Backport #21099: luminous: client: df hangs in ceph-fuse
- https://github.com/ceph/ceph/pull/17236
- 04:40 PM Backport #21099 (Resolved): luminous: client: df hangs in ceph-fuse
- 09:51 PM Bug #21082 (Resolved): client: the client_lock is not taken for Client::getcwd
- 03:50 AM Bug #21082 (Pending Backport): client: the client_lock is not taken for Client::getcwd
- 09:50 PM Backport #21098 (Resolved): luminous: client: the client_lock is not taken for Client::getcwd
- 04:43 PM Backport #21098: luminous: client: the client_lock is not taken for Client::getcwd
- https://github.com/ceph/ceph/pull/17235
- 04:37 PM Backport #21098 (Resolved): luminous: client: the client_lock is not taken for Client::getcwd
- 07:05 PM Bug #21091: StrayManager::truncate is broken
- This only affects deletions of snapshotted files right?
- 09:10 AM Bug #21091 (Fix Under Review): StrayManager::truncate is broken
- https://github.com/ceph/ceph/pull/17219
- 08:56 AM Bug #21091 (Resolved): StrayManager::truncate is broken
- 05:23 PM Backport #21114 (Resolved): luminous: qa: FS_DEGRADED spurious health warnings in some sub-suites
- https://github.com/ceph/ceph/pull/17474
- 05:23 PM Backport #21113 (Resolved): jewel: get_quota_root sends lookupname op for every buffered write
- https://github.com/ceph/ceph/pull/17396
- 05:23 PM Backport #21112 (Resolved): luminous: get_quota_root sends lookupname op for every buffered write
- https://github.com/ceph/ceph/pull/17473
- 05:23 PM Backport #21107 (Resolved): luminous: fs: client/mds has wrong check to clear S_ISGID on chown
- https://github.com/ceph/ceph/pull/17471
- 05:22 PM Backport #21103 (Resolved): luminous: client: missing space in some client debug log messages
- https://github.com/ceph/ceph/pull/17469
- 10:49 AM Bug #21070: MDS: MDS is laggy or crashed when deleting a large number of files
- please try the attached patch
- 08:45 AM Bug #21070: MDS: MDS is laggy or crashed when deleting a large number of files
- The address of *dn (mds.0.server dn1-10x600000000000099) is overflowed,
but we have not found the reason.
- 08:42 AM Bug #21070: MDS: MDS is laggy or crashed when deleting a large number of files
- I added some debug print statements, and the problem can be reproduced:
1. The right to print...
08/23/2017
- 08:52 PM Bug #20535: mds segmentation fault ceph_lock_state_t::get_overlapping_locks
- Webert Lima wrote:
> My active MDS had committed suicide due to "dne in mds map" (this is happening a lot but I don'...
- 08:18 PM Bug #21065 (Fix Under Review): client: UserPerm delete with supp. groups allocated by malloc gene...
- https://github.com/ceph/ceph/pull/17204
- 07:43 PM Bug #21082 (Fix Under Review): client: the client_lock is not taken for Client::getcwd
- https://github.com/ceph/ceph/pull/17205
- 03:57 PM Bug #21082 (Resolved): client: the client_lock is not taken for Client::getcwd
- https://github.com/ceph/ceph/blob/db16d50cc56f5221d7bcdb28a29d5e0a456cba94/src/client/Client.cc#L9387-L9425
We als...
- 06:13 PM Feature #16016 (Resolved): Populate DamageTable from forward scrub
- 06:12 PM Backport #20294 (Resolved): jewel: Populate DamageTable from forward scrub
- 06:12 PM Feature #18509 (Resolved): MDS: damage reporting by ino number is useless
- 06:12 PM Backport #19679 (Resolved): jewel: MDS: damage reporting by ino number is useless
- 06:10 PM Bug #19291 (Resolved): mds: log rotation doesn't work if mds has respawned
- 06:10 PM Backport #19466 (Resolved): jewel: mds: log rotation doesn't work if mds has respawned
- 05:57 PM Cleanup #21069 (Pending Backport): client: missing space in some client debug log messages
- 02:51 AM Cleanup #21069: client: missing space in some client debug log messages
- *PR*: https://github.com/ceph/ceph/pull/17175
- 02:45 AM Cleanup #21069 (Resolved): client: missing space in some client debug log messages
- 2017-08-11 19:05:17.344361 7fb87b1eb700 20 client.15557 may_delete0x10000000522.head(faked_ino=0 ref=3 ll_ref=0 cap_r...
- 04:52 PM Bug #21064 (Pending Backport): FSCommands: missing wait for osdmap writeable + propose
- 04:06 PM Bug #21083 (New): client: clean up header to isolate real public methods and entry points for cli...
- With the recent revelation that the client_lock was not locked for Client::getcwd [1] and other history of missing lo...
- 02:10 PM Bug #21081 (Duplicate): mon: get writeable osdmap for added data pool
- 02:08 PM Bug #21081 (Duplicate): mon: get writeable osdmap for added data pool
- https://github.com/ceph/ceph/pull/17163
- 02:08 PM Bug #20945 (Pending Backport): get_quota_root sends lookupname op for every buffered write
- 02:06 PM Bug #21004 (Pending Backport): fs: client/mds has wrong check to clear S_ISGID on chown
- 01:55 PM Bug #21078 (Fix Under Review): df hangs in ceph-fuse
- https://github.com/ceph/ceph/pull/17199
- 01:49 PM Bug #21078: df hangs in ceph-fuse
- yep. mon says:...
- 01:48 PM Bug #21078: df hangs in ceph-fuse
- Loops like this:...
- 01:42 PM Bug #21078 (Resolved): df hangs in ceph-fuse
- See "[ceph-users] ceph-fuse hanging on df with ceph luminous >= 12.1.3".
The filesystem works normally, except for...
- 01:51 PM Bug #20892 (Pending Backport): qa: FS_DEGRADED spurious health warnings in some sub-suites
- 11:12 AM Backport #21067: jewel: MDS integer overflow fix
- OK, backport staged (see description)
- 11:11 AM Backport #21067 (In Progress): jewel: MDS integer overflow fix
- 11:09 AM Backport #21067: jewel: MDS integer overflow fix
- h3. description
Please backport commit 0d74334332fb70212fc71f1130e886952920038d (mds: use client_t instead of int ...
- 06:47 AM Bug #19755 (Resolved): MDS became unresponsive when truncating a very large file
- 06:42 AM Backport #20025 (Resolved): jewel: MDS became unresponsive when truncating a very large file
- 04:21 AM Bug #21071: qa: test_misc creates metadata pool with dummy object resulting in WRN: POOL_APP_NOT_...
- Doug, please take this one.
- 04:20 AM Bug #21071 (Resolved): qa: test_misc creates metadata pool with dummy object resulting in WRN: PO...
- ...
- 03:32 AM Bug #21070: MDS: MDS is laggy or crashed when deleting a large number of files
- I enabled the log to investigate the problem; the log information is as follows:...
- 03:06 AM Bug #21070 (Resolved): MDS: MDS is laggy or crashed when deleting a large number of files
- We plan to use mdtest to create on the order of 1 million files in a directory mounted via ceph-fuse; the command is as follo...
08/22/2017
- 10:02 PM Backport #21067 (Resolved): jewel: MDS integer overflow fix
- https://github.com/ceph/ceph/pull/17188
- 09:44 PM Bug #21066 (New): qa: racy test_export_pin check for export_targets
- ...
- 08:59 PM Bug #21065: client: UserPerm delete with supp. groups allocated by malloc generates valgrind error
- We'll need to convert the UserPerm constructor and such to use malloc/free. ceph_userperm_new can be called from C co...
- 08:30 PM Bug #21065 (Resolved): client: UserPerm delete with supp. groups allocated by malloc generates va...
- ...
- 07:13 PM Bug #21064 (Fix Under Review): FSCommands: missing wait for osdmap writeable + propose
- https://github.com/ceph/ceph/pull/17163
- 07:09 PM Bug #21064 (Resolved): FSCommands: missing wait for osdmap writeable + propose
- ...
- 04:04 PM Backport #21047: luminous: mds,mgr: add 'is_valid=false' when failed to parse caps
- Nathan Cutler wrote:
> Patrick, do you mean that the following three PRs should be backported in a single PR targeti...
- 07:09 AM Backport #21047: luminous: mds,mgr: add 'is_valid=false' when failed to parse caps
- Patrick, do you mean that the following three PRs should be backported in a single PR targeting luminous?
* https:...
- 12:36 PM Bug #16881: RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
- Note: the failure is transient (occurred in 2 out of 5 runs so far).
- 11:37 AM Bug #16881: RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
- Hello CephFS developers, I am reproducing this bug in the latest jewel integration branch. Here are the prime suspect...
08/21/2017
- 09:41 PM Backport #21047: luminous: mds,mgr: add 'is_valid=false' when failed to parse caps
- Nathan, the fix for http://tracker.ceph.com/issues/21027 should also make it into Luminous with this backport. I'm go...
- 04:13 PM Backport #21047 (Resolved): luminous: mds,mgr: add 'is_valid=false' when failed to parse caps
- 08:53 PM Bug #21058 (New): mds: remove UNIX file permissions binary dependency
- The MDS has various file permission/type bits pulled from UNIX headers. These could be different depending on what sy...
- 07:46 PM Bug #20988: client: dual client segfault with racing ceph_shutdown
- Moving this to main "Ceph" project as it looks more like a problem in the AdminSocket code. The thing seems to mainly...
- 06:13 PM Bug #20988: client: dual client segfault with racing ceph_shutdown
- Here's a testcase that seems to trigger it fairly reliably. You may have to run it a few times to get it to crash but...
- 05:03 PM Bug #20988: client: dual client segfault with racing ceph_shutdown
- Correct. I'll see if I can roll up a testcase for this when I get a few mins.
- 04:50 PM Bug #20988: client: dual client segfault with racing ceph_shutdown
- Jeff, just confirming this bug is with two client instances and not one instance with two threads?
- 02:59 PM Bug #21007 (Resolved): The ceph fs set mds_max command must be updated
- Patch merged: https://github.com/ceph/ceph/pull/17044
- 01:49 PM Feature #18490: client: implement delegation support in userland cephfs
- The latest set has timeout support that basically does a client->unmount() on the thing. With the patches for this bu...
- 11:01 AM Feature #18490: client: implement delegation support in userland cephfs
- For the clean-ish shutdown case, it would be neat to have a common code path with the -EBLACKLISTED handling (see Cli...
- 01:43 PM Bug #21025: racy is_mounted() checks in libcephfs
- PR is here:
https://github.com/ceph/ceph/pull/17095
- 01:40 PM Bug #21004 (Fix Under Review): fs: client/mds has wrong check to clear S_ISGID on chown
- 11:41 AM Bug #20535: mds segmentation fault ceph_lock_state_t::get_overlapping_locks
- Reporting in, I've had the first incident after the version upgrade.
My active MDS had committed suicide due to "d...
- 09:03 AM Bug #20892: qa: FS_DEGRADED spurious health warnings in some sub-suites
- kcephfs suite has similar issue:
http://pulpito.ceph.com/teuthology-2017-08-19_05:20:01-kcephfs-luminous-testing-bas...
08/17/2017
- 06:41 PM Feature #19109 (Resolved): Use data pool's 'df' for statfs instead of global stats, if there is o...
- Oh, oops. I forgot I merged this into luminous. Thanks Doug.
- 06:22 PM Feature #19109: Use data pool's 'df' for statfs instead of global stats, if there is only one dat...
- There's no need to wait for the kernel client since the message encoding is versioned. This has already been merged i...
- 06:14 PM Feature #19109 (Pending Backport): Use data pool's 'df' for statfs instead of global stats, if th...
- Waiting for
https://github.com/ceph/ceph-client/commit/b7f94d6a95dfe2399476de1e0d0a7c15c01611d0
to be merged up...
- 03:15 PM Bug #21025 (Resolved): racy is_mounted() checks in libcephfs
- libcephfs.cc has a bunch of is_mounted checks like this in it:...
- 03:02 PM Feature #18490: client: implement delegation support in userland cephfs
- Patrick Donnelly wrote:
> here "client" means Ganesha. What about how does Ganesha handle its client not releasing...
08/16/2017
- 11:08 PM Feature #18490: client: implement delegation support in userland cephfs
- Jeff Layton wrote:
> The main work to be done at this point is handling clients that don't return the delegation in ...
- 12:57 PM Feature #18490: client: implement delegation support in userland cephfs
- I've been working on this for the last week or so, so this is a good place to pause and provide an update:
I have ... - 09:47 PM Bug #20990 (Pending Backport): mds,mgr: add 'is_valid=false' when failed to parse caps
- 06:48 PM Bug #21014 (Resolved): fs: reduce number of helper debug messages at level 5 for client
- I think we want just the inital log message for each ll_ operation and not the helpers (e.g. _rmdir).
See: http://...
- 06:23 PM Bug #21004: fs: client/mds has wrong check to clear S_ISGID on chown
- https://github.com/ceph/ceph/pull/17053
- 02:58 AM Bug #21004: fs: client/mds has wrong check to clear S_ISGID on chown
- The logic is from kernel_src/fs/attr.c...
- 09:54 AM Bug #21007 (Fix Under Review): The ceph fs set mds_max command must be updated
- Created this PR: https://github.com/ceph/ceph/pull/17044
- 08:14 AM Bug #21007 (Resolved): The ceph fs set mds_max command must be updated
- Copied from Bugzilla:
Ramakrishnan Periyasamy 2017-08-16 09:14:21 CEST
Description of problem:
Upstream docume...
- 01:20 AM Bug #21002 (Closed): set option "mon_pool_quota_warn_threshold && mon_pool_quota_crit_threshold" ...
08/15/2017
- 09:58 PM Bug #21004: fs: client/mds has wrong check to clear S_ISGID on chown
- Well actually the test above fails because the chown was a no-op due to an earlier chown failure. In any case I've fo...
- 09:41 PM Bug #21004 (In Progress): fs: client/mds has wrong check to clear S_ISGID on chown
- 09:41 PM Bug #21004 (Resolved): fs: client/mds has wrong check to clear S_ISGID on chown
- Reported in: https://bugzilla.redhat.com/show_bug.cgi?id=1480182
This causes the failure in test 88 from https://b...
- 09:12 AM Bug #21002: set option "mon_pool_quota_warn_threshold && mon_pool_quota_crit_threshold" fail
- Please close it; no error.
- 08:59 AM Bug #21002: set option "mon_pool_quota_warn_threshold && mon_pool_quota_crit_threshold" fail
- ...
- 08:58 AM Bug #21002 (Closed): set option "mon_pool_quota_warn_threshold && mon_pool_quota_crit_threshold" ...
- err:...
08/13/2017
- 03:28 AM Bug #20990 (Resolved): mds,mgr: add 'is_valid=false' when failed to parse caps
- mds,mgr: add 'is_valid=false' when failed to parse caps.
Backport needed for the PRs:
https://github.com/ceph/cep...
08/12/2017
- 01:44 PM Bug #20988 (Resolved): client: dual client segfault with racing ceph_shutdown
- I have a testcase that I'm working on that has two threads, each with their own ceph_mount_info. If those threads end...
08/11/2017
08/10/2017
- 02:27 PM Bug #20938: CephFS: concurrent access to file from multiple nodes blocks for seconds
- Ahh yeah, I remember seeing that in there a while back. I guess the danger is that we can end up instantiating an ino...
- 02:29 AM Bug #20938: CephFS: concurrent access to file from multiple nodes blocks for seconds
- What worries me is this comment in fuse_lowlevel.h...
- 12:27 PM Backport #20972: jewel ceph-fuse segfaults at mount time, assert in ceph::log::Log::stop
- Thanks, Dan! Jewel backport staged: https://github.com/ceph/ceph/pull/16963
- 12:26 PM Backport #20972 (In Progress): jewel ceph-fuse segfaults at mount time, assert in ceph::log::Log:...
- 12:25 PM Backport #20972: jewel ceph-fuse segfaults at mount time, assert in ceph::log::Log::stop
- h3. description
10.2.9 introduces a regression where ceph-fuse will segfault at mount time because of an attempt ...
- 11:52 AM Backport #20972: jewel ceph-fuse segfaults at mount time, assert in ceph::log::Log::stop
- Confirmed that 10.2.9 plus cbf18b1d80d214e4203e88637acf4b0a0a201ee7 does not segfault.
- 09:04 AM Backport #20972 (Resolved): jewel ceph-fuse segfaults at mount time, assert in ceph::log::Log::stop
- https://github.com/ceph/ceph/pull/16963
- 12:24 PM Bug #18157 (Pending Backport): ceph-fuse segfaults on daemonize
- 09:42 AM Bug #20945: get_quota_root sends lookupname op for every buffered write
- Could you also please add the luminous backport tag for this?
- 09:23 AM Bug #20945: get_quota_root sends lookupname op for every buffered write
- https://github.com/ceph/ceph/pull/16959
- 02:08 AM Bug #20945: get_quota_root sends lookupname op for every buffered write
- lookupname is for the following case:
directories /a and /b have non-default quotas
client A is writing /a/file
client ...
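For context on the scenario above: CephFS directory quotas are configured through virtual extended attributes. A minimal sketch, assuming a ceph-fuse mount at /mnt/cephfs (the mount point and directory names are illustrative, not from this bug report):

```shell
# Assumes a ceph-fuse mount at /mnt/cephfs; /a and /b mirror the scenario above.
setfattr -n ceph.quota.max_bytes -v 100000000 /mnt/cephfs/a   # ~100 MB byte quota
setfattr -n ceph.quota.max_bytes -v 100000000 /mnt/cephfs/b
getfattr -n ceph.quota.max_bytes /mnt/cephfs/a                # read the quota back
```

With non-default quotas set this way, the client must resolve the nearest quota root on writes, which is where the extra lookupname ops come from.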
08/09/2017
- 04:39 PM Bug #20945: get_quota_root sends lookupname op for every buffered write
- This seems to work...
- 10:45 AM Bug #20945: get_quota_root sends lookupname op for every buffered write
- Thanks. You're right. Here's the trivial reproducer:...
- 08:46 AM Bug #20945: get_quota_root sends lookupname op for every buffered write
- Enabling quota and writing to an unlinked file can reproduce this easily. get_quota_root() uses the dentry in dn_set if it has...
- 01:57 PM Bug #20938: CephFS: concurrent access to file from multiple nodes blocks for seconds
- FUSE is the only caller of ->ll_lookup so a simpler fix might be to just change the mask field to 0 in the _lookup ca...
- 10:39 AM Bug #20938: CephFS: concurrent access to file from multiple nodes blocks for seconds
- this slowness is due to limitation of fuse API. The attached patch is a workaround. (not 100% sure it doesn't break a...
08/08/2017
- 05:35 PM Feature #19109: Use data pool's 'df' for statfs instead of global stats, if there is only one dat...
- Partially resolved by: https://github.com/ceph/ceph/commit/eabe6626141df3f1b253c880aa6cb852c8b7ac1d
- 02:25 PM Bug #20938: CephFS: concurrent access to file from multiple nodes blocks for seconds
- I'm only running the fuse client. I see the problem both on Jewel (10.2.9 servers + fuse client) and on Luminous RC ...
- 02:22 PM Bug #20938: CephFS: concurrent access to file from multiple nodes blocks for seconds
- I tried on latest Luminous RC + 4.12 kernel client. I got about 7000 opens/second in two nodes read-write case.
did ... - 02:00 PM Bug #20945: get_quota_root sends lookupname op for every buffered write
- Our user confirmed that without client-quota their job finishes quickly:...
- 01:46 PM Bug #20945 (Resolved): get_quota_root sends lookupname op for every buffered write
- We have a CAD use-case (hspice) which sees very slow buffered writes, apparently due to the quota code. (We haven't y...
08/07/2017
- 05:15 PM Feature #20885: add syntax for generating OSD/MDS auth caps for cephfs
- PR to master was https://github.com/ceph/ceph/pull/16761
- 03:33 PM Bug #20938 (New): CephFS: concurrent access to file from multiple nodes blocks for seconds
- When accessing the same file opened for read/write on multiple nodes via ceph-fuse, performance drops by about 3 orde...