Activity
From 08/13/2017 to 09/11/2017
09/11/2017
- 10:16 PM Bug #21311: ceph perf dump should report standby MDSes
- This is a collectd thing, which isn't to say that we shouldn't care, but... I'm not sure bugs against collectd really...
- 05:33 PM Bug #21311: ceph perf dump should report standby MDSes
- Doug, please take this one.
- 08:45 PM Bug #20945 (Resolved): get_quota_root sends lookupname op for every buffered write
- 08:44 PM Backport #21112 (Resolved): luminous: get_quota_root sends lookupname op for every buffered write
- 08:02 PM Backport #21359 (Resolved): luminous: racy is_mounted() checks in libcephfs
- https://github.com/ceph/ceph/pull/17875
- 07:33 PM Bug #21337: luminous: MDS is not getting past up:replay on Luminous cluster
- The log file with *debug_mds=10* from MDS startup to reaching the assert is 110GB. I am attaching the last 50K lines...
- 08:48 AM Bug #21337: luminous: MDS is not getting past up:replay on Luminous cluster
- Please set debug_mds=10, restart the MDS, and upload the full log. To recover the situation, just replace the 'assert(in)'...
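The recovery suggestion above (replacing the hard `assert(in)` so replay can continue past a missing inode) can be sketched as a toy model. This is not the actual MDS replay code; the function and variable names are hypothetical, and the real fix lives in the C++ MDS:

```python
def replay_events(events, inode_table, log):
    """Toy model of journal replay: instead of a hard assert when an
    inode referenced by the journal is missing, note it and keep going.
    (Names are hypothetical; the real change is in the C++ MDS code.)"""
    replayed = 0
    for ino in events:
        inode = inode_table.get(ino)
        if inode is None:
            # was: assert(in) -- now log the gap and skip this event
            log.append(f"replay: missing inode {ino:#x}, skipping")
            continue
        replayed += 1
    return replayed

log = []
n = replay_events([0x100, 0x101, 0x102], {0x100: "a", 0x102: "c"}, log)
print(n, len(log))  # 2 1
```

The trade-off, as the comment thread implies, is that skipping turns a crash into a possibly inconsistent replay, which is why it is framed as a recovery step rather than a fix.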
- 06:43 AM Bug #21337 (Resolved): luminous: MDS is not getting past up:replay on Luminous cluster
- On my Luminous 12.2.0 test cluster, after it has been running for the last few days, the MDS process is not getting past up:re...
- 05:46 PM Backport #21357: luminous: mds: segfault during `rm -rf` of large directory
- Zheng, please take a look.
- 05:45 PM Backport #21357 (Resolved): luminous: mds: segfault during `rm -rf` of large directory
- https://github.com/ceph/ceph/pull/17686
- 05:28 PM Bug #20988: client: dual client segfault with racing ceph_shutdown
- I spent a couple of hours today crawling over the code in ganesha and ceph that handles the CephContext. We have rout...
- 05:22 PM Bug #21025 (Pending Backport): racy is_mounted() checks in libcephfs
- 05:07 PM Bug #21025 (Resolved): racy is_mounted() checks in libcephfs
- PR is merged.
- 01:58 PM Bug #21275 (Fix Under Review): test hang after mds evicts kclient
- Patch is on ceph-devel.
09/08/2017
- 08:20 PM Backport #21324 (Resolved): luminous: ceph: tell mds.* results in warning
- https://github.com/ceph/ceph/pull/17729
- 08:20 PM Backport #21323 (Resolved): luminous: MDCache::try_subtree_merge() may print N^2 lines of debug m...
- https://github.com/ceph/ceph/pull/17712
- 08:20 PM Backport #21322 (Resolved): luminous: MDS: standby-replay mds should avoid initiating subtree export
- https://github.com/ceph/ceph/pull/17714
- 08:20 PM Backport #21321 (Resolved): luminous: mds: asok command error merged with partial Formatter output
- https://github.com/ceph/ceph/pull/17870
- 06:26 PM Bug #21191 (Pending Backport): ceph: tell mds.* results in warning
- 06:26 PM Bug #21222 (Pending Backport): MDS: standby-replay mds should avoid initiating subtree export
- 06:26 PM Bug #21221 (Pending Backport): MDCache::try_subtree_merge() may print N^2 lines of debug message
- 06:25 PM Bug #21252 (Pending Backport): mds: asok command error merged with partial Formatter output
- 05:14 PM Cleanup #21069 (Resolved): client: missing space in some client debug log messages
- 05:14 PM Backport #21103 (Resolved): luminous: client: missing space in some client debug log messages
- 03:51 PM Backport #21103: luminous: client: missing space in some client debug log messages
- https://github.com/ceph/ceph/pull/17469 merged
- 03:10 PM Bug #21311 (Rejected): ceph perf dump should report standby MDSes
- This was discovered when observing the cephmetrics dashboard monitoring the Sepia cluster....
- 01:52 PM Bug #21275: test hang after mds evicts kclient
- Got it. I think we've hit problems like that in NFS, and what we had to do is save copies of the fields from utsname(...
- 01:43 PM Bug #21275: test hang after mds evicts kclient
- ...
- 06:14 AM Bug #21304 (Can't reproduce): mds v12.2.0 crashing
The luminous MDS crashes a few times a day; heavy activity (e.g. untarring a kernel tarball) makes it crash within a few minutes...
09/07/2017
- 01:52 PM Bug #20988: client: dual client segfault with racing ceph_shutdown
- cc'ing Matt on this bug, as it may have implications for the new code that can fetch config info out of RADOS:
Bas...
- 01:10 PM Backport #21267 (In Progress): luminous: Incorrect grammar in FS message "1 filesystem is have a ...
- 01:09 PM Backport #21278 (In Progress): luminous: the standbys are not updated via "ceph tell mds.* command"
- 07:35 AM Backport #21278 (Resolved): luminous: the standbys are not updated via "ceph tell mds.* command"
- https://github.com/ceph/ceph/pull/17565
- 08:27 AM Bug #21274 (Fix Under Review): Client: if request gets aborted, its reference leaks
- https://github.com/ceph/ceph/pull/17545
- 01:48 AM Bug #21274 (Resolved): Client: if request gets aborted, its reference leaks
- /a/pdonnell-2017-09-06_15:30:20-fs-wip-pdonnell-testing-20170906-distro-basic-smithi/1601384/teuthology.log
log of...
- 07:47 AM Backport #21113 (Resolved): jewel: get_quota_root sends lookupname op for every buffered write
- 07:43 AM Bug #18157 (Resolved): ceph-fuse segfaults on daemonize
- 07:43 AM Backport #20972 (Resolved): jewel ceph-fuse segfaults at mount time, assert in ceph::log::Log::stop
- 05:44 AM Bug #21275 (Resolved): test hang after mds evicts kclient
- http://pulpito.ceph.com/zyan-2017-09-07_03:18:23-kcephfs-master-testing-basic-mira/
http://qa-proxy.ceph.com/teuth...
- 01:22 AM Bug #21230 (Pending Backport): the standbys are not updated via "ceph tell mds.* command"
09/06/2017
- 07:39 PM Backport #21267 (Resolved): luminous: Incorrect grammar in FS message "1 filesystem is have a fai...
- https://github.com/ceph/ceph/pull/17566
- 09:05 AM Bug #21252: mds: asok command error merged with partial Formatter output
- Sorry, the bug was introduced by my commit:...
- 03:52 AM Bug #21153 (Pending Backport): Incorrect grammar in FS message "1 filesystem is have a failed mds...
- 03:51 AM Bug #20337 (Resolved): test_rebuild_simple_altpool triggers MDS assertion
09/05/2017
- 09:48 PM Bug #21252: mds: asok command error merged with partial Formatter output
- I should note: the error itself is very concerning because the only way for dump_cache to fail is if it's operating o...
- 09:46 PM Bug #21252 (Fix Under Review): mds: asok command error merged with partial Formatter output
- https://github.com/ceph/ceph/pull/17506
- 08:20 PM Bug #21252 (Resolved): mds: asok command error merged with partial Formatter output
- ...
- 09:35 PM Bug #21222 (Fix Under Review): MDS: standby-replay mds should avoid initiating subtree export
- 03:39 PM Bug #16709 (Resolved): No output for "ceph mds rmfailed 0 --yes-i-really-mean-it" command
- 03:38 PM Bug #18660 (Resolved): fragment space check can cause replayed request fail
- 03:38 PM Bug #18661 (Resolved): Test failure: test_open_inode
- 03:38 PM Bug #18877 (Resolved): mds/StrayManager: avoid reusing deleted inode in StrayManager::_purge_stra...
- 03:38 PM Bug #18941 (Resolved): buffer overflow in test LibCephFS.DirLs
- 03:38 PM Bug #19118 (Resolved): MDS heartbeat timeout during rejoin, when working with large amount of cap...
- 03:37 PM Bug #19406 (Resolved): MDS server crashes due to inconsistent metadata.
- 03:32 PM Bug #19955 (Resolved): Too many stat ops when MDS trying to probe a large file
- 03:32 PM Backport #20149 (Rejected): kraken: Too many stat ops when MDS trying to probe a large file
- Kraken is EOL.
- 03:32 PM Bug #20055 (Resolved): Journaler may execute on_safe contexts prematurely
- 03:31 PM Backport #20141 (Rejected): kraken: Journaler may execute on_safe contexts prematurely
- Kraken is EOL.
- 11:07 AM Bug #20988: client: dual client segfault with racing ceph_shutdown
- Hmm. I'm not sure that really helps. Here's the doc comment over ceph_create_with_context:...
- 09:46 AM Bug #20988: client: dual client segfault with racing ceph_shutdown
- I found a workaround. We can create single CephContext for multiple ceph_mount....
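The workaround above, one shared CephContext across multiple mounts, points at the underlying lifetime race: two clients shutting down concurrently can tear down shared state twice. A toy refcounting model (all names hypothetical; this is not the libcephfs API) shows why only the last holder should destroy the context:

```python
import threading

class SharedContext:
    """Toy stand-in for a refcounted shared context (names hypothetical)."""
    def __init__(self):
        self._lock = threading.Lock()
        self._refs = 0
        self.alive = True

    def get(self):
        with self._lock:
            self._refs += 1

    def put(self):
        with self._lock:
            self._refs -= 1
            destroy = (self._refs == 0)
        if destroy:
            self.alive = False  # only the final reference tears down

ctx = SharedContext()

def client(ctx):
    ctx.get()  # each "mount" takes a reference
    ctx.put()  # each "shutdown" drops it; never frees under another user

ctx.get()  # creator's reference keeps ctx alive while clients race
threads = [threading.Thread(target=client, args=(ctx,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
ctx.put()
print(ctx.alive)  # False once the final reference is dropped
```

Without the creator's outstanding reference, either racing client could drop the count to zero while the other still uses the context, which is the shape of the reported segfault.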
- 09:35 AM Backport #21114 (In Progress): luminous: qa: FS_DEGRADED spurious health warnings in some sub-suites
- 09:33 AM Backport #21112 (In Progress): luminous: get_quota_root sends lookupname op for every buffered write
- 09:28 AM Backport #21107 (In Progress): luminous: fs: client/mds has wrong check to clear S_ISGID on chown
- 09:22 AM Backport #21103 (In Progress): luminous: client: missing space in some client debug log messages
- 08:45 AM Bug #21193 (Duplicate): ceph.in: `ceph tell mds.* injectargs` does not update standbys
- http://tracker.ceph.com/issues/21230
- 08:40 AM Bug #21230 (Fix Under Review): the standbys are not updated via "ceph tell mds.* command"
- https://github.com/ceph/ceph/pull/17463
- 07:59 AM Bug #21230 (Resolved): the standbys are not updated via "ceph tell mds.* command"
09/04/2017
- 07:11 PM Bug #20178 (Resolved): df reports negative disk "used" value when quota exceed
- 07:11 PM Backport #20350 (Rejected): kraken: df reports negative disk "used" value when quota exceed
- Kraken is EOL.
- 07:11 PM Backport #20349 (Resolved): jewel: df reports negative disk "used" value when quota exceed
- 07:10 PM Bug #20340 (Resolved): cephfs permission denied until second client accesses file
- 07:10 PM Backport #20404 (Rejected): kraken: cephfs permission denied until second client accesses file
- Kraken is EOL.
- 07:10 PM Backport #20403 (Resolved): jewel: cephfs permission denied until second client accesses file
- 06:43 PM Bug #21221 (Fix Under Review): MDCache::try_subtree_merge() may print N^2 lines of debug message
- https://github.com/ceph/ceph/pull/17456
- 09:11 AM Bug #21221 (Resolved): MDCache::try_subtree_merge() may print N^2 lines of debug message
- MDCache::try_subtree_merge(dirfrag) calls MDCache::try_subtree_merge_at() for each subtree in the dirfrag. try_subtre...
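The quadratic logging described in this report can be modeled in a few lines of Python. This is a toy sketch, not the MDCache code: a per-subtree helper that dumps the whole subtree list on every call emits N*N lines, while logging the map once after the loop emits N:

```python
# Toy model: the outer function calls a per-subtree helper, and the helper
# logs the entire subtree list each time -> N subtrees yield N*N log lines.
def merge_all_verbose(subtrees, log):
    for _ in subtrees:
        for s in subtrees:          # helper dumps every subtree per call
            log.append(f"subtree {s}")

def merge_all_quiet(subtrees, log):
    for _ in subtrees:
        pass                        # per-subtree work without dumping
    for s in subtrees:              # dump the subtree map once at the end
        log.append(f"subtree {s}")

verbose, quiet = [], []
subtrees = list(range(100))
merge_all_verbose(subtrees, verbose)
merge_all_quiet(subtrees, quiet)
print(len(verbose), len(quiet))  # 10000 100
```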
- 11:14 AM Bug #21222: MDS: standby-replay mds should avoid initiating subtree export
- Here is a pull request for this bug fix: https://github.com/ceph/ceph/pull/17452; could you review it, @Patrick?
- 11:11 AM Bug #21222: MDS: standby-replay mds should avoid initiating subtree export
- Although for the latest code in master branch, this issue could be avoided by the destination check in export_dir:
...
- 10:24 AM Bug #21222 (Resolved): MDS: standby-replay mds should avoid initiating subtree export
- On jewel 10.2.7, using two active MDSes and two corresponding standby-replay MDSes.
When standby-replay replays the m...
08/31/2017
- 01:06 PM Feature #18490: client: implement delegation support in userland cephfs
- Here's a capture showing the delegation grant and recall (what can I say, I'm a proud parent). The delegation was rev...
- 12:47 PM Feature #18490: client: implement delegation support in userland cephfs
- I was able to get ganesha to hand out a v4.0 delegation today and recall it properly. So, PoC is successful!
There s...
- 11:12 AM Backport #21113 (In Progress): jewel: get_quota_root sends lookupname op for every buffered write
08/30/2017
- 11:17 PM Bug #21193 (Duplicate): ceph.in: `ceph tell mds.* injectargs` does not update standbys
- ...
- 10:54 PM Bug #21191 (Fix Under Review): ceph: tell mds.* results in warning
- https://github.com/ceph/ceph/pull/17384
- 10:19 PM Bug #21191: ceph: tell mds.* results in warning
- John believes 9753a0065db8bfb03d86a7185bc636c7aa4c7af7 may be the cause.
- 10:15 PM Bug #21191 (Resolved): ceph: tell mds.* results in warning
- ...
08/29/2017
- 03:31 PM Documentation #21172 (Duplicate): doc: Export over NFS
- Create a document similar to RGW's NFS support, https://github.com/ceph/ceph/blob/master/doc/radosgw/nfs.rst
to help...
- 10:08 AM Bug #21168 (Fix Under Review): cap import/export message ordering issue
- https://github.com/ceph/ceph/pull/17340
- 09:59 AM Bug #21168 (Resolved): cap import/export message ordering issue
- There is a cap import/export message ordering issue.
Symptoms are:
kernel prints error "handle_cap_import: mismat...
08/28/2017
- 05:54 PM Feature #18490: client: implement delegation support in userland cephfs
- I made some progress today. I got ganesha over ceph to hand out a read delegation. Once I tried to force a recall (by...
- 01:51 PM Feature #21156 (Resolved): mds: speed up recovery with many open inodes
- opening inode during rejoin stage is slow when clients have large number of caps.
Currently mds journal open inode...
- 01:42 PM Bug #21083: client: clean up header to isolate real public methods and entry points for client_lock
- Should be reorganized with an eye toward finer grained locks, along with a client_lock audit. -Jeff
- 01:36 PM Bug #21058: mds: remove UNIX file permissions binary dependency
- May not be necessary as bits are defined by POSIX. Should still look for other dependencies which may vary.
- 12:54 PM Bug #21153 (Fix Under Review): Incorrect grammar in FS message "1 filesystem is have a failed mds...
- https://github.com/ceph/ceph/pull/17301
- 12:52 PM Bug #21153 (Resolved): Incorrect grammar in FS message "1 filesystem is have a failed mds daemon"
- 06:53 AM Bug #21149: SubsystemMap.h: 62: FAILED assert(sub < m_subsys.size())
- I think the exception was triggered by writing the debug message before reading ceph config.
*PR* https://github.com...
- 06:49 AM Bug #21149 (Rejected): SubsystemMap.h: 62: FAILED assert(sub < m_subsys.size())
- When I run the Hadoop write test, the following exception occurs (not 100% of the time):
/clove/vm/renhw/ceph/rpmbuild/BUILD/ce...
- 03:34 AM Bug #21070 (Fix Under Review): MDS: MDS is laggy or crashed When deleting a large number of files
- https://github.com/ceph/ceph/pull/17291
- 03:17 AM Bug #21070: MDS: MDS is laggy or crashed When deleting a large number of files
- For a seeky readdir on a large directory:
int fd = open("/mnt/ceph", O_RDONLY | O_DIRECTORY);
lseek(fd, xxxxxx, SEEK_SE...
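The "seeky readdir" reproducer above (open a directory and reposition the stream with lseek) can be mirrored from Python via the os module's raw syscall wrappers. The directory path here is a temp dir of my own, not the /mnt/ceph mount from the report, and only a rewind to offset 0 is shown, since non-zero directory offsets are filesystem-defined cookies:

```python
import os
import tempfile

# Create a small scratch directory to seek within.
d = tempfile.mkdtemp()
for name in ("a", "b", "c"):
    open(os.path.join(d, name), "w").close()

# Open the directory itself, as the C reproducer does, then lseek on its fd.
fd = os.open(d, os.O_RDONLY | os.O_DIRECTORY)
try:
    # SEEK_SET to 0 rewinds the directory stream; it is the only offset
    # that is portable across filesystems.
    pos = os.lseek(fd, 0, os.SEEK_SET)
    print(pos)  # 0
finally:
    os.close(fd)
```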
08/26/2017
- 02:29 AM Bug #21070: MDS: MDS is laggy or crashed When deleting a large number of files
- Your patch solves this problem; I tested it twice today. :-)
My modification was just to verify the dirfrag offset cau...
08/25/2017
- 09:29 PM Bug #20535 (Resolved): mds segmentation fault ceph_lock_state_t::get_overlapping_locks
- Webert, the backport is merged so I'm marking this as resolved. If you experience this particular issue again, please...
- 09:27 PM Backport #20564 (Resolved): jewel: mds segmentation fault ceph_lock_state_t::get_overlapping_locks
- 09:22 PM Bug #20596 (Fix Under Review): MDSMonitor: obsolete `mds dump` and other deprecated mds commands
- https://github.com/ceph/ceph/pull/17266
- 01:33 PM Bug #21070: MDS: MDS is laggy or crashed When deleting a large number of files
- My patch alone does not work? Your change will make seeky readdir on a directory inefficient.
- 08:28 AM Bug #21070: MDS: MDS is laggy or crashed When deleting a large number of files
- In addition, I set "offset_hash" to 0 when offset_str is empty, which can solve this problem.
Modify the code as fol...
- 08:20 AM Bug #21070: MDS: MDS is laggy or crashed When deleting a large number of files
- OK, I will test it.
- 02:02 AM Bug #21091: StrayManager::truncate is broken
- yes
08/24/2017
- 09:52 PM Bug #20990 (Resolved): mds,mgr: add 'is_valid=false' when failed to parse caps
- 09:52 PM Backport #21047 (Resolved): luminous: mds,mgr: add 'is_valid=false' when failed to parse caps
- 04:58 PM Backport #21047: luminous: mds,mgr: add 'is_valid=false' when failed to parse caps
- https://github.com/ceph/ceph/pull/17240
- 09:52 PM Bug #21064 (Resolved): FSCommands: missing wait for osdmap writeable + propose
- 09:52 PM Backport #21101 (Resolved): luminous: FSCommands: missing wait for osdmap writeable + propose
- 04:51 PM Backport #21101: luminous: FSCommands: missing wait for osdmap writeable + propose
- https://github.com/ceph/ceph/pull/17238
- 04:48 PM Backport #21101 (Resolved): luminous: FSCommands: missing wait for osdmap writeable + propose
- 09:51 PM Bug #21065 (Resolved): client: UserPerm delete with supp. groups allocated by malloc generates va...
- 03:49 AM Bug #21065 (Pending Backport): client: UserPerm delete with supp. groups allocated by malloc gene...
- 09:51 PM Backport #21100 (Resolved): luminous: client: UserPerm delete with supp. groups allocated by mall...
- 04:51 PM Backport #21100: luminous: client: UserPerm delete with supp. groups allocated by malloc generate...
- https://github.com/ceph/ceph/pull/17237
- 04:46 PM Backport #21100 (Resolved): luminous: client: UserPerm delete with supp. groups allocated by mall...
- 09:51 PM Bug #21078 (Resolved): df hangs in ceph-fuse
- 03:49 AM Bug #21078 (Pending Backport): df hangs in ceph-fuse
- 09:51 PM Backport #21099 (Resolved): luminous: client: df hangs in ceph-fuse
- 04:45 PM Backport #21099: luminous: client: df hangs in ceph-fuse
- https://github.com/ceph/ceph/pull/17236
- 04:40 PM Backport #21099 (Resolved): luminous: client: df hangs in ceph-fuse
- 09:51 PM Bug #21082 (Resolved): client: the client_lock is not taken for Client::getcwd
- 03:50 AM Bug #21082 (Pending Backport): client: the client_lock is not taken for Client::getcwd
- 09:50 PM Backport #21098 (Resolved): luminous: client: the client_lock is not taken for Client::getcwd
- 04:43 PM Backport #21098: luminous: client: the client_lock is not taken for Client::getcwd
- https://github.com/ceph/ceph/pull/17235
- 04:37 PM Backport #21098 (Resolved): luminous: client: the client_lock is not taken for Client::getcwd
- 07:05 PM Bug #21091: StrayManager::truncate is broken
- This only affects deletions of snapshotted files right?
- 09:10 AM Bug #21091 (Fix Under Review): StrayManager::truncate is broken
- https://github.com/ceph/ceph/pull/17219
- 08:56 AM Bug #21091 (Resolved): StrayManager::truncate is broken
- 05:23 PM Backport #21114 (Resolved): luminous: qa: FS_DEGRADED spurious health warnings in some sub-suites
- https://github.com/ceph/ceph/pull/17474
- 05:23 PM Backport #21113 (Resolved): jewel: get_quota_root sends lookupname op for every buffered write
- https://github.com/ceph/ceph/pull/17396
- 05:23 PM Backport #21112 (Resolved): luminous: get_quota_root sends lookupname op for every buffered write
- https://github.com/ceph/ceph/pull/17473
- 05:23 PM Backport #21107 (Resolved): luminous: fs: client/mds has wrong check to clear S_ISGID on chown
- https://github.com/ceph/ceph/pull/17471
- 05:22 PM Backport #21103 (Resolved): luminous: client: missing space in some client debug log messages
- https://github.com/ceph/ceph/pull/17469
- 10:49 AM Bug #21070: MDS: MDS is laggy or crashed When deleting a large number of files
- please try the attached patch
- 08:45 AM Bug #21070: MDS: MDS is laggy or crashed When deleting a large number of files
- The address of *dn (mds.0.server dn1-10x600000000000099) is overflowed,
but I have not found the reason.
- 08:42 AM Bug #21070: MDS: MDS is laggy or crashed When deleting a large number of files
- I added some print debugging information, and it can be reproduced:
1.The right to print...
08/23/2017
- 08:52 PM Bug #20535: mds segmentation fault ceph_lock_state_t::get_overlapping_locks
- Webert Lima wrote:
> My active MDS had committed suicide due to "dne in mds map" (this is happening a lot but I don'...
- 08:18 PM Bug #21065 (Fix Under Review): client: UserPerm delete with supp. groups allocated by malloc gene...
- https://github.com/ceph/ceph/pull/17204
- 07:43 PM Bug #21082 (Fix Under Review): client: the client_lock is not taken for Client::getcwd
- https://github.com/ceph/ceph/pull/17205
- 03:57 PM Bug #21082 (Resolved): client: the client_lock is not taken for Client::getcwd
- https://github.com/ceph/ceph/blob/db16d50cc56f5221d7bcdb28a29d5e0a456cba94/src/client/Client.cc#L9387-L9425
We als...
- 06:13 PM Feature #16016 (Resolved): Populate DamageTable from forward scrub
- 06:12 PM Backport #20294 (Resolved): jewel: Populate DamageTable from forward scrub
- 06:12 PM Feature #18509 (Resolved): MDS: damage reporting by ino number is useless
- 06:12 PM Backport #19679 (Resolved): jewel: MDS: damage reporting by ino number is useless
- 06:10 PM Bug #19291 (Resolved): mds: log rotation doesn't work if mds has respawned
- 06:10 PM Backport #19466 (Resolved): jewel: mds: log rotation doesn't work if mds has respawned
- 05:57 PM Cleanup #21069 (Pending Backport): client: missing space in some client debug log messages
- 02:51 AM Cleanup #21069: client: missing space in some client debug log messages
- *PR*: https://github.com/ceph/ceph/pull/17175
- 02:45 AM Cleanup #21069 (Resolved): client: missing space in some client debug log messages
- 2017-08-11 19:05:17.344361 7fb87b1eb700 20 client.15557 may_delete0x10000000522.head(faked_ino=0 ref=3 ll_ref=0 cap_r...
- 04:52 PM Bug #21064 (Pending Backport): FSCommands: missing wait for osdmap writeable + propose
- 04:06 PM Bug #21083 (New): client: clean up header to isolate real public methods and entry points for cli...
- With the recent revelation that the client_lock was not locked for Client::getcwd [1] and other history of missing lo...
- 02:10 PM Bug #21081 (Duplicate): mon: get writeable osdmap for added data pool
- 02:08 PM Bug #21081 (Duplicate): mon: get writeable osdmap for added data pool
- https://github.com/ceph/ceph/pull/17163
- 02:08 PM Bug #20945 (Pending Backport): get_quota_root sends lookupname op for every buffered write
- 02:06 PM Bug #21004 (Pending Backport): fs: client/mds has wrong check to clear S_ISGID on chown
- 01:55 PM Bug #21078 (Fix Under Review): df hangs in ceph-fuse
- https://github.com/ceph/ceph/pull/17199
- 01:49 PM Bug #21078: df hangs in ceph-fuse
- yep. mon says:...
- 01:48 PM Bug #21078: df hangs in ceph-fuse
- Loops like this:...
- 01:42 PM Bug #21078 (Resolved): df hangs in ceph-fuse
- See "[ceph-users] ceph-fuse hanging on df with ceph luminous >= 12.1.3".
The filesystem works normally, except for...
- 01:51 PM Bug #20892 (Pending Backport): qa: FS_DEGRADED spurious health warnings in some sub-suites
- 11:12 AM Backport #21067: jewel: MDS integer overflow fix
- OK, backport staged (see description)
- 11:11 AM Backport #21067 (In Progress): jewel: MDS integer overflow fix
- 11:09 AM Backport #21067: jewel: MDS integer overflow fix
- h3. description
Please backport commit 0d74334332fb70212fc71f1130e886952920038d (mds: use client_t instead of int ...
- 06:47 AM Bug #19755 (Resolved): MDS became unresponsive when truncating a very large file
- 06:42 AM Backport #20025 (Resolved): jewel: MDS became unresponsive when truncating a very large file
- 04:21 AM Bug #21071: qa: test_misc creates metadata pool with dummy object resulting in WRN: POOL_APP_NOT_...
- Doug, please take this one.
- 04:20 AM Bug #21071 (Resolved): qa: test_misc creates metadata pool with dummy object resulting in WRN: PO...
- ...
- 03:32 AM Bug #21070: MDS: MDS is laggy or crashed When deleting a large number of files
- I open the log to try to view the problem where the log information is as follows:...
- 03:06 AM Bug #21070 (Resolved): MDS: MDS is laggy or crashed When deleting a large number of files
- We plan to use mdtest to create on the order of 1,000,000 (100w) files in a directory mounted via ceph-fuse; the command is as follo...
08/22/2017
- 10:02 PM Backport #21067 (Resolved): jewel: MDS integer overflow fix
- https://github.com/ceph/ceph/pull/17188
- 09:44 PM Bug #21066 (New): qa: racy test_export_pin check for export_targets
- ...
- 08:59 PM Bug #21065: client: UserPerm delete with supp. groups allocated by malloc generates valgrind error
- We'll need to convert the UserPerm constructor and such to use malloc/free. ceph_userperm_new can be called from C co...
- 08:30 PM Bug #21065 (Resolved): client: UserPerm delete with supp. groups allocated by malloc generates va...
- ...
- 07:13 PM Bug #21064 (Fix Under Review): FSCommands: missing wait for osdmap writeable + propose
- https://github.com/ceph/ceph/pull/17163
- 07:09 PM Bug #21064 (Resolved): FSCommands: missing wait for osdmap writeable + propose
- ...
- 04:04 PM Backport #21047: luminous: mds,mgr: add 'is_valid=false' when failed to parse caps
- Nathan Cutler wrote:
> Patrick, do you mean that the following three PRs should be backported in a single PR targeti...
- 07:09 AM Backport #21047: luminous: mds,mgr: add 'is_valid=false' when failed to parse caps
- Patrick, do you mean that the following three PRs should be backported in a single PR targeting luminous?
* https:...
- 12:36 PM Bug #16881: RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
- Note: the failure is transient (occurred in 2 out of 5 runs so far).
- 11:37 AM Bug #16881: RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
- Hello CephFS developers, I am reproducing this bug in the latest jewel integration branch. Here are the prime suspect...
08/21/2017
- 09:41 PM Backport #21047: luminous: mds,mgr: add 'is_valid=false' when failed to parse caps
- Nathan, the fix for http://tracker.ceph.com/issues/21027 should also make it into Luminous with this backport. I'm go...
- 04:13 PM Backport #21047 (Resolved): luminous: mds,mgr: add 'is_valid=false' when failed to parse caps
- 08:53 PM Bug #21058 (New): mds: remove UNIX file permissions binary dependency
- The MDS has various file permission/type bits pulled from UNIX headers. These could be different depending on what sy...
- 07:46 PM Bug #20988: client: dual client segfault with racing ceph_shutdown
- Moving this to main "Ceph" project as it looks more like a problem in the AdminSocket code. The thing seems to mainly...
- 06:13 PM Bug #20988: client: dual client segfault with racing ceph_shutdown
- Here's a testcase that seems to trigger it fairly reliably. You may have to run it a few times to get it to crash but...
- 05:03 PM Bug #20988: client: dual client segfault with racing ceph_shutdown
- Correct. I'll see if I can roll up a testcase for this when I get a few mins.
- 04:50 PM Bug #20988: client: dual client segfault with racing ceph_shutdown
- Jeff, just confirming this bug is with two client instances and not one instance with two threads?
- 02:59 PM Bug #21007 (Resolved): The ceph fs set mds_max command must be updated
- Patch merged: https://github.com/ceph/ceph/pull/17044
- 01:49 PM Feature #18490: client: implement delegation support in userland cephfs
- The latest set has timeout support that basically does a client->unmount() on the thing. With the patches for this bu...
- 11:01 AM Feature #18490: client: implement delegation support in userland cephfs
- For the clean-ish shutdown case, it would be neat to have a common code path with the -EBLACKLISTED handling (see Cli...
- 01:43 PM Bug #21025: racy is_mounted() checks in libcephfs
- PR is here:
https://github.com/ceph/ceph/pull/17095
- 01:40 PM Bug #21004 (Fix Under Review): fs: client/mds has wrong check to clear S_ISGID on chown
- 11:41 AM Bug #20535: mds segmentation fault ceph_lock_state_t::get_overlapping_locks
- Reporting in, I've had the first incident after the version upgrade.
My active MDS had committed suicide due to "d...
- 09:03 AM Bug #20892: qa: FS_DEGRADED spurious health warnings in some sub-suites
- kcephfs suite has similar issue:
http://pulpito.ceph.com/teuthology-2017-08-19_05:20:01-kcephfs-luminous-testing-bas...
08/17/2017
- 06:41 PM Feature #19109 (Resolved): Use data pool's 'df' for statfs instead of global stats, if there is o...
- Oh, oops. I forgot I merged this into luminous. Thanks Doug.
- 06:22 PM Feature #19109: Use data pool's 'df' for statfs instead of global stats, if there is only one dat...
- There's no need to wait for the kernel client since the message encoding is versioned. This has already been merged i...
- 06:14 PM Feature #19109 (Pending Backport): Use data pool's 'df' for statfs instead of global stats, if th...
- Waiting for
https://github.com/ceph/ceph-client/commit/b7f94d6a95dfe2399476de1e0d0a7c15c01611d0
to be merged up...
- 03:15 PM Bug #21025 (Resolved): racy is_mounted() checks in libcephfs
- libcephfs.cc has a bunch of is_mounted checks like this in it:...
- 03:02 PM Feature #18490: client: implement delegation support in userland cephfs
- Patrick Donnelly wrote:
> here "client" means Ganesha. What about how does Ganesha handle its client not releasing...
08/16/2017
- 11:08 PM Feature #18490: client: implement delegation support in userland cephfs
- Jeff Layton wrote:
> The main work to be done at this point is handling clients that don't return the delegation in ...
- 12:57 PM Feature #18490: client: implement delegation support in userland cephfs
- I've been working on this for the last week or so, so this is a good place to pause and provide an update:
I have ...
- 09:47 PM Bug #20990 (Pending Backport): mds,mgr: add 'is_valid=false' when failed to parse caps
- 06:48 PM Bug #21014 (Resolved): fs: reduce number of helper debug messages at level 5 for client
- I think we want just the initial log message for each ll_ operation and not the helpers (e.g. _rmdir).
See: http://...
- 06:23 PM Bug #21004: fs: client/mds has wrong check to clear S_ISGID on chown
- https://github.com/ceph/ceph/pull/17053
- 02:58 AM Bug #21004: fs: client/mds has wrong check to clear S_ISGID on chown
- The logic is from kernel_src/fs/attr.c...
- 09:54 AM Bug #21007 (Fix Under Review): The ceph fs set mds_max command must be updated
- Created this PR: https://github.com/ceph/ceph/pull/17044
- 08:14 AM Bug #21007 (Resolved): The ceph fs set mds_max command must be updated
- Copied from Bugzilla:
Ramakrishnan Periyasamy 2017-08-16 09:14:21 CEST
Description of problem:
Upstream docume...
- 01:20 AM Bug #21002 (Closed): set option "mon_pool_quota_warn_threshold && mon_pool_quota_crit_threshold" ...
08/15/2017
- 09:58 PM Bug #21004: fs: client/mds has wrong check to clear S_ISGID on chown
- Well actually the test above fails because the chown was a no-op due to an earlier chown failure. In any case I've fo...
- 09:41 PM Bug #21004 (In Progress): fs: client/mds has wrong check to clear S_ISGID on chown
- 09:41 PM Bug #21004 (Resolved): fs: client/mds has wrong check to clear S_ISGID on chown
- Reported in: https://bugzilla.redhat.com/show_bug.cgi?id=1480182
This causes the failure in test 88 from https://b... - 09:12 AM Bug #21002: set option "mon_pool_quota_warn_threshold && mon_pool_quota_crit_threshold" fail
- Please close it; no error.
- 08:59 AM Bug #21002: set option "mon_pool_quota_warn_threshold && mon_pool_quota_crit_threshold" fail
- ...
- 08:58 AM Bug #21002 (Closed): set option "mon_pool_quota_warn_threshold && mon_pool_quota_crit_threshold" ...
- err:...
08/13/2017
- 03:28 AM Bug #20990 (Resolved): mds,mgr: add 'is_valid=false' when failed to parse caps
- mds,mgr: add 'is_valid=false' when failed to parse caps.
Backport needed for the PRs:
https://github.com/ceph/cep...