Activity
From 07/28/2017 to 08/26/2017
08/26/2017
- 02:29 AM Bug #21070: MDS: MDS is laggy or crashed When deleting a large number of files
- Your patch can solve this problem; I tested it twice today. :-)
My modification is just to verify the dirfrag offset cau...
08/25/2017
- 09:29 PM Bug #20535 (Resolved): mds segmentation fault ceph_lock_state_t::get_overlapping_locks
- Webert, the backport is merged so I'm marking this as resolved. If you experience this particular issue again, please...
- 09:27 PM Backport #20564 (Resolved): jewel: mds segmentation fault ceph_lock_state_t::get_overlapping_locks
- 09:22 PM Bug #20596 (Fix Under Review): MDSMonitor: obsolete `mds dump` and other deprecated mds commands
- https://github.com/ceph/ceph/pull/17266
- 01:33 PM Bug #21070: MDS: MDS is laggy or crashed When deleting a large number of files
- Does my patch alone not work? Your change will make seeky readdir on directories inefficient.
- 08:28 AM Bug #21070: MDS: MDS is laggy or crashed When deleting a large number of files
- In addition, I set "offset_hash" to 0 when offset_str is empty, which can solve this problem.
Modify the code as fol...
- 08:20 AM Bug #21070: MDS: MDS is laggy or crashed When deleting a large number of files
- OK, I will test it.
- 02:02 AM Bug #21091: StrayManager::truncate is broken
- yes
08/24/2017
- 09:52 PM Bug #20990 (Resolved): mds,mgr: add 'is_valid=false' when failed to parse caps
- 09:52 PM Backport #21047 (Resolved): luminous: mds,mgr: add 'is_valid=false' when failed to parse caps
- 04:58 PM Backport #21047: luminous: mds,mgr: add 'is_valid=false' when failed to parse caps
- https://github.com/ceph/ceph/pull/17240
- 09:52 PM Bug #21064 (Resolved): FSCommands: missing wait for osdmap writeable + propose
- 09:52 PM Backport #21101 (Resolved): luminous: FSCommands: missing wait for osdmap writeable + propose
- 04:51 PM Backport #21101: luminous: FSCommands: missing wait for osdmap writeable + propose
- https://github.com/ceph/ceph/pull/17238
- 04:48 PM Backport #21101 (Resolved): luminous: FSCommands: missing wait for osdmap writeable + propose
- 09:51 PM Bug #21065 (Resolved): client: UserPerm delete with supp. groups allocated by malloc generates va...
- 03:49 AM Bug #21065 (Pending Backport): client: UserPerm delete with supp. groups allocated by malloc gene...
- 09:51 PM Backport #21100 (Resolved): luminous: client: UserPerm delete with supp. groups allocated by mall...
- 04:51 PM Backport #21100: luminous: client: UserPerm delete with supp. groups allocated by malloc generate...
- https://github.com/ceph/ceph/pull/17237
- 04:46 PM Backport #21100 (Resolved): luminous: client: UserPerm delete with supp. groups allocated by mall...
- 09:51 PM Bug #21078 (Resolved): df hangs in ceph-fuse
- 03:49 AM Bug #21078 (Pending Backport): df hangs in ceph-fuse
- 09:51 PM Backport #21099 (Resolved): luminous: client: df hangs in ceph-fuse
- 04:45 PM Backport #21099: luminous: client: df hangs in ceph-fuse
- https://github.com/ceph/ceph/pull/17236
- 04:40 PM Backport #21099 (Resolved): luminous: client: df hangs in ceph-fuse
- 09:51 PM Bug #21082 (Resolved): client: the client_lock is not taken for Client::getcwd
- 03:50 AM Bug #21082 (Pending Backport): client: the client_lock is not taken for Client::getcwd
- 09:50 PM Backport #21098 (Resolved): luminous: client: the client_lock is not taken for Client::getcwd
- 04:43 PM Backport #21098: luminous: client: the client_lock is not taken for Client::getcwd
- https://github.com/ceph/ceph/pull/17235
- 04:37 PM Backport #21098 (Resolved): luminous: client: the client_lock is not taken for Client::getcwd
- 07:05 PM Bug #21091: StrayManager::truncate is broken
- This only affects deletions of snapshotted files right?
- 09:10 AM Bug #21091 (Fix Under Review): StrayManager::truncate is broken
- https://github.com/ceph/ceph/pull/17219
- 08:56 AM Bug #21091 (Resolved): StrayManager::truncate is broken
- 05:23 PM Backport #21114 (Resolved): luminous: qa: FS_DEGRADED spurious health warnings in some sub-suites
- https://github.com/ceph/ceph/pull/17474
- 05:23 PM Backport #21113 (Resolved): jewel: get_quota_root sends lookupname op for every buffered write
- https://github.com/ceph/ceph/pull/17396
- 05:23 PM Backport #21112 (Resolved): luminous: get_quota_root sends lookupname op for every buffered write
- https://github.com/ceph/ceph/pull/17473
- 05:23 PM Backport #21107 (Resolved): luminous: fs: client/mds has wrong check to clear S_ISGID on chown
- https://github.com/ceph/ceph/pull/17471
- 05:22 PM Backport #21103 (Resolved): luminous: client: missing space in some client debug log messages
- https://github.com/ceph/ceph/pull/17469
- 10:49 AM Bug #21070: MDS: MDS is laggy or crashed When deleting a large number of files
- please try the attached patch
- 08:45 AM Bug #21070: MDS: MDS is laggy or crashed When deleting a large number of files
- The address of *dn(mds.0.server dn1-10x600000000000099) is overflowed,
but I have not found the reason.
- 08:42 AM Bug #21070: MDS: MDS is laggy or crashed When deleting a large number of files
- I added some debug print statements, and the problem can be reproduced:
1. The right to print...
08/23/2017
- 08:52 PM Bug #20535: mds segmentation fault ceph_lock_state_t::get_overlapping_locks
- Webert Lima wrote:
> My active MDS had committed suicide due to "dne in mds map" (this is happening a lot but I don'...
- 08:18 PM Bug #21065 (Fix Under Review): client: UserPerm delete with supp. groups allocated by malloc gene...
- https://github.com/ceph/ceph/pull/17204
- 07:43 PM Bug #21082 (Fix Under Review): client: the client_lock is not taken for Client::getcwd
- https://github.com/ceph/ceph/pull/17205
- 03:57 PM Bug #21082 (Resolved): client: the client_lock is not taken for Client::getcwd
- https://github.com/ceph/ceph/blob/db16d50cc56f5221d7bcdb28a29d5e0a456cba94/src/client/Client.cc#L9387-L9425
We als...
- 06:13 PM Feature #16016 (Resolved): Populate DamageTable from forward scrub
- 06:12 PM Backport #20294 (Resolved): jewel: Populate DamageTable from forward scrub
- 06:12 PM Feature #18509 (Resolved): MDS: damage reporting by ino number is useless
- 06:12 PM Backport #19679 (Resolved): jewel: MDS: damage reporting by ino number is useless
- 06:10 PM Bug #19291 (Resolved): mds: log rotation doesn't work if mds has respawned
- 06:10 PM Backport #19466 (Resolved): jewel: mds: log rotation doesn't work if mds has respawned
- 05:57 PM Cleanup #21069 (Pending Backport): client: missing space in some client debug log messages
- 02:51 AM Cleanup #21069: client: missing space in some client debug log messages
- *PR*: https://github.com/ceph/ceph/pull/17175
- 02:45 AM Cleanup #21069 (Resolved): client: missing space in some client debug log messages
- 2017-08-11 19:05:17.344361 7fb87b1eb700 20 client.15557 may_delete0x10000000522.head(faked_ino=0 ref=3 ll_ref=0 cap_r...
- 04:52 PM Bug #21064 (Pending Backport): FSCommands: missing wait for osdmap writeable + propose
- 04:06 PM Bug #21083 (New): client: clean up header to isolate real public methods and entry points for cli...
- With the recent revelation that the client_lock was not locked for Client::getcwd [1] and other history of missing lo...
- 02:10 PM Bug #21081 (Duplicate): mon: get writeable osdmap for added data pool
- 02:08 PM Bug #21081 (Duplicate): mon: get writeable osdmap for added data pool
- https://github.com/ceph/ceph/pull/17163
- 02:08 PM Bug #20945 (Pending Backport): get_quota_root sends lookupname op for every buffered write
- 02:06 PM Bug #21004 (Pending Backport): fs: client/mds has wrong check to clear S_ISGID on chown
- 01:55 PM Bug #21078 (Fix Under Review): df hangs in ceph-fuse
- https://github.com/ceph/ceph/pull/17199
- 01:49 PM Bug #21078: df hangs in ceph-fuse
- yep. mon says:...
- 01:48 PM Bug #21078: df hangs in ceph-fuse
- Loops like this:...
- 01:42 PM Bug #21078 (Resolved): df hangs in ceph-fuse
- See "[ceph-users] ceph-fuse hanging on df with ceph luminous >= 12.1.3".
The filesystem works normally, except for...
- 01:51 PM Bug #20892 (Pending Backport): qa: FS_DEGRADED spurious health warnings in some sub-suites
- 11:12 AM Backport #21067: jewel: MDS integer overflow fix
- OK, backport staged (see description)
- 11:11 AM Backport #21067 (In Progress): jewel: MDS integer overflow fix
- 11:09 AM Backport #21067: jewel: MDS integer overflow fix
- h3. description
Please backport commit 0d74334332fb70212fc71f1130e886952920038d (mds: use client_t instead of int ...
- 06:47 AM Bug #19755 (Resolved): MDS became unresponsive when truncating a very large file
- 06:42 AM Backport #20025 (Resolved): jewel: MDS became unresponsive when truncating a very large file
- 04:21 AM Bug #21071: qa: test_misc creates metadata pool with dummy object resulting in WRN: POOL_APP_NOT_...
- Doug, please take this one.
- 04:20 AM Bug #21071 (Resolved): qa: test_misc creates metadata pool with dummy object resulting in WRN: PO...
- ...
- 03:32 AM Bug #21070: MDS: MDS is laggy or crashed When deleting a large number of files
- I opened the log to try to find the problem; the log information is as follows:...
- 03:06 AM Bug #21070 (Resolved): MDS: MDS is laggy or crashed When deleting a large number of files
- We plan to use mdtest to create on the order of one million (100w) files in a ceph-fuse mounted directory; the command is as follo...
08/22/2017
- 10:02 PM Backport #21067 (Resolved): jewel: MDS integer overflow fix
- https://github.com/ceph/ceph/pull/17188
- 09:44 PM Bug #21066 (New): qa: racy test_export_pin check for export_targets
- ...
- 08:59 PM Bug #21065: client: UserPerm delete with supp. groups allocated by malloc generates valgrind error
- We'll need to convert the UserPerm constructor and such to use malloc/free. ceph_userperm_new can be called from C co...
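A minimal sketch of the allocator mismatch being described here (illustrative only; the struct and function names are hypothetical, not the actual Ceph UserPerm code):
<pre>
// Illustrative sketch only: valgrind's "Mismatched free() / delete / delete []"
// error comes from allocating the supplementary group list with malloc() in a
// C-callable entry point (such as ceph_userperm_new()) and releasing it with
// delete[] on the C++ side. Keeping both sides on malloc()/free() removes it.
#include <cstdlib>
#include <cstring>
#include <sys/types.h>

struct userperm_sketch {
    uid_t uid;
    gid_t gid;
    gid_t *gid_list;  // allocated with malloc() below
    int ngids;
    ~userperm_sketch() { free(gid_list); }  // pairs with malloc(); delete[] here would be the bug
};

// Hypothetical C-style constructor, mirroring how a C caller would allocate.
static userperm_sketch *userperm_new_sketch(uid_t uid, gid_t gid,
                                            int ngids, const gid_t *gids) {
    auto *p = new userperm_sketch{uid, gid, nullptr, ngids};
    if (ngids > 0 && gids) {
        p->gid_list = static_cast<gid_t *>(malloc(ngids * sizeof(gid_t)));
        memcpy(p->gid_list, gids, ngids * sizeof(gid_t));
    }
    return p;
}

int main() {
    const gid_t groups[] = {100, 1000};
    userperm_sketch *perm = userperm_new_sketch(1000, 1000, 2, groups);
    delete perm;  // destructor free()s the malloc()'d list; valgrind stays quiet
    return 0;
}
</pre>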
- 08:30 PM Bug #21065 (Resolved): client: UserPerm delete with supp. groups allocated by malloc generates va...
- ...
- 07:13 PM Bug #21064 (Fix Under Review): FSCommands: missing wait for osdmap writeable + propose
- https://github.com/ceph/ceph/pull/17163
- 07:09 PM Bug #21064 (Resolved): FSCommands: missing wait for osdmap writeable + propose
- ...
- 04:04 PM Backport #21047: luminous: mds,mgr: add 'is_valid=false' when failed to parse caps
- Nathan Cutler wrote:
> Patrick, do you mean that the following three PRs should be backported in a single PR targeti...
- 07:09 AM Backport #21047: luminous: mds,mgr: add 'is_valid=false' when failed to parse caps
- Patrick, do you mean that the following three PRs should be backported in a single PR targeting luminous?
* https:...
- 12:36 PM Bug #16881: RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
- Note: the failure is transient (occurred in 2 out of 5 runs so far).
- 11:37 AM Bug #16881: RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
- Hello CephFS developers, I am reproducing this bug in the latest jewel integration branch. Here are the prime suspect...
08/21/2017
- 09:41 PM Backport #21047: luminous: mds,mgr: add 'is_valid=false' when failed to parse caps
- Nathan, the fix for http://tracker.ceph.com/issues/21027 should also make it into Luminous with this backport. I'm go...
- 04:13 PM Backport #21047 (Resolved): luminous: mds,mgr: add 'is_valid=false' when failed to parse caps
- 08:53 PM Bug #21058 (New): mds: remove UNIX file permissions binary dependency
- The MDS has various file permission/type bits pulled from UNIX headers. These could be different depending on what sy...
- 07:46 PM Bug #20988: client: dual client segfault with racing ceph_shutdown
- Moving this to main "Ceph" project as it looks more like a problem in the AdminSocket code. The thing seems to mainly...
- 06:13 PM Bug #20988: client: dual client segfault with racing ceph_shutdown
- Here's a testcase that seems to trigger it fairly reliably. You may have to run it a few times to get it to crash but...
- 05:03 PM Bug #20988: client: dual client segfault with racing ceph_shutdown
- Correct. I'll see if I can roll up a testcase for this when I get a few mins.
- 04:50 PM Bug #20988: client: dual client segfault with racing ceph_shutdown
- Jeff, just confirming this bug is with two client instances and not one instance with two threads?
- 02:59 PM Bug #21007 (Resolved): The ceph fs set mds_max command must be updated
- Patch merged: https://github.com/ceph/ceph/pull/17044
- 01:49 PM Feature #18490: client: implement delegation support in userland cephfs
- The latest set has timeout support that basically does a client->unmount() on the thing. With the patches for this bu...
- 11:01 AM Feature #18490: client: implement delegation support in userland cephfs
- For the clean-ish shutdown case, it would be neat to have a common code path with the -EBLACKLISTED handling (see Cli...
- 01:43 PM Bug #21025: racy is_mounted() checks in libcephfs
- PR is here:
https://github.com/ceph/ceph/pull/17095
- 01:40 PM Bug #21004 (Fix Under Review): fs: client/mds has wrong check to clear S_ISGID on chown
- 11:41 AM Bug #20535: mds segmentation fault ceph_lock_state_t::get_overlapping_locks
- Reporting in, I've had the first incident after the version upgrade.
My active MDS had committed suicide due to "d...
- 09:03 AM Bug #20892: qa: FS_DEGRADED spurious health warnings in some sub-suites
- kcephfs suite has similar issue:
http://pulpito.ceph.com/teuthology-2017-08-19_05:20:01-kcephfs-luminous-testing-bas...
08/17/2017
- 06:41 PM Feature #19109 (Resolved): Use data pool's 'df' for statfs instead of global stats, if there is o...
- Oh, oops. I forgot I merged this into luminous. Thanks Doug.
- 06:22 PM Feature #19109: Use data pool's 'df' for statfs instead of global stats, if there is only one dat...
- There's no need to wait for the kernel client since the message encoding is versioned. This has already been merged i...
- 06:14 PM Feature #19109 (Pending Backport): Use data pool's 'df' for statfs instead of global stats, if th...
- Waiting for
https://github.com/ceph/ceph-client/commit/b7f94d6a95dfe2399476de1e0d0a7c15c01611d0
to be merged up...
- 03:15 PM Bug #21025 (Resolved): racy is_mounted() checks in libcephfs
- libcephfs.cc has a bunch of is_mounted checks like this in it:...
- 03:02 PM Feature #18490: client: implement delegation support in userland cephfs
- Patrick Donnelly wrote:
> here "client" means Ganesha. What about how does Ganesha handle its client not releasing...
08/16/2017
- 11:08 PM Feature #18490: client: implement delegation support in userland cephfs
- Jeff Layton wrote:
> The main work to be done at this point is handling clients that don't return the delegation in ...
- 12:57 PM Feature #18490: client: implement delegation support in userland cephfs
- I've been working on this for the last week or so, so this is a good place to pause and provide an update:
I have ...
- 09:47 PM Bug #20990 (Pending Backport): mds,mgr: add 'is_valid=false' when failed to parse caps
- 06:48 PM Bug #21014 (Resolved): fs: reduce number of helper debug messages at level 5 for client
I think we want just the initial log message for each ll_ operation and not the helpers (e.g. _rmdir).
See: http://...
- 06:23 PM Bug #21004: fs: client/mds has wrong check to clear S_ISGID on chown
- https://github.com/ceph/ceph/pull/17053
- 02:58 AM Bug #21004: fs: client/mds has wrong check to clear S_ISGID on chown
- The logic is from kernel_src/fs/attr.c...
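A hedged paraphrase of the kernel-style check being referenced (from memory of fs/attr.c, not the Ceph patch itself): on chown, S_ISGID should only be cleared when the group-execute bit is also set, because S_ISGID without S_IXGRP marks mandatory locking rather than a real setgid bit.
<pre>
#include <cstdio>
#include <sys/stat.h>

// Returns the mode bits left after a chown, kernel-style (sketch).
static mode_t mode_after_chown(mode_t mode) {
    mode &= ~S_ISUID;                                  // setuid is always cleared
    if ((mode & (S_ISGID | S_IXGRP)) == (S_ISGID | S_IXGRP))
        mode &= ~S_ISGID;                              // setgid cleared only with group-exec
    return mode;
}

int main() {
    std::printf("%o -> %o\n", 02755, mode_after_chown(02755)); // 2755 -> 755
    std::printf("%o -> %o\n", 02644, mode_after_chown(02644)); // 2644 kept (mandatory locking)
    return 0;
}
</pre>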
- 09:54 AM Bug #21007 (Fix Under Review): The ceph fs set mds_max command must be updated
- Created this PR: https://github.com/ceph/ceph/pull/17044
- 08:14 AM Bug #21007 (Resolved): The ceph fs set mds_max command must be updated
- Copied from Bugzilla:
Ramakrishnan Periyasamy 2017-08-16 09:14:21 CEST
Description of problem:
Upstream docume...
- 01:20 AM Bug #21002 (Closed): set option "mon_pool_quota_warn_threshold && mon_pool_quota_crit_threshold" ...
08/15/2017
- 09:58 PM Bug #21004: fs: client/mds has wrong check to clear S_ISGID on chown
- Well actually the test above fails because the chown was a no-op due to an earlier chown failure. In any case I've fo...
- 09:41 PM Bug #21004 (In Progress): fs: client/mds has wrong check to clear S_ISGID on chown
- 09:41 PM Bug #21004 (Resolved): fs: client/mds has wrong check to clear S_ISGID on chown
- Reported in: https://bugzilla.redhat.com/show_bug.cgi?id=1480182
This causes the failure in test 88 from https://b...
- 09:12 AM Bug #21002: set option "mon_pool_quota_warn_threshold && mon_pool_quota_crit_threshold" fail
- Please close it; no error.
- 08:59 AM Bug #21002: set option "mon_pool_quota_warn_threshold && mon_pool_quota_crit_threshold" fail
- ...
- 08:58 AM Bug #21002 (Closed): set option "mon_pool_quota_warn_threshold && mon_pool_quota_crit_threshold" ...
- err:...
08/13/2017
- 03:28 AM Bug #20990 (Resolved): mds,mgr: add 'is_valid=false' when failed to parse caps
- mds,mgr: add 'is_valid=false' when failed to parse caps.
Backport needed for the PRs:
https://github.com/ceph/cep...
08/12/2017
- 01:44 PM Bug #20988 (Resolved): client: dual client segfault with racing ceph_shutdown
- I have a testcase that I'm working on that has two threads, each with their own ceph_mount_info. If those threads end...
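A hedged sketch of the scenario described above (not the reporter's actual testcase): two threads, each with its own independent ceph_mount_info, racing through mount and ceph_shutdown(). The config search path and mount root are defaults/placeholders; build against libcephfs.
<pre>
#include <thread>
#include <cephfs/libcephfs.h>

static void mount_and_shutdown() {
    struct ceph_mount_info *cmount = nullptr;
    if (ceph_create(&cmount, nullptr) != 0)   // independent client instance
        return;
    ceph_conf_read_file(cmount, nullptr);     // default ceph.conf locations
    if (ceph_mount(cmount, "/") == 0)
        ceph_shutdown(cmount);                // the racing shutdown
    else
        ceph_release(cmount);                 // never mounted; just release
}

int main() {
    std::thread t1(mount_and_shutdown);
    std::thread t2(mount_and_shutdown);
    t1.join();
    t2.join();
    return 0;
}
</pre>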
08/11/2017
08/10/2017
- 02:27 PM Bug #20938: CephFS: concurrent access to file from multiple nodes blocks for seconds
- Ahh yeah, I remember seeing that in there a while back. I guess the danger is that we can end up instantiating an ino...
- 02:29 AM Bug #20938: CephFS: concurrent access to file from multiple nodes blocks for seconds
- What worries me is the comment in fuse_lowlevel.h...
- 12:27 PM Backport #20972: jewel ceph-fuse segfaults at mount time, assert in ceph::log::Log::stop
- Thanks, Dan! Jewel backport staged: https://github.com/ceph/ceph/pull/16963
- 12:26 PM Backport #20972 (In Progress): jewel ceph-fuse segfaults at mount time, assert in ceph::log::Log:...
- 12:25 PM Backport #20972: jewel ceph-fuse segfaults at mount time, assert in ceph::log::Log::stop
- h3. description
10.2.9 introduces a regression where ceph-fuse will segfault at mount time because of an attempt ...
- 11:52 AM Backport #20972: jewel ceph-fuse segfaults at mount time, assert in ceph::log::Log::stop
- Confirmed that 10.2.9 plus cbf18b1d80d214e4203e88637acf4b0a0a201ee7 does not segfault.
- 09:04 AM Backport #20972 (Resolved): jewel ceph-fuse segfaults at mount time, assert in ceph::log::Log::stop
- https://github.com/ceph/ceph/pull/16963
- 12:24 PM Bug #18157 (Pending Backport): ceph-fuse segfaults on daemonize
- 09:42 AM Bug #20945: get_quota_root sends lookupname op for every buffered write
- Could you also please add the luminous backport tag for this?
- 09:23 AM Bug #20945: get_quota_root sends lookupname op for every buffered write
- https://github.com/ceph/ceph/pull/16959
- 02:08 AM Bug #20945: get_quota_root sends lookupname op for every buffered write
- lookupname is for the following case:
directories /a and /b have non-default quotas
client A is writing /a/file
client ...
08/09/2017
- 04:39 PM Bug #20945: get_quota_root sends lookupname op for every buffered write
- This seems to work...
- 10:45 AM Bug #20945: get_quota_root sends lookupname op for every buffered write
- Thanks. You're right. Here's the trivial reproducer:...
- 08:46 AM Bug #20945: get_quota_root sends lookupname op for every buffered write
- Enabling quota and writing to an unlinked file can reproduce this easily. get_quota_root() uses a dentry in dn_set if it has...
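A hedged reproducer sketch based on the description above (the original reproducer text is truncated): on a ceph-fuse mount with quota enabled, create a file, unlink it, then keep issuing buffered writes through the still-open fd. The mount path and directory are placeholders.
<pre>
#include <fcntl.h>
#include <unistd.h>

int main() {
    const char *path = "/mnt/cephfs/quota_dir/unlinked_tmp";  // placeholder path
    int fd = open(path, O_CREAT | O_RDWR, 0644);
    if (fd < 0)
        return 1;
    unlink(path);                     // drop the dentry; the inode stays open
    char buf[4096] = {};
    for (int i = 0; i < 100000; ++i)
        write(fd, buf, sizeof(buf));  // each buffered write may trigger get_quota_root()
    close(fd);
    return 0;
}
</pre>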
- 01:57 PM Bug #20938: CephFS: concurrent access to file from multiple nodes blocks for seconds
- FUSE is the only caller of ->ll_lookup so a simpler fix might be to just change the mask field to 0 in the _lookup ca...
- 10:39 AM Bug #20938: CephFS: concurrent access to file from multiple nodes blocks for seconds
- This slowness is due to a limitation of the FUSE API. The attached patch is a workaround. (Not 100% sure it doesn't break a...
08/08/2017
- 05:35 PM Feature #19109: Use data pool's 'df' for statfs instead of global stats, if there is only one dat...
- Partially resolved by: https://github.com/ceph/ceph/commit/eabe6626141df3f1b253c880aa6cb852c8b7ac1d
- 02:25 PM Bug #20938: CephFS: concurrent access to file from multiple nodes blocks for seconds
- I'm only running the fuse client. I see the problem both on Jewel (10.2.9 servers + fuse client) and on Luminous RC ...
- 02:22 PM Bug #20938: CephFS: concurrent access to file from multiple nodes blocks for seconds
- I tried the latest Luminous RC + 4.12 kernel client. I got about 7000 opens/second in the two-node read-write case.
Did ...
- 02:00 PM Bug #20945: get_quota_root sends lookupname op for every buffered write
- Our user confirmed that without client-quota their job finishes quickly:...
- 01:46 PM Bug #20945 (Resolved): get_quota_root sends lookupname op for every buffered write
- We have a CAD use-case (hspice) which sees very slow buffered writes, apparently due to the quota code. (We haven't y...
08/07/2017
- 05:15 PM Feature #20885: add syntax for generating OSD/MDS auth caps for cephfs
- PR to master was https://github.com/ceph/ceph/pull/16761
- 03:33 PM Bug #20938 (New): CephFS: concurrent access to file from multiple nodes blocks for seconds
- When accessing the same file opened for read/write on multiple nodes via ceph-fuse, performance drops by about 3 orde...
08/05/2017
- 03:34 AM Bug #20852 (Resolved): hadoop on cephfs would report "Invalid argument" when mount on a sub direc...
- 03:33 AM Feature #20885 (Resolved): add syntax for generating OSD/MDS auth caps for cephfs
08/03/2017
- 09:13 PM Fix #20246 (Resolved): Make clog message on scrub errors friendlier.
- 09:11 PM Bug #20799 (Resolved): Races when multiple MDS boot at once
- 09:11 PM Bug #20806 (Resolved): kclient: fails to delete tree during thrashing w/ multimds
- 09:10 PM Bug #20892 (Resolved): qa: FS_DEGRADED spurious health warnings in some sub-suites
- 04:08 AM Bug #20892: qa: FS_DEGRADED spurious health warnings in some sub-suites
- https://github.com/ceph/ceph/pull/16772
- 04:05 AM Bug #20892 (Resolved): qa: FS_DEGRADED spurious health warnings in some sub-suites
- From: /ceph/teuthology-archive/pdonnell-2017-08-02_17:25:29-fs-wip-pdonnell-testing-20170802-distro-basic-smithi/1474...
- 09:09 PM Feature #20760 (Resolved): mds: add perf counters for all mds-to-mds messages
- 09:09 PM Bug #20889 (Resolved): qa: MDS_DAMAGED not whitelisted properly
- 08:36 PM Bug #20889: qa: MDS_DAMAGED not whitelisted properly
- https://github.com/ceph/ceph/pull/16768/
- 02:54 AM Bug #20535: mds segmentation fault ceph_lock_state_t::get_overlapping_locks
- Webert Lima wrote:
> Just upgraded the other 2 production clusters where the problem tends to happen frequently.
> ...
- 02:48 AM Bug #20535: mds segmentation fault ceph_lock_state_t::get_overlapping_locks
- Just upgraded the other 2 production clusters where the problem tends to happen frequently.
Will watch from now on.
- 02:44 AM Bug #20595 (Resolved): mds: export_pin should be included in `get subtrees` output
- 02:43 AM Bug #20731 (Resolved): "[ERR] : Health check failed: 1 mds daemon down (MDS_FAILED)" in upgrade:j...
08/02/2017
- 11:31 PM Bug #20889 (Resolved): qa: MDS_DAMAGED not whitelisted properly
- Due to d12c51ca9129213d53c25a00447af431083ad4c9, grep no longer whitelisted MDS_DAMAGED properly. qa/suites/fs/basic_...
- 03:43 PM Feature #20885 (Resolved): add syntax for generating OSD/MDS auth caps for cephfs
- Add a simpler method for generating MDS auth caps based on filesystem name.
https://bugzilla.redhat.com/show_bug.c...
- 03:36 AM Feature #20760 (Fix Under Review): mds: add perf counters for all mds-to-mds messages
- https://github.com/ceph/ceph/pull/16743
08/01/2017
- 10:54 AM Feature #20607 (Resolved): MDSMonitor: change "mds deactivate" to clearer "mds rejoin"
- 10:48 AM Backport #20026 (Resolved): kraken: cephfs: MDS became unresponsive when truncating a very large ...
- 07:20 AM Support #20788 (Closed): MDS report "failed to open ino 10007be02d9 err -61/0" and can not restar...
07/31/2017
- 10:42 PM Bug #20595 (Fix Under Review): mds: export_pin should be included in `get subtrees` output
- https://github.com/ceph/ceph/pull/16714
- 09:54 PM Feature #19230 (Resolved): Limit MDS deactivation to one at a time
- Mon enforces this since 2c08f58ee8353322a342ce043150aafc8dd9c381.
- 09:48 PM Bug #20731: "[ERR] : Health check failed: 1 mds daemon down (MDS_FAILED)" in upgrade:jewel-x-lumi...
- PR: https://github.com/ceph/ceph/pull/16713
- 08:57 PM Bug #20731 (In Progress): "[ERR] : Health check failed: 1 mds daemon down (MDS_FAILED)" in upgrad...
- Obviously this error is expected when restarting the MDS. We should whitelist the warning.
- 09:05 PM Subtask #20864: kill allow_multimds
- Removing allow_multimds seems reasonable. [Of course, the command should remain a deprecated no-op for deployment com...
- 06:05 PM Subtask #20864 (Resolved): kill allow_multimds
- At this point, allow_multimds is now the default. Under this proposal, its effect is exactly the same as setting max_...
- 10:44 AM Bug #20535: mds segmentation fault ceph_lock_state_t::get_overlapping_locks
- One of our production clusters upgraded.
Next one scheduled for next Wednesday, August 2nd.
07/30/2017
- 05:30 AM Bug #20852: hadoop on cephfs would report "Invalid argument" when mount on a sub directory
- https://github.com/ceph/ceph/pull/16671
07/29/2017
- 10:46 AM Bug #20852 (Resolved): hadoop on cephfs would report "Invalid argument" when mount on a sub direc...
- We have tested Hadoop on CephFS and HBase on CephFS,
and we got the following stack on HBase:
Failed to become active m...
- 02:47 AM Feature #20851 (New): cephfs fuse support "secret" option
- We know that the CephFS kernel mount supports the "secret" option,
example:...
07/28/2017
- 04:57 PM Bug #20805 (Resolved): qa: test_client_limits waiting for wrong health warning
- 04:57 PM Bug #20677 (Resolved): mds: abrt during migration
- 01:38 PM Bug #20806 (Fix Under Review): kclient: fails to delete tree during thrashing w/ multimds
- https://github.com/ceph/ceph/pull/16654
- 07:46 AM Bug #20806 (In Progress): kclient: fails to delete tree during thrashing w/ multimds
- It's caused by a bug in the "open inode by inode number" function.
- 07:33 AM Support #20788: MDS report "failed to open ino 10007be02d9 err -61/0" and can not restart success
- Now we have figured out the reason:
the MDS was killed by Docker when it reached its memory limit.
Thanks for your help!
- 07:11 AM Support #20788: MDS report "failed to open ino 10007be02d9 err -61/0" and can not restart success
- "failed to open ino" is a normal when mds is recovery. what do you mean "ceph can not restart"? mds crashed or mds hu...
- 06:16 AM Backport #20823 (Resolved): jewel: client::mkdirs not handle well when two clients send mkdir req...
- https://github.com/ceph/ceph/pull/20271
- 02:29 AM Bug #20566 (Resolved): "MDS health message (mds.0): Behind on trimming" in powercycle tests
- 12:30 AM Bug #20792: cephfs: ceph fs new is err when no default rbd pool
- Maybe my version has a problem; I will check it.
Thank you, John and Sage.