Activity
From 05/29/2016 to 06/27/2016
06/27/2016
- 08:01 PM Bug #16288 (In Progress): mds: `session evict` tell command blocks forever with async messenger (...
- Still no reproducer, but
https://github.com/ceph/ceph/pull/9971
may help.
- 01:44 PM Bug #16407: LibCephFS.UseUnmounted failed
- Can you update us? Where are you seeing the issue, and is there a new fix PR?
- 09:13 AM Bug #16042: MDS Deadlock on shutdown active rank while busy with metadata IO
- Could it be calling MDSDaemon::ms_handle_reset() via the following paths, like the async msgr?
One mds thread: ... -> Simpl...
- 03:44 AM Bug #16186: kclient: drops requests without poking system calls on reconnect
- there is a 'ceph daemon mds.xxx session evict' command, which makes the mds close the client session. (use 'ceph daemon mds.x...
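The eviction flow described in that comment, as a sketch of admin-socket commands (the mds name "a" and the session id are placeholders, not taken from this ticket):

```
# list sessions on the mds named "a" to find the client's session id
ceph daemon mds.a session ls
# evict the client by session id
ceph daemon mds.a session evict <session_id>
```

These talk to the local admin socket, so they must be run on the host where that mds daemon is running.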
06/25/2016
- 05:32 PM Bug #16186: kclient: drops requests without poking system calls on reconnect
- Ok, I tried reproducing this by issuing a stat() while outbound traffic from the client was blocked (on a v4.7-rc4 ke...
06/24/2016
- 08:21 PM Bug #16186: kclient: drops requests without poking system calls on reconnect
- I don't suppose we have a way to reproduce this, do we? Maybe drive a lot of MDS ops and continually stop and restart...
- 05:08 PM Feature #11171 (Fix Under Review): Path filtering on "dump cache" asok
- https://github.com/ceph/ceph/pull/9925
- 10:15 AM Bug #16042: MDS Deadlock on shutdown active rank while busy with metadata IO
- Interesting, #16396 is with async messenger (and is probably the issue we're seeing in current master testing), but w...
- 03:12 AM Bug #16042: MDS Deadlock on shutdown active rank while busy with metadata IO
- Hi guys,
Looks like this issue is very similar to this one here: http://tracker.ceph.com/issues/16396
- 10:07 AM Feature #16468 (Resolved): kclient: Exclude ceph.* xattr namespace in listxattr
- See this thread: http://www.spinics.net/lists/ceph-devel/msg30948.html
Some userspace tools (notably rsync) try t...
- 10:06 AM Feature #16467 (New): ceph-fuse: Exclude ceph.* xattr namespace in listxattr
- See this thread: http://www.spinics.net/lists/ceph-devel/msg30948.html
Some userspace tools (notably rsync) try t...
06/23/2016
- 07:44 PM Bug #16186: kclient: drops requests without poking system calls on reconnect
- Well, if we have unsafe requests the MDS will in fact have committed them (assuming the MDS didn't crash or something...
- 01:53 PM Bug #16186: kclient: drops requests without poking system calls on reconnect
- If the mds has torn down the client's session, then I don't see what can reasonably be done other than to return an e...
- 06:33 PM Bug #16288: mds: `session evict` tell command blocks forever with async messenger (TestVolumeClie...
- Not to take away Doug's thunder, but I gather he's been unable to reproduce it. The AsyncMessenger may have already b...
- 05:44 PM Bug #15921: segfault in cephfs-journal-tool (TestJournalRepair failure)
- As far as I can tell, we don't even have the backtrace of the segfault in either of those logs, and the sha1 isn't av...
- 01:20 PM Bug #16013 (Resolved): Failing file operations on kernel based cephfs mount point leaves unaccess...
- 11:59 AM Bug #16367: libcephfs: UID parsing breaks root squash (Ganesha FSAL)
- I don't know if I should open a new issue for this, but it looks like even with another ID something is still wrong:
...
- 04:51 AM Bug #16396: Fix shutting down mds timed-out due to deadlock
- https://github.com/ceph/ceph/pull/9884
06/22/2016
- 09:09 PM Bug #16186: kclient: drops requests without poking system calls on reconnect
- But if we restart requests from scratch, we're dramatically re-ordering them. We can seemingly send files back in tim...
- 09:01 PM Bug #16186: kclient: drops requests without poking system calls on reconnect
- I think it is working the way it is supposed to work.
We skip unsafe requests because the mds already got them and...
- 08:59 PM Bug #16407: LibCephFS.UseUnmounted failed
- You appear to have closed your own PR. And generally speaking we pass around negative error numbers, so readdir() is ...
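As a hedged illustration of the negative-error-number convention mentioned above (the function name and error code here are invented for the sketch, not the real libcephfs API):

```python
import errno

def readdir_sketch(mounted):
    """Illustrative only: return an entry name on success,
    or a negative errno (libcephfs-style) on failure."""
    if not mounted:
        # ENOTCONN chosen for illustration; the real call may use another code
        return -errno.ENOTCONN
    return "."

# callers test for rc < 0 rather than checking a separate error flag
rc = readdir_sketch(False)
assert rc == -errno.ENOTCONN
```

The point is simply that success and failure share one return channel, with the sign distinguishing them.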
- 08:44 AM Bug #16407: LibCephFS.UseUnmounted failed
- https://github.com/ceph/ceph/pull/9860
- 07:36 AM Bug #16407 (Rejected): LibCephFS.UseUnmounted failed
- 2016-06-22T15:03:06.176 INFO:tasks.workunit.client.0.plana146.stdout:[ RUN ] LibCephFS.StripeUnitGran
2016-06-2...
- 08:55 PM Support #16043 (Closed): MDS is crashed
- 07:40 PM Feature #16228: Create teuthology task for Samba ping_pong test
- (Copied from #16417) See Greg's draft https://github.com/gregsfortytwo/ceph-qa-suite/tree/wip-pingpong
- 07:40 PM Feature #16417 (Duplicate): test pingpong on ceph-fuse
- 05:10 PM Feature #16417 (Duplicate): test pingpong on ceph-fuse
- See #12653. We should integrate pingpong into our nightly test suite, to verify consistency on the kernel client and ...
- 06:10 PM Feature #16419: add statx-like interface to libcephfs
- Yeah, that's what I mean. We have ceph_ll_getattr now (afaict), so we need something like a ceph_ll_getattrx (that na...
- 06:01 PM Feature #16419: add statx-like interface to libcephfs
- Jeff Layton wrote:
> What I'm thinking is that we should add something along the lines of what David Howells has pro...
- 05:39 PM Feature #16419: add statx-like interface to libcephfs
- What I'm thinking is that we should add something along the lines of what David Howells has proposed for the new stat...
- 05:35 PM Feature #16419 (Resolved): add statx-like interface to libcephfs
- samba, in particular, can make use of the birthtime for an inode. Have ceph track the btime in the inode and provide ...
- 01:01 PM Feature #15615: CephFSVolumeClient: List authorized IDs by share
- https://github.com/ceph/ceph/pull/9864
06/21/2016
- 02:03 PM Bug #16397: nfsd selinux denials causing knfs tests to fail
- Ahh, hmm -- just noticed the "add name" denial too. Does the path "/proc/net/rpc/auth.unix.ip/channel" even exist? Ma...
- 01:46 PM Bug #16397: nfsd selinux denials causing knfs tests to fail
- Looks unrelated to anything ceph-specific. My guess is that this is an selinux policy bug, since rpc.mountd should be...
- 11:56 AM Bug #16397 (Resolved): nfsd selinux denials causing knfs tests to fail
- http://pulpito.ceph.com/teuthology-2016-06-20_17:35:01-knfs-master-testing-basic-mira/267607/
- 11:26 AM Support #16043: MDS is crashed
- I execute...
- 06:05 AM Support #16043: MDS is crashed
- Yes, I tried resetting the journal and sessions.
I run:...
- 01:34 AM Support #16043: MDS is crashed
- Yep. So looking through the log, I now see
>mds.2.journal ESession.replay sessionmap 0 < 18884 close client.166758...
- 09:36 AM Bug #16396: Fix shutting down mds timed-out due to deadlock
- -https://github.com/ceph/ceph/pull/9841-
- 09:31 AM Bug #16396 (Resolved): Fix shutting down mds timed-out due to deadlock
- This issue was found in jewel when restarting/stopping the mds. It took a long time for the mds to completely stop until mds th...
- 09:02 AM Bug #16288 (New): mds: `session evict` tell command blocks forever with async messenger (TestVolu...
- Oops, I meant to paste to begin with. I think it was this one:
/a/jspray-2016-06-13_14:56:46-fs-wip-jcsp-testing-qu...
06/20/2016
- 08:12 PM Bug #16367: libcephfs: UID parsing breaks root squash (Ganesha FSAL)
- 07:57 PM Bug #16367: libcephfs: UID parsing breaks root squash (Ganesha FSAL)
- Yeah, I expect that Frank's report is the root cause, but wanted to see to make sure. :)
- 08:56 AM Bug #16367: libcephfs: UID parsing breaks root squash (Ganesha FSAL)
- Now easier to read:...
- 08:55 AM Bug #16367: libcephfs: UID parsing breaks root squash (Ganesha FSAL)
I have ceph mounted under /mnt/nfs/ceph:
[root@test2202 test]# pwd
/mnt/nfs/ceph/test
[root@test2202 test]# ls ...
- 08:08 PM Bug #16288 (Need More Info): mds: `session evict` tell command blocks forever with async messenge...
- 08:08 PM Bug #16288: mds: `session evict` tell command blocks forever with async messenger (TestVolumeClie...
- John, do you have any logs? The only failure of this test I can find is http://qa-proxy.ceph.com/teuthology/teutholog...
- 07:12 PM Bug #16288 (In Progress): mds: `session evict` tell command blocks forever with async messenger (...
- 01:31 PM Bug #16042 (In Progress): MDS Deadlock on shutdown active rank while busy with metadata IO
- 09:35 AM Support #16043: MDS is crashed
- Greg, I sent a message with a link to my debug log to your email. The ceph-post-file service has become unstable...
06/17/2016
- 08:46 PM Bug #16164: mds: enforce a dirfrag limit on entries
- PR here: https://github.com/ceph/ceph/pull/9789
- 05:50 PM Bug #16367 (Need More Info): libcephfs: UID parsing breaks root squash (Ganesha FSAL)
- 05:49 PM Bug #16367: libcephfs: UID parsing breaks root squash (Ganesha FSAL)
- Can you please:
1) run ls -lha on the directory you're testing in
2) do your tests
3) run ls -lha on all the releva...
- 03:15 PM Bug #16367 (Resolved): libcephfs: UID parsing breaks root squash (Ganesha FSAL)
- Testing with ganesha 2.4-o-dev20 and libcephfs 10.2.1:
I did set root squash on in the ganesha.conf, but as root I c...
- 05:14 PM Support #16043: MDS is crashed
- Please set "debug mds = 20" and "debug mds log = 20" in your ceph.conf, turn it on, and then upload the mds log file ...
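The two settings quoted above would go in the [mds] section of ceph.conf on the mds host, e.g.:

```
[mds]
    debug mds = 20
    debug mds log = 20
```

After restarting the mds (or injecting the options at runtime), its log becomes verbose enough for this kind of analysis.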
- 04:04 AM Bug #16358 (Fix Under Review): Session::check_access() is buggy
- https://github.com/ceph/ceph/pull/9769
- 03:53 AM Bug #16358 (Resolved): Session::check_access() is buggy
- It calls CInode::make_path_string(path, false, in->get_projected_parent_dn()). The second argument 'false' makes the ...
06/16/2016
- 03:14 PM Bug #16255: ceph-create-keys: sometimes blocks forever if mds "allow" is set
- > The loop you're seeing presumably is only occurring when /etc/ceph/ceph.client-admin.keyring has been removed.
e...
- 03:05 PM Bug #16255: ceph-create-keys: sometimes blocks forever if mds "allow" is set
- The difference between @"allow"@ and @"allow *"@ is that the @"*"@ is necessary in more recent versions to issue 'tel...
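Illustratively (the entity name and the other caps are placeholders, not from this ticket), granting the broader mon cap discussed above might look like:

```
ceph auth caps client.foo mon 'allow *' osd 'allow *' mds 'allow'
```

With only mon 'allow', such 'tell' commands can be refused on newer versions.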
- 02:39 PM Fix #16276: Update TestSessionMap.test_mount_conn_close for async messenger
- NB back out part of https://github.com/ceph/ceph-qa-suite/pull/1054 when fixing this, it's switched back to simple me...
- 02:29 PM Fix #16276: Update TestSessionMap.test_mount_conn_close for async messenger
- http://pulpito.ceph.com/gregf-2016-06-10_19:20:53-fs-greg-fs-testing-610---basic-mira/250875/
- 02:39 PM Bug #16288: mds: `session evict` tell command blocks forever with async messenger (TestVolumeClie...
- NB back out part of https://github.com/ceph/ceph-qa-suite/pull/1054 when fixing this, it's switched back to simple me...
- 02:38 PM Bug #16288: mds: `session evict` tell command blocks forever with async messenger (TestVolumeClie...
- This deadlocks and lockdep makes it crash in our nightlies; we should fix it quickly! :)
- 02:37 PM Feature #14271 (Resolved): directory listing: do not reset when fragmenting
- 02:33 PM Support #16043: MDS is crashed
- ...
- 02:31 PM Support #16043: MDS is crashed
- I upgraded my cluster to 10.2.2, situation not changed.
- 01:57 PM Support #16043 (Need More Info): MDS is crashed
- This probably isn't an issue any more, but if it is, upgrade to 10.2.2 and report back if it persists.
- 02:26 PM Feature #11171 (In Progress): Path filtering on "dump cache" asok
- 02:21 PM Backport #16284 (Resolved): jewel: directory listing: do not reset when fragmenting
- This was done as part of #16251.
- 11:54 AM Bug #16298 (Resolved): mds: failure in tasks/migration.yaml
- 11:15 AM Bug #16322: ceph mds getting killed for no reason
- $gdb /usr/local/bin/ceph-mds
If gdb does not say "no debugging symbols found", the debug package is properly insta...
- 09:45 AM Bug #16322: ceph mds getting killed for no reason
- Zheng Yan wrote:
> Your ceph-mds does not contain debuginfo, please install debuginfo package first. then start ceph...
- 02:20 AM Bug #16322: ceph mds getting killed for no reason
- Your ceph-mds does not contain debuginfo, please install debuginfo package first. then start ceph-mds manually with c...
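A sketch of the debuginfo check suggested above (the binary path is an example; the quoted line is what gdb prints when symbols are missing):

```
$ gdb /usr/local/bin/ceph-mds
Reading symbols from /usr/local/bin/ceph-mds...
(no debugging symbols found)
```

If that message appears, install the ceph debuginfo package and run the check again before collecting a backtrace.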
- 07:39 AM Backport #16136: jewel: MDSMonitor fixes
- Original description:
These two commits:
https://github.com/ceph/ceph/pull/9418/commits/24b82bafffced97384135e5...
06/15/2016
- 06:31 PM Bug #16042: MDS Deadlock on shutdown active rank while busy with metadata IO
- This is rearing its head in general testing now:
http://pulpito.ceph.com/jspray-2016-06-15_05:28:02-fs-wip-jcsp-test...
- 02:01 PM Bug #16322: ceph mds getting killed for no reason
- log: http://95.211.209.196/imgs/ceph-mds.mds01.log
- 01:48 PM Bug #16322: ceph mds getting killed for no reason
- kernel: 4.2.0-36-generic
- 01:46 PM Bug #16322: ceph mds getting killed for no reason
(...)
Loaded symbols for /lib/x86_64-linux-gnu/libnss_files.so.2
Reading symbols from /usr/lib/x86_64-linux-gnu/n...
- 01:41 PM Bug #16322: ceph mds getting killed for no reason
- I am not very experienced with gdb, sorry. Should I use it on ceph-mds?
I will paste the whole log (it has a lot of ...
- 12:16 PM Bug #16322: ceph mds getting killed for no reason
- could you enable coredump and use gdb to check which line causes the crash
- 11:50 AM Bug #16322: ceph mds getting killed for no reason
- add:
2016-06-15 03:15:51.017714 7f582103f700 -1 *** Caught signal (Aborted) **
in thread 7f582103f700 thread_nam...
- 11:50 AM Bug #16322 (Can't reproduce): ceph mds getting killed for no reason
- Hello,
my ceph mds get killed for no reason (normally they do the active failover).
Log:
ceph version 10.2.1 (...
- 10:11 AM Bug #15920: mds/StrayManager.cc: 520: FAILED assert(dnl->is_primary())
- John, thank you very much! Yeah, I saw that it was going to miss 10.2.2. Thank you for making this exception! I'll st...
- 08:06 AM Backport #16320 (Resolved): jewel: fs: fuse mounted file systems fails SAMBA CTDB ping_pong rw te...
- https://github.com/ceph/ceph/pull/10108
- 08:04 AM Backport #16313 (Resolved): jewel: client: FAILED assert(root_ancestor->qtree == __null)
- https://github.com/ceph/ceph/pull/10107
- 02:21 AM Bug #16160 (Resolved): PJD failures on Jewel
- http://qa-proxy.ceph.com/teuthology/teuthology-2016-06-13_17:25:02-kcephfs-master-testing-basic-mira/257158/teutholog...
06/14/2016
- 04:30 PM Bug #12653 (Pending Backport): fuse mounted file systems fails SAMBA CTDB ping_pong rw test with ...
- 04:29 PM Documentation #16300: doc: fuse_disable_pagecache
- NB: while doing this it would be useful to ask the performance team to measure how much impact this really has
- 04:28 PM Documentation #16300 (Resolved): doc: fuse_disable_pagecache
- http://tracker.ceph.com/issues/12653
https://github.com/ceph/ceph/pull/5521/commits/0f11ec237d4692d313a038ed61aa07a3...
- 04:24 PM Backport #16299 (Resolved): jewel: mds: fix SnapRealm::have_past_parents_open()
- https://github.com/ceph/ceph/pull/10499
- 04:22 PM Bug #16298 (Fix Under Review): mds: failure in tasks/migration.yaml
- https://github.com/ceph/ceph/pull/9697
- 04:20 PM Bug #16298 (Resolved): mds: failure in tasks/migration.yaml
- http://pulpito.ceph.com/jspray-2016-06-14_01:19:46-fs-wip-jcsp-testing-20160610-distro-basic-mira/257906
- 04:19 PM Bug #15920: mds/StrayManager.cc: 520: FAILED assert(dnl->is_primary())
I've pushed a jewel-15920 branch for you with the fix cherry-picked onto it. (don't usually do this, but it's fair...
- 01:56 PM Bug #15920: mds/StrayManager.cc: 520: FAILED assert(dnl->is_primary())
- Good morning everyone!
Considering that a backport is done, though not merged yet, is there a way for me to get a g...
- 02:50 PM Backport #13927 (New): hammer: cephfs-java ftruncate unit test failure
- 02:50 PM Backport #13927: hammer: cephfs-java ftruncate unit test failure
- One attempted backport https://github.com/ceph/ceph/pull/6754 was closed.
- 12:30 PM Bug #16067 (Resolved): client: InvalidWrite in put_qtree
- (resolved via http://tracker.ceph.com/issues/16066, track backport there)
- 12:30 PM Bug #16066 (Pending Backport): client: FAILED assert(root_ancestor->qtree == __null)
- 10:13 AM Bug #16288 (Resolved): mds: `session evict` tell command blocks forever with async messenger (Tes...
I'm assuming for the moment that this is an MDS bug rather than something getting dropped in the new messenger code...
- 07:19 AM Backport #16284 (Resolved): jewel: directory listing: do not reset when fragmenting
- https://github.com/ceph/ceph/pull/9655
- 12:24 AM Bug #16042: MDS Deadlock on shutdown active rank while busy with metadata IO
- I just saw this (or similar shutdown bug) for the first time in an automated test: http://qa-proxy.ceph.com/teutholog...
- 12:20 AM Fix #16276 (New): Update TestSessionMap.test_mount_conn_close for async messenger
When the default messenger changed from simple to async, this test started failing[1]. It's because it is using th...
06/13/2016
- 10:42 AM Feature #14271 (Pending Backport): directory listing: do not reset when fragmenting
- 08:27 AM Backport #16252 (Resolved): jewel: Client: reports that readahead is not working
- 05:05 AM Backport #16252 (In Progress): jewel: Client: reports that readahead is not working
- 05:00 AM Backport #16252 (Resolved): jewel: Client: reports that readahead is not working
- https://github.com/ceph/ceph/pull/9656
- 08:27 AM Bug #16024 (Resolved): Client: reports that readahead is not working
- 08:23 AM Backport #16251 (Resolved): jewel: client: simultaneous readdirs are very racy
- 04:55 AM Backport #16251 (In Progress): jewel: client: simultaneous readdirs are very racy
- 04:54 AM Backport #16251 (Resolved): jewel: client: simultaneous readdirs are very racy
- https://github.com/ceph/ceph/pull/9655
- 08:23 AM Bug #15508 (Resolved): client: simultaneous readdirs are very racy
- 05:41 AM Bug #16255 (Resolved): ceph-create-keys: sometimes blocks forever if mds "allow" is set
- The documentation at:
http://docs.ceph.com/docs/master/dev/mon-bootstrap/
tells you to create the client.admin key...
06/12/2016
- 09:45 PM Bug #16024 (Pending Backport): Client: reports that readahead is not working
- Backport PR: https://github.com/ceph/ceph/pull/9656
- 09:35 PM Bug #15508 (Pending Backport): client: simultaneous readdirs are very racy
- Backport PR: https://github.com/ceph/ceph/pull/9655
06/10/2016
- 09:55 AM Feature #16228 (New): Create teuthology task for Samba ping_pong test
The Samba ping_pong test validates the interaction between multiple clients accessing the same data.
Related:
h...
06/09/2016
- 09:03 PM Bug #16067: client: InvalidWrite in put_qtree
- Greg: yes, I expect the big quotatree patch will fix both.
- 04:32 PM Bug #16067: client: InvalidWrite in put_qtree
- Any chance this is because of #16066, or at least resolved by the associated PR?
- 07:40 PM Feature #16219 (New): test: smallfile benchmark tool
- Run this metadata tester in our nightlies.
https://github.com/bengland2/smallfile
>smallfile is a python-based ...
- 05:38 PM Backport #16215 (Resolved): jewel: client: crash in unmount when fuse_use_invalidate_cb is enabled
- https://github.com/ceph/ceph/pull/10106
- 09:49 AM Cleanup #15922 (Resolved): MDS: remove TMAP support from CDir
- 09:36 AM Bug #16137 (Pending Backport): client: crash in unmount when fuse_use_invalidate_cb is enabled
06/08/2016
- 08:29 PM Bug #16186: kclient: drops requests without poking system calls on reconnect
- I'm concerned to hear that; I thought those patches had been zapped from the queue. If we disconnect and reconnect, r...
- 01:49 AM Bug #16186: kclient: drops requests without poking system calls on reconnect
- Client only drops unsafe MDS requests after session reset. It also tries re-sending outstanding requests
please ...
- 01:04 AM Bug #16186 (Duplicate): kclient: drops requests without poking system calls on reconnect
- If I'm understanding the way things currently work:
*) kernel client loses network connection
*) MDS times out kern... - 08:18 PM Bug #16024: Client: reports that readahead is not working
- Test is here: https://github.com/ceph/ceph-qa-suite/pull/1046
Greg is adding that to his test branch.
- 01:42 PM Bug #16164: mds: enforce a dirfrag limit on entries
- He's got more info in http://tracker.ceph.com/issues/16177.
Basically, a CephFS user created directories large eno...
- 05:10 AM Bug #16164: mds: enforce a dirfrag limit on entries
- Greg Farnum wrote:
> Hmm, I was talking to m0zes (whose situation kicked off this bug) and it turns out the objects ...
- 12:41 PM Cleanup #16195 (Resolved): mds: Don't spam log with standby_replay_restart messages
- ...
- 10:25 AM Bug #16066 (Fix Under Review): client: FAILED assert(root_ancestor->qtree == __null)
- 10:24 AM Bug #16066: client: FAILED assert(root_ancestor->qtree == __null)
- https://github.com/ceph/ceph/pull/9591
06/07/2016
- 09:57 PM Bug #16164: mds: enforce a dirfrag limit on entries
- Hmm, I was talking to m0zes (whose situation kicked off this bug) and it turns out the objects actually causing the i...
- 04:15 PM Bug #15266 (Resolved): ceph_volume_client purge failing on non-ascii filenames
- 02:47 PM Bug #16022: MDSMonitor::check_subs() is very buggy
- Abhishek: yes, that should be backported too. It's not strictly necessary but is worthwhile.
- 02:35 PM Bug #16022: MDSMonitor::check_subs() is very buggy
- Does https://github.com/ceph/ceph-qa-suite/pull/1018 need to be backported to Jewel too? Please confirm.
- 02:47 PM Backport #16152 (In Progress): jewel: fs: client: fstat cap release
- 02:44 PM Backport #16136 (In Progress): jewel: MDSMonitor fixes
- 02:41 PM Backport #16135 (In Progress): jewel: MDS: fix getattr starve setattr
- 02:38 PM Backport #16041 (In Progress): jewel: mds/StrayManager.cc: 520: FAILED assert(dnl->is_primary())
- 02:23 PM Backport #15999 (In Progress): jewel: CephFSVolumeClient: read-only authorization for volumes
- 02:06 PM Backport #15898 (In Progress): jewel: Confusing MDS log message when shut down with stalled journ...
- 01:31 PM Backport #15968 (Fix Under Review): jewel: ceph status mds output ignores active MDS when there i...
- https://github.com/ceph/ceph/pull/9547
- 01:28 PM Backport #15971 (Resolved): jewel: ceph_volume_client purge failing on non-ascii filenames
- 01:24 PM Bug #16160: PJD failures on Jewel
- Cool, I'll close this when our nightlies are passing again.
- 01:21 PM Bug #16160: PJD failures on Jewel
- The VFS maintainer proposed a fix. The issue should be fixed in the 4.7-rc3 kernel.
- 01:53 AM Bug #16160: PJD failures on Jewel
- Sorry, I meant 4.7-rc1; it's the newest rc kernel.
06/06/2016
- 08:52 PM Bug #16160: PJD failures on Jewel
- The failing test file is pretty short; the important bit is...
- 12:32 PM Bug #16160: PJD failures on Jewel
- This issue happens only on 3.7.0-rc1 kernel
- 11:41 AM Bug #16160 (Resolved): PJD failures on Jewel
- http://qa-proxy.ceph.com/teuthology/jspray-2016-06-03_06:24:38-fs:basic-jewel---basic-mira/232657/teuthology.log
htt... - 06:49 PM Bug #16164 (In Progress): mds: enforce a dirfrag limit on entries
- 04:36 PM Bug #16164: mds: enforce a dirfrag limit on entries
- I'm taking a look at this one.
- 01:49 PM Bug #16164 (Resolved): mds: enforce a dirfrag limit on entries
- - add a new config option to cap the number of entries in a dirfrag
- set the limit an order of magnitude higher than... - 11:45 AM Feature #15417 (Resolved): Make path prefix ("/volumes") in CephFSVolumeClient configurable
- 11:44 AM Backport #15854 (Resolved): jewel: Make path prefix ("/volumes") in CephFSVolumeClient configurable
- 11:42 AM Feature #15599 (Resolved): quota: Generate client df from quota, when using subdirectory mount
- 11:38 AM Backport #16065 (Resolved): jewel: quota: Generate client df from quota, when using subdirectory ...
- 11:38 AM Backport #16065: jewel: quota: Generate client df from quota, when using subdirectory mount
- h3. original description
This is an enabler for Manila, creating this ticket to track backport to Jewel after land...
- 09:58 AM Bug #16066: client: FAILED assert(root_ancestor->qtree == __null)
- The quota code expects that a directory is always connected to the FS hierarchy. But this is not true; we can create dis...
- 08:55 AM Bug #16137 (Fix Under Review): client: crash in unmount when fuse_use_invalidate_cb is enabled
- https://github.com/ceph/ceph/pull/9509
06/03/2016
- 05:44 PM Backport #16135: jewel: MDS: fix getattr starve setattr
- h3. original description
To backport this fix (https://github.com/ceph/ceph/pull/8965) to Jewel
- 10:44 AM Backport #16135 (Resolved): jewel: MDS: fix getattr starve setattr
- https://github.com/ceph/ceph/pull/9560
- 05:38 PM Bug #16154 (Resolved): mds: lock waiters are not finished in the same order that they were added
- https://github.com/ceph/ceph/pull/8965
- 04:54 PM Backport #16152 (Resolved): jewel: fs: client: fstat cap release
- https://github.com/ceph/ceph/pull/9562
- 02:55 PM Cleanup #16144 (Resolved): Remove cephfs-data-scan tmap_upgrade
- This was something only for jewel (the last release to have tmaps in rados).
We can remove it for Kr...
- 12:57 PM Bug #9904 (Resolved): Don't crash MDS on clients sending messages with bad seq
- https://github.com/ceph/ceph/pull/9214
- 12:56 PM Bug #8255 (New): mds: directory with missing object cannot be removed
- 12:56 PM Feature #11950: Strays enqueued for purge cause MDCache to exceed size limit
- I'm reverting state to verified because although we've merged the patch for this, it still needs more attention to ha...
- 12:08 PM Bug #15045 (Resolved): CephFSVolumeClient.evict should be limited by path, not just auth ID
- 12:07 PM Backport #15855 (Resolved): jewel: CephFSVolumeClient.evict should be limited by path, not just a...
- 11:32 AM Bug #16137 (Resolved): client: crash in unmount when fuse_use_invalidate_cb is enabled
- Seen in testing of https://github.com/ceph/ceph/pull/8890...
- 10:51 AM Bug #15723 (Pending Backport): client: fstat cap release
- 10:46 AM Backport #16136 (Resolved): jewel: MDSMonitor fixes
Backported as described: https://github.com/ceph/ceph/pull/9561
06/02/2016
- 10:40 AM Cleanup #15922 (Fix Under Review): MDS: remove TMAP support from CDir
- https://github.com/ceph/ceph/pull/9443
06/01/2016
- 10:15 PM Backport #16065 (In Progress): jewel: quota: Generate client df from quota, when using subdirecto...
- https://github.com/ceph/ceph/pull/9430
- 10:15 PM Backport #15971 (In Progress): jewel: ceph_volume_client purge failing on non-ascii filenames
- https://github.com/ceph/ceph/pull/9430
- 04:57 PM Bug #90: mds: don't sync log on every clientreplay request
- Yeah. So, I think there was a reason for this. We definitely need to flush out any clientreplay requests before exiti...
- 08:26 AM Bug #90: mds: don't sync log on every clientreplay request
The code that flushes the journal: https://github.com/ceph/ceph/blob/8ce337e3552004fc4853c0c94f33235da4caa5df/src/mds/Ser...
- 02:39 AM Feature #3575: ceph-fuse: Add support for forget_multi
- The kernel sends forget_multi requests to libfuse even if we don't provide the forget_multi callback.
05/31/2016
- 09:30 PM Bug #90: mds: don't sync log on every clientreplay request
- It might be fixed; we should check. But once upon a time (I think maybe still now?) then when in clientreplay set the...
- 11:04 AM Bug #90: mds: don't sync log on every clientreplay request
- Is this definitely still an issue? I'm not seeing a particular place in the code where we do something different wit...
- 09:28 PM Feature #83: mds: rename over old files should flush data or revert to old contents?
- I think it does still exist?
We have an existing foo.conf, inode x
Write to foo.conf.tmp, inode y
rename foo.conf....
- 10:59 AM Feature #83: mds: rename over old files should flush data or revert to old contents?
- The original ticket is describing a real bug (that no longer exists), right? It's not clear to me that there's still...
- 08:01 PM Feature #3575: ceph-fuse: Add support for forget_multi
- Hmm, doesn't the syscall overhead make enough of a difference to be noticeable in some of our bigger invalidate ops?
- 11:43 AM Feature #15067 (Fix Under Review): mon: client: multifs: enable clients to map a filesystem name ...
- https://github.com/ceph/ceph/pull/8386
https://github.com/ceph/ceph-qa-suite/pull/924
- 11:41 AM Feature #7326: qa: fix flock tests
- Me neither. Sage?
- 11:39 AM Feature #2097 (Rejected): mds: 'ceph mds activate <gid>'
- I think we don't.
- 11:39 AM Feature #9466: kclient: Extend CephFSTestCase tests to cover kclient
- I think most or all of the hooks are there, should be a case of taking suites/fs/recovery/tasks/client-limits.yaml an...
- 11:35 AM Feature #7764 (Resolved): InoTable/SessionMap/ manipulator (cephfs-table-tool)
- So cephfs-table-tool can reset tables, and it can consume inodes from the InoTable.
There probably are more thin...
- 11:34 AM Feature #7762 (Rejected): journal-tool: backwards-search after corrupt regions
- Hmm, iirc this ticket was about doing something smarter than scanning forward for a sentinel, where we would potentia...
- 11:19 AM Feature #15400 (Resolved): CephFSVolumeClient should isolate volumes by RADOS namespace
- 11:18 AM Feature #7760 (Resolved): journal-tool: implement splice
- Yep!
- 11:18 AM Feature #7758 (Resolved): journal-tool: complete filtering
- Nope!
- 11:16 AM Feature #7318 (Duplicate): qa: ceph-fuse + sync mode
- Hmm, #4022 seems to be about globally disabling cacher (as an alternative way to test direct IO paths) whereas this o...
- 11:11 AM Backport #16065 (New): jewel: quota: Generate client df from quota, when using subdirectory mount
- Switching back to New because there does not seem to be a pull request for this yet.
- 09:51 AM Backport #16083 (In Progress): jewel: mds: wrongly treat symlink inode as normal file/dir when sy...
- 07:45 AM Backport #16083 (Resolved): jewel: mds: wrongly treat symlink inode as normal file/dir when symli...
- https://github.com/ceph/ceph/pull/9405
- 09:50 AM Backport #16082 (In Progress): hammer: mds: wrongly treat symlink inode as normal file/dir when s...
- 07:45 AM Backport #16082 (Resolved): hammer: mds: wrongly treat symlink inode as normal file/dir when syml...
- https://github.com/ceph/ceph/pull/9404
05/30/2016
- 12:28 PM Feature #3575: ceph-fuse: Add support for forget_multi
- If there is no forget_multi() callback, libfuse calls the forget() callback in a loop. I don't think implementing fo...