Activity
From 05/24/2016 to 06/22/2016
06/22/2016
- 09:09 PM Bug #16186: kclient: drops requests without poking system calls on reconnect
- But if we restart requests from scratch, we're dramatically re-ordering them. We can seemingly send files back in tim...
- 09:01 PM Bug #16186: kclient: drops requests without poking system calls on reconnect
- I think it is working the way it is supposed to work.
We skip unsafe requests because the mds already got them and...
- 08:59 PM Bug #16407: LibCephFS.UseUnmounted failed
- You appear to have closed your own PR. And generally speaking we pass around negative error numbers, so readdir() is ...
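For readers unfamiliar with the convention being referenced: success is signalled by a value >= 0 and failure by a negative error number. A minimal sketch of that pattern, using a hypothetical wrapper (not the actual libcephfs binding):

```python
import errno

# Hypothetical helper illustrating the convention: calls return a
# value >= 0 on success (here, an entry count) and a negative errno
# on failure, so callers can tell -ENOTCONN apart from a valid count.
def readdir_sketch(mounted, entries):
    if not mounted:
        return -errno.ENOTCONN  # e.g. the handle was never mounted
    return len(entries)
```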
- 08:44 AM Bug #16407: LibCephFS.UseUnmounted failed
- https://github.com/ceph/ceph/pull/9860
- 07:36 AM Bug #16407 (Rejected): LibCephFS.UseUnmounted failed
- 2016-06-22T15:03:06.176 INFO:tasks.workunit.client.0.plana146.stdout:[ RUN ] LibCephFS.StripeUnitGran
2016-06-2...
- 08:55 PM Support #16043 (Closed): MDS is crashed
- 07:40 PM Feature #16228: Create teuthology task for Samba ping_pong test
- (Copied from #16417) See Greg's draft https://github.com/gregsfortytwo/ceph-qa-suite/tree/wip-pingpong
- 07:40 PM Feature #16417 (Duplicate): test pingpong on ceph-fuse
- 05:10 PM Feature #16417 (Duplicate): test pingpong on ceph-fuse
- See #12653. We should integrate pingpong into our nightly test suite, to verify consistency on the kernel client and ...
- 06:10 PM Feature #16419: add statx-like interface to libcephfs
- Yeah, that's what I mean. We have ceph_ll_getattr now (afaict), so we need something like a ceph_ll_getattrx (that na...
- 06:01 PM Feature #16419: add statx-like interface to libcephfs
- Jeff Layton wrote:
> What I'm thinking is that we should add something along the lines of what David Howells has pro...
- 05:39 PM Feature #16419: add statx-like interface to libcephfs
- What I'm thinking is that we should add something along the lines of what David Howells has proposed for the new stat...
- 05:35 PM Feature #16419 (Resolved): add statx-like interface to libcephfs
- samba, in particular, can make use of the birthtime for an inode. Have ceph track the btime in the inode and provide ...
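A toy sketch of the mask-based interface shape being discussed, in the style of statx(2): the caller says which attributes it wants and the reply says which were actually filled in. The constant names and the getattrx helper below are illustrative assumptions, not the eventual libcephfs API:

```python
# Illustrative want/got mask design, as in statx(2). These constant
# names are assumptions for the sketch, not real libcephfs constants.
CEPH_STATX_MTIME = 1 << 0
CEPH_STATX_BTIME = 1 << 1  # birth time, the field samba wants

_MASKS = {"mtime": CEPH_STATX_MTIME, "btime": CEPH_STATX_BTIME}

def getattrx_sketch(attrs, want):
    """Return (got_mask, result) containing only the requested fields."""
    result = {k: v for k, v in attrs.items() if want & _MASKS[k]}
    got = 0
    for name in result:
        got |= _MASKS[name]
    return got, result
```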
- 01:01 PM Feature #15615: CephFSVolumeClient: List authorized IDs by share
- https://github.com/ceph/ceph/pull/9864
06/21/2016
- 02:03 PM Bug #16397: nfsd selinux denials causing knfs tests to fail
- Ahh, hmm -- just noticed the "add name" denial too. Does the path "/proc/net/rpc/auth.unix.ip/channel" even exist? Ma...
- 01:46 PM Bug #16397: nfsd selinux denials causing knfs tests to fail
- Looks unrelated to anything ceph-specific. My guess is that this is an selinux policy bug, since rpc.mountd should be...
- 11:56 AM Bug #16397 (Resolved): nfsd selinux denials causing knfs tests to fail
- http://pulpito.ceph.com/teuthology-2016-06-20_17:35:01-knfs-master-testing-basic-mira/267607/
- 11:26 AM Support #16043: MDS is crashed
- I execute...
- 06:05 AM Support #16043: MDS is crashed
- Yes, I tried resetting the journal and sessions.
I run:...
- 01:34 AM Support #16043: MDS is crashed
- Yep. So looking through the log, I now see
>mds.2.journal ESession.replay sessionmap 0 < 18884 close client.166758...
- 09:36 AM Bug #16396: Fix shutting down mds timed-out due to deadlock
- -https://github.com/ceph/ceph/pull/9841-
- 09:31 AM Bug #16396 (Resolved): Fix shutting down mds timed-out due to deadlock
- This issue was found in jewel when restarting/stopping mds. It took a long time for the mds to completely stop until mds th...
- 09:02 AM Bug #16288 (New): mds: `session evict` tell command blocks forever with async messenger (TestVolu...
- Oops, I meant to paste to begin with. I think it was this one:
/a/jspray-2016-06-13_14:56:46-fs-wip-jcsp-testing-qu...
06/20/2016
- 08:12 PM Bug #16367: libcephfs: UID parsing breaks root squash (Ganesha FSAL)
- 07:57 PM Bug #16367: libcephfs: UID parsing breaks root squash (Ganesha FSAL)
- Yeah, I expect that Frank's report is the root cause, but wanted to see to make sure. :)
- 08:56 AM Bug #16367: libcephfs: UID parsing breaks root squash (Ganesha FSAL)
- Now easier to read:...
- 08:55 AM Bug #16367: libcephfs: UID parsing breaks root squash (Ganesha FSAL)
I have ceph mounted under /mnt/nfs/ceph:
[root@test2202 test]# pwd
/mnt/nfs/ceph/test
[root@test2202 test]# ls ...
- 08:08 PM Bug #16288 (Need More Info): mds: `session evict` tell command blocks forever with async messenge...
- 08:08 PM Bug #16288: mds: `session evict` tell command blocks forever with async messenger (TestVolumeClie...
- John, do you have any logs? The only failure of this test I can find is http://qa-proxy.ceph.com/teuthology/teutholog...
- 07:12 PM Bug #16288 (In Progress): mds: `session evict` tell command blocks forever with async messenger (...
- 01:31 PM Bug #16042 (In Progress): MDS Deadlock on shutdown active rank while busy with metadata IO
- 09:35 AM Support #16043: MDS is crashed
- Greg, I sent a message with a link to my debug log to your email. The ceph-post-file service has become unstable...
06/17/2016
- 08:46 PM Bug #16164: mds: enforce a dirfrag limit on entries
- PR here: https://github.com/ceph/ceph/pull/9789
- 05:50 PM Bug #16367 (Need More Info): libcephfs: UID parsing breaks root squash (Ganesha FSAL)
- 05:49 PM Bug #16367: libcephfs: UID parsing breaks root squash (Ganesha FSAL)
- Can you please:
1) run ls -lha on the directory you're testing in
2) do your tests
3) run ls -lha on all the releva...
- 03:15 PM Bug #16367 (Resolved): libcephfs: UID parsing breaks root squash (Ganesha FSAL)
- Testing with ganesha 2.4-o-dev20 and libcephfs 10.2.1:
I did set root squash on in the ganesha.conf, but as root I c...
- 05:14 PM Support #16043: MDS is crashed
- Please set "debug mds = 20" and "debug mds log = 20" in your ceph.conf, turn it on, and then upload the mds log file ...
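For reference, the settings suggested above go in the [mds] section of ceph.conf on the MDS host; a minimal fragment:

```
[mds]
    debug mds = 20
    debug mds log = 20
```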
- 04:04 AM Bug #16358 (Fix Under Review): Session::check_access() is buggy
- https://github.com/ceph/ceph/pull/9769
- 03:53 AM Bug #16358 (Resolved): Session::check_access() is buggy
- It calls CInode::make_path_string(path, false, in->get_projected_parent_dn()). The second argument 'false' makes the ...
06/16/2016
- 03:14 PM Bug #16255: ceph-create-keys: sometimes blocks forever if mds "allow" is set
- > The loop you're seeing presumably is only occurring when /etc/ceph/ceph.client-admin.keyring has been removed.
e...
- 03:05 PM Bug #16255: ceph-create-keys: sometimes blocks forever if mds "allow" is set
- The difference between @"allow"@ and @"allow *"@ is that the @"*"@ is necessary in more recent versions to issue 'tel...
- 02:39 PM Fix #16276: Update TestSessionMap.test_mount_conn_close for async messenger
- NB back out part of https://github.com/ceph/ceph-qa-suite/pull/1054 when fixing this, it's switched back to simple me...
- 02:29 PM Fix #16276: Update TestSessionMap.test_mount_conn_close for async messenger
- http://pulpito.ceph.com/gregf-2016-06-10_19:20:53-fs-greg-fs-testing-610---basic-mira/250875/
- 02:39 PM Bug #16288: mds: `session evict` tell command blocks forever with async messenger (TestVolumeClie...
- NB back out part of https://github.com/ceph/ceph-qa-suite/pull/1054 when fixing this, it's switched back to simple me...
- 02:38 PM Bug #16288: mds: `session evict` tell command blocks forever with async messenger (TestVolumeClie...
- This deadlocks and lockdep makes it crash in our nightlies; we should fix it quickly! :)
- 02:37 PM Feature #14271 (Resolved): directory listing: do not reset when fragmenting
- 02:33 PM Support #16043: MDS is crashed
- ...
- 02:31 PM Support #16043: MDS is crashed
- I upgraded my cluster to 10.2.2; the situation has not changed.
- 01:57 PM Support #16043 (Need More Info): MDS is crashed
- This probably isn't an issue any more, but if it is, upgrade to 10.2.2 and report back if it's still an issue.
- 02:26 PM Feature #11171 (In Progress): Path filtering on "dump cache" asok
- 02:21 PM Backport #16284 (Resolved): jewel: directory listing: do not reset when fragmenting
- This was done as part of #16251.
- 11:54 AM Bug #16298 (Resolved): mds: failure in tasks/migration.yaml
- 11:15 AM Bug #16322: ceph mds getting killed for no reason
- $gdb /usr/local/bin/ceph-mds
If gdb does not say "no debugging symbols found", the debug package is properly insta...
- 09:45 AM Bug #16322: ceph mds getting killed for no reason
- Zheng Yan wrote:
> Your ceph-mds does not contain debuginfo, please install debuginfo package first. then start ceph...
- 02:20 AM Bug #16322: ceph mds getting killed for no reason
- Your ceph-mds does not contain debuginfo, please install debuginfo package first. then start ceph-mds manually with c...
- 07:39 AM Backport #16136: jewel: MDSMonitor fixes
- Original description:
These two commits:
https://github.com/ceph/ceph/pull/9418/commits/24b82bafffced97384135e5...
06/15/2016
- 06:31 PM Bug #16042: MDS Deadlock on shutdown active rank while busy with metadata IO
- This is rearing its head in general testing now:
http://pulpito.ceph.com/jspray-2016-06-15_05:28:02-fs-wip-jcsp-test...
- 02:01 PM Bug #16322: ceph mds getting killed for no reason
- log: http://95.211.209.196/imgs/ceph-mds.mds01.log
- 01:48 PM Bug #16322: ceph mds getting killed for no reason
- kernel: 4.2.0-36-generic
- 01:46 PM Bug #16322: ceph mds getting killed for no reason
(...)
Loaded symbols for /lib/x86_64-linux-gnu/libnss_files.so.2
Reading symbols from /usr/lib/x86_64-linux-gnu/n...
- 01:41 PM Bug #16322: ceph mds getting killed for no reason
- I am not very experienced with gdb, sorry. Should I use it on ceph-mds?
I will paste the whole log (it has a lot of ...
- 12:16 PM Bug #16322: ceph mds getting killed for no reason
- could you enable coredump and use gdb to check which line causes the crash
- 11:50 AM Bug #16322: ceph mds getting killed for no reason
- add:
2016-06-15 03:15:51.017714 7f582103f700 -1 *** Caught signal (Aborted) **
in thread 7f582103f700 thread_nam...
- 11:50 AM Bug #16322 (Can't reproduce): ceph mds getting killed for no reason
- Hello,
my ceph mds daemons get killed for no reason (normally they do the active failover).
Log:
ceph version 10.2.1 (...
- 10:11 AM Bug #15920: mds/StrayManager.cc: 520: FAILED assert(dnl->is_primary())
- John, thank you very much! Yeah, I saw that it was going to miss 10.2.2. Thank you for making this exception! I'll st...
- 08:06 AM Backport #16320 (Resolved): jewel: fs: fuse mounted file systems fails SAMBA CTDB ping_pong rw te...
- https://github.com/ceph/ceph/pull/10108
- 08:04 AM Backport #16313 (Resolved): jewel: client: FAILED assert(root_ancestor->qtree == __null)
- https://github.com/ceph/ceph/pull/10107
- 02:21 AM Bug #16160 (Resolved): PJD failures on Jewel
- http://qa-proxy.ceph.com/teuthology/teuthology-2016-06-13_17:25:02-kcephfs-master-testing-basic-mira/257158/teutholog...
06/14/2016
- 04:30 PM Bug #12653 (Pending Backport): fuse mounted file systems fails SAMBA CTDB ping_pong rw test with ...
- 04:29 PM Documentation #16300: doc: fuse_disable_pagecache
- NB while doing this would be useful to ask performance team to measure how much impact this really has
- 04:28 PM Documentation #16300 (Resolved): doc: fuse_disable_pagecache
- http://tracker.ceph.com/issues/12653
https://github.com/ceph/ceph/pull/5521/commits/0f11ec237d4692d313a038ed61aa07a3...
- 04:24 PM Backport #16299 (Resolved): jewel: mds: fix SnapRealm::have_past_parents_open()
- https://github.com/ceph/ceph/pull/10499
- 04:22 PM Bug #16298 (Fix Under Review): mds: failure in tasks/migration.yaml
- https://github.com/ceph/ceph/pull/9697
- 04:20 PM Bug #16298 (Resolved): mds: failure in tasks/migration.yaml
- http://pulpito.ceph.com/jspray-2016-06-14_01:19:46-fs-wip-jcsp-testing-20160610-distro-basic-mira/257906
- 04:19 PM Bug #15920: mds/StrayManager.cc: 520: FAILED assert(dnl->is_primary())
I've pushed a jewel-15920 branch for you with the fix cherry-picked onto it. (don't usually do this, but it's fair...
- 01:56 PM Bug #15920: mds/StrayManager.cc: 520: FAILED assert(dnl->is_primary())
- Good morning everyone!
Considering that a backport is done, though not merged yet, is there a way for me to get a g...
- 02:50 PM Backport #13927 (New): hammer: cephfs-java ftruncate unit test failure
- 02:50 PM Backport #13927: hammer: cephfs-java ftruncate unit test failure
- One attempted backport https://github.com/ceph/ceph/pull/6754 was closed.
- 12:30 PM Bug #16067 (Resolved): client: InvalidWrite in put_qtree
- (resolved via http://tracker.ceph.com/issues/16066, track backport there)
- 12:30 PM Bug #16066 (Pending Backport): client: FAILED assert(root_ancestor->qtree == __null)
- 10:13 AM Bug #16288 (Resolved): mds: `session evict` tell command blocks forever with async messenger (Tes...
I'm assuming for the moment that this is an MDS bug rather than something getting dropped in the new messenger code...
- 07:19 AM Backport #16284 (Resolved): jewel: directory listing: do not reset when fragmenting
- https://github.com/ceph/ceph/pull/9655
- 12:24 AM Bug #16042: MDS Deadlock on shutdown active rank while busy with metadata IO
- I just saw this (or similar shutdown bug) for the first time in an automated test: http://qa-proxy.ceph.com/teutholog...
- 12:20 AM Fix #16276 (New): Update TestSessionMap.test_mount_conn_close for async messenger
When the default messenger changed from simple to async, this test started failing[1]. It's because it is using th...
06/13/2016
- 10:42 AM Feature #14271 (Pending Backport): directory listing: do not reset when fragmenting
- 08:27 AM Backport #16252 (Resolved): jewel: Client: reports that readahead is not working
- 05:05 AM Backport #16252 (In Progress): jewel: Client: reports that readahead is not working
- 05:00 AM Backport #16252 (Resolved): jewel: Client: reports that readahead is not working
- https://github.com/ceph/ceph/pull/9656
- 08:27 AM Bug #16024 (Resolved): Client: reports that readahead is not working
- 08:23 AM Backport #16251 (Resolved): jewel: client: simultaneous readdirs are very racy
- 04:55 AM Backport #16251 (In Progress): jewel: client: simultaneous readdirs are very racy
- 04:54 AM Backport #16251 (Resolved): jewel: client: simultaneous readdirs are very racy
- https://github.com/ceph/ceph/pull/9655
- 08:23 AM Bug #15508 (Resolved): client: simultaneous readdirs are very racy
- 05:41 AM Bug #16255 (Resolved): ceph-create-keys: sometimes blocks forever if mds "allow" is set
- The documentation at:
http://docs.ceph.com/docs/master/dev/mon-bootstrap/
says to create the client.admin key...
06/12/2016
- 09:45 PM Bug #16024 (Pending Backport): Client: reports that readahead is not working
- Backport PR: https://github.com/ceph/ceph/pull/9656
- 09:35 PM Bug #15508 (Pending Backport): client: simultaneous readdirs are very racy
- Backport PR: https://github.com/ceph/ceph/pull/9655
06/10/2016
- 09:55 AM Feature #16228 (New): Create teuthology task for Samba ping_pong test
The Samba ping_pong test validates the interaction between multiple clients accessing the same data.
Related:
h...
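For context, ping_pong exercises coherent byte-range locking across multiple clients on a shared file. The core operation it hammers looks roughly like the following sketch (an illustration of the locking pattern, not the actual ctdb tool):

```python
import fcntl
import tempfile

def lock_byte(f, off):
    # Exclusive byte-range lock on a single byte; on CephFS this must
    # be coherent across every client mounting the same file.
    fcntl.lockf(f, fcntl.LOCK_EX, 1, off)

def unlock_byte(f, off):
    fcntl.lockf(f, fcntl.LOCK_UN, 1, off)

def ping_pong_round(f, nslots, i):
    # ping_pong-style pattern: grab slot i, then release the previous
    # slot, so contending processes chase each other around the ring.
    lock_byte(f, i % nslots)
    unlock_byte(f, (i - 1) % nslots)
```

Run in a loop from several processes against one file on the mount under test, the rate of rounds per second drops sharply if lock coherence is broken or slow.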
06/09/2016
- 09:03 PM Bug #16067: client: InvalidWrite in put_qtree
- Greg: yes, I expect the big quotatree patch will fix both.
- 04:32 PM Bug #16067: client: InvalidWrite in put_qtree
- Any chance this is because of #16066, or at least resolved by the associated PR?
- 07:40 PM Feature #16219 (New): test: smallfile benchmark tool
- Run this metadata tester in our nightlies.
https://github.com/bengland2/smallfile
>smallfile is a python-based ...
- 05:38 PM Backport #16215 (Resolved): jewel: client: crash in unmount when fuse_use_invalidate_cb is enabled
- https://github.com/ceph/ceph/pull/10106
- 09:49 AM Cleanup #15922 (Resolved): MDS: remove TMAP support from CDir
- 09:36 AM Bug #16137 (Pending Backport): client: crash in unmount when fuse_use_invalidate_cb is enabled
06/08/2016
- 08:29 PM Bug #16186: kclient: drops requests without poking system calls on reconnect
- I'm concerned to hear that; I thought those patches had been zapped from the queue. If we disconnect and reconnect, r...
- 01:49 AM Bug #16186: kclient: drops requests without poking system calls on reconnect
- Client only drops unsafe MDS requests after session reset. It also tries re-sending outstanding requests
please ...
- 01:04 AM Bug #16186 (Duplicate): kclient: drops requests without poking system calls on reconnect
- If I'm understanding the way things currently work:
*) kernel client loses network connection
*) MDS times out kern...
- 08:18 PM Bug #16024: Client: reports that readahead is not working
- Test is here: https://github.com/ceph/ceph-qa-suite/pull/1046
Greg is adding that to his test branch.
- 01:42 PM Bug #16164: mds: enforce a dirfrag limit on entries
- He's got more info in http://tracker.ceph.com/issues/16177.
Basically, a CephFS user created directories large eno...
- 05:10 AM Bug #16164: mds: enforce a dirfrag limit on entries
- Greg Farnum wrote:
> Hmm, I was talking to m0zes (whose situation kicked off this bug) and it turns out the objects ...
- 12:41 PM Cleanup #16195 (Resolved): mds: Don't spam log with standby_replay_restart messages
- ...
- 10:25 AM Bug #16066 (Fix Under Review): client: FAILED assert(root_ancestor->qtree == __null)
- 10:24 AM Bug #16066: client: FAILED assert(root_ancestor->qtree == __null)
- https://github.com/ceph/ceph/pull/9591
06/07/2016
- 09:57 PM Bug #16164: mds: enforce a dirfrag limit on entries
- Hmm, I was talking to m0zes (whose situation kicked off this bug) and it turns out the objects actually causing the i...
- 04:15 PM Bug #15266 (Resolved): ceph_volume_client purge failing on non-ascii filenames
- 02:47 PM Bug #16022: MDSMonitor::check_subs() is very buggy
- Abhishek: yes, that should be backported too. It's not strictly necessary but is worthwhile.
- 02:35 PM Bug #16022: MDSMonitor::check_subs() is very buggy
- Does https://github.com/ceph/ceph-qa-suite/pull/1018 need to be backported to Jewel too? Please confirm.
- 02:47 PM Backport #16152 (In Progress): jewel: fs: client: fstat cap release
- 02:44 PM Backport #16136 (In Progress): jewel: MDSMonitor fixes
- 02:41 PM Backport #16135 (In Progress): jewel: MDS: fix getattr starve setattr
- 02:38 PM Backport #16041 (In Progress): jewel: mds/StrayManager.cc: 520: FAILED assert(dnl->is_primary())
- 02:23 PM Backport #15999 (In Progress): jewel: CephFSVolumeClient: read-only authorization for volumes
- 02:06 PM Backport #15898 (In Progress): jewel: Confusing MDS log message when shut down with stalled journ...
- 01:31 PM Backport #15968 (Fix Under Review): jewel: ceph status mds output ignores active MDS when there i...
- https://github.com/ceph/ceph/pull/9547
- 01:28 PM Backport #15971 (Resolved): jewel: ceph_volume_client purge failing on non-ascii filenames
- 01:24 PM Bug #16160: PJD failures on Jewel
- Cool, I'll close this when our nightlies are passing again.
- 01:21 PM Bug #16160: PJD failures on Jewel
- VFS maintainer proposed a fix. The issue should be fixed in 4.7-rc3 kernel
- 01:53 AM Bug #16160: PJD failures on Jewel
- Sorry, I meant 4.7-rc1; it's the newest rc kernel.
06/06/2016
- 08:52 PM Bug #16160: PJD failures on Jewel
- The failing test file is pretty short; the important bit is...
- 12:32 PM Bug #16160: PJD failures on Jewel
- This issue happens only on 3.7.0-rc1 kernel
- 11:41 AM Bug #16160 (Resolved): PJD failures on Jewel
- http://qa-proxy.ceph.com/teuthology/jspray-2016-06-03_06:24:38-fs:basic-jewel---basic-mira/232657/teuthology.log
htt... - 06:49 PM Bug #16164 (In Progress): mds: enforce a dirfrag limit on entries
- 04:36 PM Bug #16164: mds: enforce a dirfrag limit on entries
- I'm taking a look at this one.
- 01:49 PM Bug #16164 (Resolved): mds: enforce a dirfrag limit on entries
- - add a new config option to cap the number of entries in a dirfrag
- set the limit an order of magnitude higher than... - 11:45 AM Feature #15417 (Resolved): Make path prefix ("/volumes") in CephFSVolumeClient configurable
- 11:44 AM Backport #15854 (Resolved): jewel: Make path prefix ("/volumes") in CephFSVolumeClient configurable
- 11:42 AM Feature #15599 (Resolved): quota: Generate client df from quota, when using subdirectory mount
- 11:38 AM Backport #16065 (Resolved): jewel: quota: Generate client df from quota, when using subdirectory ...
- 11:38 AM Backport #16065: jewel: quota: Generate client df from quota, when using subdirectory mount
- h3. original description
This is an enabler for Manila, creating this ticket to track backport to Jewel after land... - 09:58 AM Bug #16066: client: FAILED assert(root_ancestor->qtree == __null)
- The quota code expects that a directory is always connected to the FS hierarchy. But this is not true; we can create dis...
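A toy illustration of the assumption being described, using a hypothetical parent-map rather than the client's actual data structures: the quota code walks parent links expecting every chain to terminate at the root, and a disconnected inode violates that.

```python
# parents maps inode -> parent inode; the true root has no entry.
# A "disconnected" inode is one whose parent chain ends somewhere
# other than the root, which is what trips the assert above.
def root_ancestor_sketch(inode, parents):
    while inode in parents:
        inode = parents[inode]
    return inode
```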
- 08:55 AM Bug #16137 (Fix Under Review): client: crash in unmount when fuse_use_invalidate_cb is enabled
- https://github.com/ceph/ceph/pull/9509
06/03/2016
- 05:44 PM Backport #16135: jewel: MDS: fix getattr starve setattr
- h3. original description
To backport this fix (https://github.com/ceph/ceph/pull/8965) to Jewel
- 10:44 AM Backport #16135 (Resolved): jewel: MDS: fix getattr starve setattr
- https://github.com/ceph/ceph/pull/9560
- 05:38 PM Bug #16154 (Resolved): mds: lock waiters are not finished in the same order that they were added
- https://github.com/ceph/ceph/pull/8965
- 04:54 PM Backport #16152 (Resolved): jewel: fs: client: fstat cap release
- https://github.com/ceph/ceph/pull/9562
- 02:55 PM Cleanup #16144 (Resolved): Remove cephfs-data-scan tmap_upgrade
- This was something that was only for jewel (the last release to have tmaps in rados).
We can remove it for Kr...
- 12:57 PM Bug #9904 (Resolved): Don't crash MDS on clients sending messages with bad seq
- https://github.com/ceph/ceph/pull/9214
- 12:56 PM Bug #8255 (New): mds: directory with missing object cannot be removed
- 12:56 PM Feature #11950: Strays enqueued for purge cause MDCache to exceed size limit
- I'm reverting state to verified because although we've merged the patch for this, it still needs more attention to ha...
- 12:08 PM Bug #15045 (Resolved): CephFSVolumeClient.evict should be limited by path, not just auth ID
- 12:07 PM Backport #15855 (Resolved): jewel: CephFSVolumeClient.evict should be limited by path, not just a...
- 11:32 AM Bug #16137 (Resolved): client: crash in unmount when fuse_use_invalidate_cb is enabled
- Seen in testing of https://github.com/ceph/ceph/pull/8890...
- 10:51 AM Bug #15723 (Pending Backport): client: fstat cap release
- 10:46 AM Backport #16136 (Resolved): jewel: MDSMonitor fixes
Backported as described: https://github.com/ceph/ceph/pull/9561
06/02/2016
- 10:40 AM Cleanup #15922 (Fix Under Review): MDS: remove TMAP support from CDir
- https://github.com/ceph/ceph/pull/9443
06/01/2016
- 10:15 PM Backport #16065 (In Progress): jewel: quota: Generate client df from quota, when using subdirecto...
- https://github.com/ceph/ceph/pull/9430
- 10:15 PM Backport #15971 (In Progress): jewel: ceph_volume_client purge failing on non-ascii filenames
- https://github.com/ceph/ceph/pull/9430
- 04:57 PM Bug #90: mds: don't sync log on every clientreplay request
- Yeah. So, I think there was a reason for this. We definitely need to flush out any clientreplay requests before exiti...
- 08:26 AM Bug #90: mds: don't sync log on every clientreplay request
The code that flushes the journal: https://github.com/ceph/ceph/blob/8ce337e3552004fc4853c0c94f33235da4caa5df/src/mds/Ser...
- 02:39 AM Feature #3575: ceph-fuse: Add support for forget_multi
- the kernel sends the forget_multi request to libfuse even if we don't provide the forget_multi callback.
05/31/2016
- 09:30 PM Bug #90: mds: don't sync log on every clientreplay request
- It might be fixed; we should check. But once upon a time (I think maybe still now?), when in clientreplay we set the...
- 11:04 AM Bug #90: mds: don't sync log on every clientreplay request
- Is this definitely still an issue? I'm not seeing a particular place in the code where we do something different wit...
- 09:28 PM Feature #83: mds: rename over old files should flush data or revert to old contents?
- I think it does still exist?
We have an existing foo.conf, inode x
Write to foo.conf.tmp, inode y
rename foo.conf....
- 10:59 AM Feature #83: mds: rename over old files should flush data or revert to old contents?
- The original ticket is describing a real bug (that no longer exists), right? It's not clear to me that there's still...
- 08:01 PM Feature #3575: ceph-fuse: Add support for forget_multi
- Hmm, doesn't the syscall overhead make enough of a difference to be noticeable in some of our bigger invalidate ops?
- 11:43 AM Feature #15067 (Fix Under Review): mon: client: multifs: enable clients to map a filesystem name ...
- https://github.com/ceph/ceph/pull/8386
https://github.com/ceph/ceph-qa-suite/pull/924
- 11:41 AM Feature #7326: qa: fix flock tests
- Me neither. Sage?
- 11:39 AM Feature #2097 (Rejected): mds: 'ceph mds activate <gid>'
- I think we don't.
- 11:39 AM Feature #9466: kclient: Extend CephFSTestCase tests to cover kclient
- I think most or all of the hooks are there, should be a case of taking suites/fs/recovery/tasks/client-limits.yaml an...
- 11:35 AM Feature #7764 (Resolved): InoTable/SessionMap/ manipulator (cephfs-table-tool)
- So cephfs-table-tool has resetting tables, and it has consuming inodes from InoTable.
There probably are more thin...
- 11:34 AM Feature #7762 (Rejected): journal-tool: backwards-search after corrupt regions
- Hmm, iirc this ticket was about doing something smarter than scanning forward for a sentinel, where we would potentia...
- 11:19 AM Feature #15400 (Resolved): CephFSVolumeClient should isolate volumes by RADOS namespace
- 11:18 AM Feature #7760 (Resolved): journal-tool: implement splice
- Yep!
- 11:18 AM Feature #7758 (Resolved): journal-tool: complete filtering
- Nope!
- 11:16 AM Feature #7318 (Duplicate): qa: ceph-fuse + sync mode
- Hmm, #4022 seems to be about globally disabling cacher (as an alternative way to test direct IO paths) whereas this o...
- 11:11 AM Backport #16065 (New): jewel: quota: Generate client df from quota, when using subdirectory mount
- Switching back to New because there does not seem to be a pull request for this yet.
- 09:51 AM Backport #16083 (In Progress): jewel: mds: wrongly treat symlink inode as normal file/dir when sy...
- 07:45 AM Backport #16083 (Resolved): jewel: mds: wrongly treat symlink inode as normal file/dir when symli...
- https://github.com/ceph/ceph/pull/9405
- 09:50 AM Backport #16082 (In Progress): hammer: mds: wrongly treat symlink inode as normal file/dir when s...
- 07:45 AM Backport #16082 (Resolved): hammer: mds: wrongly treat symlink inode as normal file/dir when syml...
- https://github.com/ceph/ceph/pull/9404
05/30/2016
- 12:28 PM Feature #3575: ceph-fuse: Add support for forget_multi
- If there is no forget_multi() callback, libfuse calls the forget() callback in a loop. I don't think implementing fo...
05/28/2016
- 10:46 PM Bug #16066: client: FAILED assert(root_ancestor->qtree == __null)
http://pulpito.ceph.com/jspray-2016-05-28_13:42:42-fs-wip-jcsp-testing-20160527b---basic-mira/220171
+other instan...
- 10:42 PM Bug #16066 (Resolved): client: FAILED assert(root_ancestor->qtree == __null)
- Seen on test branch where client quota was enabled by default (https://github.com/ceph/ceph/pull/9346) -- presumably ...
- 10:44 PM Bug #16067 (Resolved): client: InvalidWrite in put_qtree
- Like #16066, this was seen in a test branch with client quota enabled by default.
http://pulpito.ceph.com/jspray-2016-05...
- 07:32 AM Cleanup #15922 (In Progress): MDS: remove TMAP support from CDir
- 06:55 AM Backport #16065 (In Progress): jewel: quota: Generate client df from quota, when using subdirecto...
- 06:53 AM Backport #16065 (Resolved): jewel: quota: Generate client df from quota, when using subdirectory ...
- https://github.com/ceph/ceph/pull/9430
- 06:51 AM Feature #15599 (Pending Backport): quota: Generate client df from quota, when using subdirectory ...
05/27/2016
- 11:37 PM Feature #81 (Resolved): mds: do authentication checks
- Hey, we do that now!
- 11:36 PM Feature #3314: client: client interfaces should take a set of group ids
- Sounds like we'll need this for NFS, if nothing else requires it sooner.
- 11:35 PM Feature #3315 (Resolved): client: Add acl support
- Hey, we support POSIX ACLs now, and already have a ticket/code for RichACLs whenever that's done.
- 11:34 PM Feature #118: kclient: clean pages when throwing out dirty metadata on session teardown
- Zheng, do you have any idea what this is about?
- 11:21 PM Feature #83: mds: rename over old files should flush data or revert to old contents?
- I don't suppose POSIX says anything about this...
I think I vote for requiring a flush before renaming over existi...
- 11:17 PM Bug #90: mds: don't sync log on every clientreplay request
- This'll probably kill us in a cluster of any size...?
- 11:16 PM Feature #13999: client: richacl support
- 11:15 PM Feature #9880: mds: more gracefully handle EIO on missing dir object
- I think this'll be caught in John's damaged stuff now. Sage, did you have any thing specifically you wanted to see ge...
- 11:02 PM Feature #7318: qa: ceph-fuse + sync mode
- Is this different from #4022?
- 11:00 PM Feature #7322: qa: inline data + thrashing
- We enable inline data, but I'm not sure if it thrashes as well.
- 10:59 PM Feature #7333: client: evaluate multiple O_APPEND writers
- Do we have some known issues with how we handle EOF and synchronous IO?
- 10:57 PM Feature #7758: journal-tool: complete filtering
- Was there something in this that isn't done yet?
- 10:56 PM Feature #7760: journal-tool: implement splice
- This is done now, right?
- 10:56 PM Feature #7762: journal-tool: backwards-search after corrupt regions
- We already have commands to harvest the latest versions of dentries and things, and I think we can punch holes and sk...
- 10:55 PM Feature #7764: InoTable/SessionMap/ manipulator (cephfs-table-tool)
- John, do you think we're done with this based on the table reset commands, or is there more to usefully do?
- 10:50 PM Feature #9466: kclient: Extend CephFSTestCase tests to cover kclient
- What still needs to get done for this, John?
- 10:49 PM Feature #2097: mds: 'ceph mds activate <gid>'
- Do we really want to be able to do this?
- 10:49 PM Feature #1237 (Resolved): mds caps limit mount to some subdir
- We did this last summer! Check out the MDS cephx cap configuration docs.
- 10:37 PM Feature #8636 (Resolved): mds/libcephfs: read only mount
- I believe this is done: we can now restrict clients to read-only caps on the MDS and the OSD.
- 10:35 PM Feature #10393 (Rejected): client: remove mount prefix shenanigans for quota
- Rejecting because we're not sure about subvolumes yet. I think we handle the security stuff properly, but not sure ab...
- 10:34 PM Feature #10392 (Rejected): mds: refactor subvolume vs snaprealm, capture quota trees
- This is a possibility but not something we're sure we want to do.
- 10:00 PM Feature #12671: Enforce cache limit during dirfrag load during open_ino (during rejoin)
- The naive solution to this seems pretty bad as well. If we only load the needed dentries, in a serial fashion, we'll ...
- 09:36 PM Bug #15702 (Pending Backport): mds: wrongly treat symlink inode as normal file/dir when symlink i...
- 09:34 PM Feature #15617: CephFSVolumeClient: OSD blacklisting on deauthorize
- This is a feature, not a bug, right? ;)
- 09:09 PM Feature #603 (Resolved): mds: repair directory hierarchy
- Another fsck ticket that doesn't contribute past what we already have. :)
- 09:08 PM Feature #4145 (Resolved): MDS: design and implement a backwards-scanning fsck
- It looks a little different now, and we have other tickets to improve stuff, but cephfs-data-scan should qualify this...
- 09:07 PM Feature #86 (Resolved): mds: implement fsck
- Not complete, but we have recovery and repair tools. Let's call this ticket done.
http://docs.ceph.com/docs/master/c...
- 09:06 PM Feature #4799 (Resolved): Client Security for CephFS
- Done last year! http://docs.ceph.com/docs/master/cephfs/client-auth/
- 05:55 PM Bug #16024 (Fix Under Review): Client: reports that readahead is not working
- https://github.com/ceph/ceph/pull/9374
- 10:16 AM Bug #15921: segfault in cephfs-journal-tool (TestJournalRepair failure)
- And again:
http://pulpito.ceph.com/jspray-2016-05-26_17:59:25-fs-wip-jcsp-testing-20160527---basic-mira/217709
- 09:43 AM Support #16043: MDS is crashed
- Today all the MDSes in my cluster died.
05/26/2016
- 01:25 PM Bug #16022: MDSMonitor::check_subs() is very buggy
- Created reproducer in TestMultiFilesystems https://github.com/ceph/ceph-qa-suite/pull/1018
- 09:48 AM Bug #16022 (Pending Backport): MDSMonitor::check_subs() is very buggy
- 12:59 PM Support #16043 (Closed): MDS is crashed
- I updated ceph from hammer to jewel. After restarting the ceph daemons, 2 of the 5 MDSes did not start.
I executed commands on...
- 12:48 PM Bug #16042 (Resolved): MDS Deadlock on shutdown active rank while busy with metadata IO
- Report and log here: https://bugzilla.redhat.com/show_bug.cgi?id=1340004
Log looks like one of the MDLog threads...
- 12:02 PM Backport #16041 (Resolved): jewel: mds/StrayManager.cc: 520: FAILED assert(dnl->is_primary())
- https://github.com/ceph/ceph/pull/9559
- 12:01 PM Backport #16037 (Resolved): jewel: MDSMonitor::check_subs() is very buggy
- https://github.com/ceph/ceph/pull/10103
- 09:55 AM Cleanup #16035 (Resolved): Remove "cephfs" CLI
- IIRC we talked about this in standup the other day, and this pull request (https://github.com/ceph/ceph/pull/9338) re...
- 09:46 AM Bug #15920 (Pending Backport): mds/StrayManager.cc: 520: FAILED assert(dnl->is_primary())
05/25/2016
- 02:50 PM Bug #16024: Client: reports that readahead is not working
- <spoiler>
- 02:12 PM Bug #16024 (Resolved): Client: reports that readahead is not working
- We've had one or two previous reports that ceph-fuse is reading slowly and that readahead may not be working, but we ...
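A quick way to eyeball whether readahead is helping (a sketch, not the diagnostic used on this ticket; the file path defaults to /tmp here, and on a real cluster you would point `FILE` at a file on the ceph-fuse mount) is to compare small-block and large-block sequential read rates on the same file:

```shell
# Compare 4 KiB vs 4 MiB sequential reads of the same 64 MiB file.
# With effective readahead the small-block rate should approach the
# large-block rate; a large gap suggests readahead is not kicking in.
FILE="${FILE:-${TMPDIR:-/tmp}/readahead-test}"   # point at the ceph-fuse mount in practice
[ -f "$FILE" ] || dd if=/dev/zero of="$FILE" bs=1M count=64 2>/dev/null
dd if="$FILE" of=/dev/null bs=4k 2>&1 | tail -n 1
dd if="$FILE" of=/dev/null bs=4M 2>&1 | tail -n 1
```

For a meaningful cold-cache comparison the page cache should be invalidated between runs (e.g. by remounting, or `echo 3 > /proc/sys/vm/drop_caches` as root).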
- 02:32 PM Feature #3426 (New): ceph-fuse: build/run on os x
- Nobody owns it right now, and we never made it official.
- 10:31 AM Feature #3426: ceph-fuse: build/run on os x
- Cleaning up "In progress" tickets: should this be "resolved" or go back to new? I know various people have worked on...
- 02:30 PM Feature #4022: client: qa: test non-cached operation (force sync mode)
- I don't think we have anything right now, no.
- 10:31 AM Feature #4022 (New): client: qa: test non-cached operation (force sync mode)
- Greg: did we end up with a test that corresponds with this ticket or should it still be open?
- 12:30 PM Bug #16022 (Resolved): MDSMonitor::check_subs() is very buggy
- https://github.com/ceph/ceph/pull/9323
- 11:19 AM Feature #16016 (Resolved): Populate DamageTable from forward scrub
- We detect metadata damage two ways: when we try to read things off disk during normal operation, and when we try to v...
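Once entries land in the DamageTable they can be inspected through the MDS admin interface; a minimal sketch, assuming rank 0 and a release that ships the `damage` admin commands:

```shell
# List the damage entries the MDS has recorded (rank 0 assumed).
ceph tell mds.0 damage ls
# Once a given entry has been repaired it can be cleared by id:
# ceph tell mds.0 damage rm <id>
```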
- 11:03 AM Bug #11255 (Resolved): nfs: mount failures on ceph-backed NFS share
- The linked PR merged, and this is no longer appearing in master.
- 10:32 AM Bug #15900 (Resolved): TestSessionMap.test_mount_conn_close failure: AssertionError: 1 not greate...
- 10:30 AM Bug #7422 (Resolved): client/barrier.h uses boost's interval set library, which is not available ...
- 10:29 AM Bug #9904 (Fix Under Review): Don't crash MDS on clients sending messages with bad seq
- 10:29 AM Feature #11859 (Resolved): MDS "damage table" for recording scrub/fetch errors
- DamageTable went in for Jewel
- 10:28 AM Bug #14195 (Won't Fix): test_full_fclose fails (tasks.cephfs.test_full.TestClusterFull)
- This was only a test race, and it only happens on pathologically slow clusters.
- 10:27 AM Bug #15920 (Fix Under Review): mds/StrayManager.cc: 520: FAILED assert(dnl->is_primary())
- 08:39 AM Bug #16013: Failing file operations on kernel based cephfs mount point leaves unaccessible file b...
- Thanks for the fast reply.
I had to restart the MDS server due to pending locks (which also blocked the ceph-fuse ...
- 08:07 AM Bug #16013: Failing file operations on kernel based cephfs mount point leaves unaccessible file b...
- The warning is caused by bad inode size for symlink, commit https://github.com/ceph/ceph-client/commit/8e876900de5196...
- 08:00 AM Bug #16013: Failing file operations on kernel based cephfs mount point leaves unaccessible file b...
- another addition:
- mounted cephfs using ceph-fuse
- directory is listed correctly:
# ls -al bin
total 1
drwxr... - 07:39 AM Bug #16013: Failing file operations on kernel based cephfs mount point leaves unaccessible file b...
- Additional information:
- ceph cluster reported client failing to release capabilities prior to the current state
...
- 07:28 AM Bug #16013 (Resolved): Failing file operations on kernel based cephfs mount point leaves unaccess...
- After some number of operations on files (which could not be traced for reproduction), we end up with a broken direct...