Activity
From 10/21/2016 to 11/19/2016
11/19/2016
- 05:19 PM Bug #16881 (Can't reproduce): RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
- This hasn't happened again afaik
- 05:18 PM Feature #17853 (Fix Under Review): More deterministic timing for directory fragmentation
- https://github.com/ceph/ceph/pull/12022
- 04:50 AM Bug #17858: Cannot create deep directories when caps contain "path=/somepath"
- Dan, give this a try: https://github.com/ceph/ceph/tree/wip-17858-jewel
11/18/2016
- 09:14 PM Bug #17954 (Fix Under Review): standby-replay daemons can sometimes miss events
- https://github.com/ceph/ceph/pull/12077
- 01:55 PM Bug #17954 (Resolved): standby-replay daemons can sometimes miss events
The symptom is that a standby replay daemon gives log messages like "waiting for subtree_map. (skipping " at times...
- 06:35 PM Feature #17604 (Fix Under Review): MDSMonitor: raise health warning when there are no standbys bu...
- PR: https://github.com/ceph/ceph/pull/12074
- 03:52 PM Bug #17858: Cannot create deep directories when caps contain "path=/somepath"
- Sure, I just need to make some adjustments to the patch first. I'll comment here again when I've pushed the branch.
- 02:41 PM Bug #17858: Cannot create deep directories when caps contain "path=/somepath"
- Err wip-17858-jewel ... you know what I meant ;)
- 02:15 PM Bug #17858: Cannot create deep directories when caps contain "path=/somepath"
- Thanks Patrick. I'm the user in #17893. If it's not too much trouble, could you push a wip-17893-jewel branch so we c...
- 02:27 AM Bug #17858 (Fix Under Review): Cannot create deep directories when caps contain "path=/somepath"
- PR: https://github.com/ceph/ceph/pull/12063
- 02:53 PM Backport #17956 (Resolved): jewel: Clients without pool-changing caps shouldn't be allowed to cha...
- https://github.com/ceph/ceph/pull/12155
- 02:53 PM Bug #17939: non-local cephfs quota changes not visible until some IO is done
- I can't immediately remember how synchronous reads on the quota xattrs are meant to be, but they should probably be d...
- 01:25 PM Bug #17939: non-local cephfs quota changes not visible until some IO is done
- Thanks, indeed that works! I tried *touch /cephfs/foo* and *touch /cephfs2/foo*; in both cases the "wrong" quota was ...
- 01:10 PM Bug #17939 (Need More Info): non-local cephfs quota changes not visible until some IO is done
- I don't think you need a remount (of '/cephfs' in your reproducer) for the new share quota limits to be reflected (fo...
- 02:00 PM Bug #17798 (Pending Backport): Clients without pool-changing caps shouldn't be allowed to change ...
- https://github.com/ceph/ceph/pull/11789
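The check behind #17798 can be sketched as follows. This is an illustrative model only, not the actual MDS code: `mds_caps` is simplified to a flat flag string, and `check_layout_setxattr` is a made-up name.

```python
import errno

# Layout fields whose modification should require the 'p' flag in the
# client's MDS caps; #17798 is about pool_namespace slipping through.
POOL_CHANGING_FIELDS = {"pool", "pool_namespace"}

def check_layout_setxattr(mds_caps: str, field: str) -> int:
    """Return 0 if the client may change this layout field, else -EACCES.

    mds_caps is a simplified flag string (e.g. "rwp"); the real MDS
    parses structured cap grants rather than a flat string.
    """
    if field in POOL_CHANGING_FIELDS and "p" not in mds_caps:
        return -errno.EACCES
    return 0
```

The point of the fix is that pool_namespace sits in the same "pool-changing" bucket as pool itself, so a client without 'p' gets EACCES for both.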
- 02:37 AM Bug #17893: Intermittent permission denied using kernel client with mds path cap
- I agree with Zheng but I can't be 100% sure this is the same issue because the pasted log doesn't have the "SessionMa...
- 02:31 AM Bug #17893 (Duplicate): Intermittent permission denied using kernel client with mds path cap
11/17/2016
- 04:31 PM Bug #17937: file deletion permitted from pool Y from client mount with pool=X capabilities
- John Spray wrote:
> > MDS path restrictions are not used, so perhaps pool/namespace restrictions are not supported w...
- 04:16 PM Bug #17937 (Won't Fix): file deletion permitted from pool Y from client mount with pool=X capabil...
- > MDS path restrictions are not used, so perhaps pool/namespace restrictions are not supported without a correspondin...
- 02:02 PM Bug #17937 (Won't Fix): file deletion permitted from pool Y from client mount with pool=X capabil...
- Not sure whether this is a bug per se, but IMO certainly falls into the strange behaviour basket.
If I have a fil...
- 03:18 PM Bug #17939: non-local cephfs quota changes not visible until some IO is done
- That commit seems to test client_quota_df. That works, I agree. This issue is about extending a share.
extend_shar...
- 02:27 PM Bug #17939: non-local cephfs quota changes not visible until some IO is done
- Hmmm. I remember testing manila's extend share API with cephfs's native driver, where the ceph_volume_client modifies...
- 02:13 PM Bug #17939 (Resolved): non-local cephfs quota changes not visible until some IO is done
- If we change the ceph.quota.max_bytes attribute on a cephfs mount, that quota is not applied until cephfs is remounte...
- 01:31 PM Bug #16886: multimds: kclient hang (?) in tests
- fix one bug https://github.com/ceph/ceph-client/commit/2a3d8aad521306c6537c67c518ea7c4023c74f12
If you see "fail...
- 10:29 AM Bug #17837 (In Progress): ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- Thanks, can now reproduce here. ...
- 06:27 AM Bug #17837: ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- Here is a dump of the local mdsmap.
- 08:34 AM Bug #17921: CephFS snapshot removal fails with "Stale file handle" error
- please set debug_client=20 and try again
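For reference, one way to apply the suggested client debug level is via ceph.conf on the client host before remounting; the log path below is a typical default, adjust for your deployment.

```ini
; Client-side logging for ceph-fuse/libcephfs; restart/remount the
; client after changing this so it takes effect.
[client]
    debug client = 20
    log file = /var/log/ceph/client.$name.$pid.log
```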
11/16/2016
- 03:38 PM Bug #17906 (Resolved): mds: dumped ops do not include events and other information
- Good point!
- 03:14 PM Bug #17906: mds: dumped ops do not include events and other information
- John, I don't think the bug exists in jewel. The culprit commit is not merged there.
- 12:03 PM Bug #17906 (Pending Backport): mds: dumped ops do not include events and other information
- 02:44 PM Bug #17858: Cannot create deep directories when caps contain "path=/somepath"
- Originally I could only reproduce with the kernel client. Now I can reproduce it also with ceph-fuse.
- 02:34 PM Bug #17921 (Rejected): CephFS snapshot removal fails with "Stale file handle" error
- Removing a CephFS snapshot fails with a "stale file handle" error when tested in a loop, but there will not be any sn...
- 12:03 PM Bug #17747 (Resolved): ceph-mds: remove "--journal-check" help text
- 12:03 PM Bug #17797 (Resolved): rmxattr on ceph.[dir|file].layout.pool_namespace doesn't work
- 12:02 PM Bug #17308 (Resolved): MDSMonitor should tolerate paxos delays without failing daemons (Was: Unex...
- 11:48 AM Bug #17837 (Need More Info): ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- Hmm, so when I try loading up the mdsmap.bin from http://tracker.ceph.com/issues/16592#change-81117 it is decoding fi...
11/15/2016
- 07:47 PM Bug #17894: Filesystem removals intermittently failing in qa-suite
- I'll do a run of fs:multifs to see if the bug looks resolved.
- 07:46 PM Bug #17894 (Fix Under Review): Filesystem removals intermittently failing in qa-suite
- 07:45 PM Bug #17894: Filesystem removals intermittently failing in qa-suite
- PR: https://github.com/ceph/ceph-qa-suite/pull/1262
- 02:24 PM Bug #17894: Filesystem removals intermittently failing in qa-suite
- I think your analysis is correct John. I'll write up a fix for that.
- 02:20 PM Bug #17894: Filesystem removals intermittently failing in qa-suite
- I'll look at this.
- 02:14 PM Bug #17894: Filesystem removals intermittently failing in qa-suite
- Hmm, too similar to be a coincidence?
http://qa-proxy.ceph.com/teuthology/jspray-2016-11-15_13:27:33-fs-wip-jcsp-t...
- 02:11 PM Bug #17914 (New): ObjectCacher doesn't handle snapshot+truncate properly
When truncating a file, we should not drop dirty data that was created before a snapshot. But it seems ObjectCacher:...
- 01:59 PM Bug #17193 (Fix Under Review): truncate can cause unflushed snapshot data lose
- https://github.com/ceph/ceph/pull/11994
- 12:02 PM Bug #17911 (Resolved): ensure that we vet the ceph_statx flags masks in libcephfs API
- Currently, we allow callers to set flags that are not properly defined. This could be bad if we want to add new flags...
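The kind of vetting #17911 asks for can be sketched like this; the flag names and values below are made up for illustration and are not the real ceph_statx constants.

```python
import errno

# Illustrative "defined flags" mask; the real libcephfs would use its
# actual CEPH_STATX_* definitions here.
CEPH_STATX_MODE = 0x0001
CEPH_STATX_UID  = 0x0002
CEPH_STATX_SIZE = 0x0004
CEPH_STATX_ALL  = CEPH_STATX_MODE | CEPH_STATX_UID | CEPH_STATX_SIZE

def vet_statx_want(want: int) -> int:
    """Reject masks with undefined bits set, so callers can't pass
    garbage that a future flag definition would silently reinterpret."""
    if want & ~CEPH_STATX_ALL:
        return -errno.EINVAL
    return 0
```

Rejecting unknown bits up front is what keeps the door open for adding new flags later without changing the behavior of existing (correct) callers.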
- 08:34 AM Bug #17858: Cannot create deep directories when caps contain "path=/somepath"
- Sorry for the mistype, I am ON ceph-fuse.
- 08:34 AM Bug #17858: Cannot create deep directories when caps contain "path=/somepath"
- I am not ceph-fuse
- 04:20 AM Bug #17906 (Fix Under Review): mds: dumped ops do not include events and other information
- PR: https://github.com/ceph/ceph/pull/11985
- 04:19 AM Bug #17906 (Resolved): mds: dumped ops do not include events and other information
- ...
11/14/2016
- 05:25 PM Bug #17847: "Fuse mount failed to populate /sys/ after 31 seconds" in jewel 10.2.4
- John - it's reliably reproducible :(
Just re-ran again http://pulpito.front.sepia.ceph.com/yuriw-2016-11-14_16:55:4...
- 04:46 PM Bug #17847: "Fuse mount failed to populate /sys/ after 31 seconds" in jewel 10.2.4
- Yuri: probably not related unless there's something I'm missing? Zheng was pointing out that the clock in the log se...
- 04:43 PM Bug #17847: "Fuse mount failed to populate /sys/ after 31 seconds" in jewel 10.2.4
- related to https://github.com/ceph/ceph-qa-suite/pull/1256 ?
- 02:41 PM Bug #17847 (Rejected): "Fuse mount failed to populate /sys/ after 31 seconds" in jewel 10.2.4
- Looks like bad clocks in the test environment
- 05:09 PM Bug #17837 (In Progress): ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- 05:08 PM Bug #17837: ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- Note to self, dumps are on http://tracker.ceph.com/issues/16592
- 03:00 PM Bug #17858: Cannot create deep directories when caps contain "path=/somepath"
- The fix for #16358 is not complete. It only handles the case where the inode is newly created, but does not handle the cas...
- 02:41 PM Bug #17858: Cannot create deep directories when caps contain "path=/somepath"
- Same as http://tracker.ceph.com/issues/17893 ?
Henrik: are you on fuse or kernel client?
- 02:57 PM Bug #17893: Intermittent permission denied using kernel client with mds path cap
- dup of #17858
- 01:30 PM Bug #17893 (Duplicate): Intermittent permission denied using kernel client with mds path cap
- See this mailing list thread:
http://www.spinics.net/lists/ceph-users/msg32314.html
It is reported that when usin...
- 02:30 PM Bug #17894 (Resolved): Filesystem removals intermittently failing in qa-suite
http://pulpito.ceph.com/teuthology-2016-11-12_17:15:01-fs-master---basic-smithi/543466/
I suspect this is a bug ...
- 02:05 PM Bug #17800: ceph_volume_client.py : Error: Can't handle arrays of non-strings
- Thomas: what was the series of operations running up to this? To hit this code path it seems like the user you were ...
- 11:42 AM Bug #17563 (In Progress): extremely slow ceph_fsync calls
- PR for the userland code has been merged, and the kernel patches are in-progress.
11/13/2016
- 12:34 PM Backport #17885 (In Progress): jewel: "[ FAILED ] LibCephFS.InterProcessLocking" in jewel v10.2.4
- 12:25 PM Backport #17885 (Resolved): jewel: "[ FAILED ] LibCephFS.InterProcessLocking" in jewel v10.2.4
- https://github.com/ceph/ceph/pull/11953
- 12:24 PM Bug #17832 (Pending Backport): "[ FAILED ] LibCephFS.InterProcessLocking" in jewel v10.2.4
- *master PR*: https://github.com/ceph/ceph/pull/11211
This PR just disables the tests. Work on a real fix is being ...
11/12/2016
- 12:23 PM Backport #13927 (In Progress): hammer: cephfs-java ftruncate unit test failure
- 12:22 PM Bug #11258: cephfs-java ftruncate unit test failure
- *master PR*: https://github.com/ceph/ceph/pull/4215
11/11/2016
- 02:21 PM Bug #17800 (Fix Under Review): ceph_volume_client.py : Error: Can't handle arrays of non-strings
- PR for fix: https://github.com/ceph/ceph/pull/11917
- 11:13 AM Bug #17800 (In Progress): ceph_volume_client.py : Error: Can't handle arrays of non-strings
- 11:24 AM Bug #17747 (Fix Under Review): ceph-mds: remove "--journal-check" help text
- Oops, forgot to clean up the manpage as well. Follow-up PR: https://github.com/ceph/ceph/pull/11912
- 11:19 AM Bug #17747 (Resolved): ceph-mds: remove "--journal-check" help text
- 11:05 AM Feature #17853 (In Progress): More deterministic timing for directory fragmentation
- 06:46 AM Bug #17837: ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- Yes, I've stopped my upgrade and my cluster is working now with two mon servers.
Perhaps it is helpful: I have a test clu...
- 06:46 AM Bug #16592: Jewel: monitor asserts on "mon/MDSMonitor.cc: 2796: FAILED assert(info.state == MDSMa...
- I've created the ticket #17837.
Here is the output from "ceph mds dump --format=json-pretty"...
11/10/2016
- 03:38 PM Bug #17858 (Resolved): Cannot create deep directories when caps contain "path=/somepath"
- A ceph-fuse client with "path=/something" caps cannot create multiple dirs with mkdir (e.g. mkdir -p 1/2/3/4/5/6/7/8/...
- 02:27 PM Bug #17620: Data Integrity Issue with kernel client vs fuse client
- The jobs failed by incorrectly writing the object to disk. I can recreate this pretty easily by having two clients do...
- 02:13 PM Bug #17620 (Need More Info): Data Integrity Issue with kernel client vs fuse client
- 01:49 PM Bug #17193 (In Progress): truncate can cause unflushed snapshot data lose
- 01:18 PM Bug #17847: "Fuse mount failed to populate /sys/ after 31 seconds" in jewel 10.2.4
- ...
- 01:16 PM Feature #17856 (Resolved): qa: background cephfs forward scrub teuthology task
- Create a teuthology task that can be run in the background of a workunit, which simply starts a forward scrub on the ...
- 12:03 PM Bug #12895 (Can't reproduce): Failure in TestClusterFull.test_barrier
- 12:02 PM Bug #2277: qa: flock test broken
- For reference: locktest.py still exists in ceph-qa-suite, with a single use from suites/marginal/fs-misc/tasks/lockte...
- 11:51 AM Feature #17855 (Resolved): Don't evict a slow client if it's the only client
There is nothing gained by evicting a client session if there are no other clients who might be held up by it.
T...
- 11:50 AM Feature #17854 (Resolved): mds: only evict an unresponsive client when another client wants its caps
Instead of immediately evicting a client when it has not responded within the timeout, set a flag to mark the clien...
- 10:45 AM Feature #17853 (Resolved): More deterministic timing for directory fragmentation
This ticket is to track the work to replace the tick()-based consumption of the split queue with some timers, and t...
- 10:26 AM Bug #17837 (Need More Info): ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- Alexander: so hopefully you stopped the upgrade at that point and you still have a working cluster of two hammer mons...
- 03:49 AM Bug #17837 (Duplicate): ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- This looks like a duplicate of #16592 but in a new code path: interestingly, in a slave monitor.
11/09/2016
- 11:50 PM Bug #4829: client: handling part of MClientForward incorrectly?
- Targeting for Luminous to investigate and either fix or close this.
- 11:50 PM Bug #3267 (Closed): Multiple active MDSes stall when listing freshly created files
- This ticket is old and the use case seems like something we will pick up on from the multimds suite if it's still bro...
- 11:48 PM Feature #3426 (Closed): ceph-fuse: build/run on os x
- 11:47 PM Feature #3426: ceph-fuse: build/run on os x
- I don't think it's likely we will ever "finish" this in the sense of maintaining/testing functionality on OSX, so I'm...
- 11:45 PM Cleanup #660 (Closed): mds: use helpers in mknod, mkdir, openc paths
- 11:31 PM Bug #1511 (Closed): fsstress failure with 3 active mds
- We have fresh tickets for multimds failures as they are reproduced now.
- 11:24 PM Feature #17852 (Resolved): mds: when starting forward scrub, return handle or stamp/version which...
- Enable caller to kick off a scrub and later check completion (may just mean caller has to compare scrub stamps with a...
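The handle/stamp idea in #17852 can be sketched as a tag the caller keeps and later polls. The names here are hypothetical, not the eventual MDS interface, and completion is done inline rather than asynchronously.

```python
import itertools

class ScrubTracker:
    """Toy model of 'start a scrub, get back a handle, poll completion'."""

    def __init__(self):
        self._ids = itertools.count(1)
        self._completed = set()

    def start_scrub(self, path: str) -> int:
        handle = next(self._ids)  # returned to the caller up front
        # A real implementation would enqueue the forward scrub here and
        # mark the handle complete asynchronously; we complete it inline.
        self._completed.add(handle)
        return handle

    def is_complete(self, handle: int) -> bool:
        return handle in self._completed
```

Returning a handle immediately is what lets a teuthology-style caller kick off a scrub, go run a workload, and check back later.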
- 11:21 PM Feature #12274: mds: start forward scrubs from all subtree roots, skip non-auth metadata
- Using this ticket to track the task of implementing cross-MDS forward scrub (i.e. handing off at subtree bounds)
- 11:12 PM Feature #11950: Strays enqueued for purge cause MDCache to exceed size limit
- Targeting for Luminous and assigning to me: we will use a single Journaler() instance per MDS to track a persistent p...
- 11:00 PM Feature #17770: qa: test kernel client against "full" pools/filesystems
- This is largely http://tracker.ceph.com/issues/17204 , but I'm leaving this ticket here as a convenient way of tracki...
- 05:36 PM Bug #17847: "Fuse mount failed to populate /sys/ after 31 seconds" in jewel 10.2.4
- For those not familiar with the test, can you explain what versions are in play here, and what version of the client/...
- 04:39 PM Bug #17847 (New): "Fuse mount failed to populate /sys/ after 31 seconds" in jewel 10.2.4
- Run: http://pulpito.front.sepia.ceph.com/yuriw-2016-11-09_15:03:11-upgrade:hammer-x-jewel-distro-basic-vps/
Job: 534... - 03:10 PM Backport #17841 (In Progress): jewel: mds fails to respawn if executable has changed
- 02:47 PM Backport #17841 (Resolved): jewel: mds fails to respawn if executable has changed
- https://github.com/ceph/ceph/pull/11873
- 02:15 PM Backport #17582 (In Progress): jewel: monitor assertion failure when deactivating mds in (invalid...
- 02:14 PM Backport #17615 (In Progress): jewel: mds: false "failing to respond to cache pressure" warning
- 02:12 PM Backport #17617 (In Progress): jewel: [cephfs] fuse client crash when adding a new osd
- 02:05 PM Bug #17837 (Resolved): ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- I've a cluster of three nodes:...
- 01:59 PM Feature #17836 (New): qa: run multi-MDS tests with cache smaller than workload
- Maybe work from my wip-smallcache branch https://github.com/ceph/ceph-qa-suite/tree/wip-smallcache
The goal is to ... - 01:48 PM Bug #16926 (Rejected): multimds: kclient fails to mount
- Looks like this was initially failing when running against an old kernel, and was okay on a recent kernel, so closing.
- 01:43 PM Feature #17835 (Fix Under Review): mds: enable killpoint tests for MDS-MDS subtree export
- ...
- 01:00 PM Backport #17697 (In Progress): jewel: MDS long-time blocked ops. ceph-fuse locks up with getattr ...
- 12:58 PM Backport #17706 (In Progress): jewel: multimds: mds entering up:replay and processing down mds ab...
- 12:57 PM Backport #17720 (In Progress): jewel: MDS: false "failing to respond to cache pressure" warning
- 11:17 AM Feature #17834 (Resolved): MDS Balancer overrides
(Discussion from November 2016 meeting https://docs.google.com/a/redhat.com/document/d/11-O8uHWmOCqyc2_xGIukL0myjnq...
- 12:55 AM Bug #17832 (Resolved): "[ FAILED ] LibCephFS.InterProcessLocking" in jewel v10.2.4
- Run: http://pulpito.front.sepia.ceph.com/yuriw-2016-11-08_16:23:29-fs-jewel-distro-basic-smithi/
Jobs: 532365, 53236...
11/08/2016
- 10:50 PM Bug #16640: libcephfs: Java bindings failing to load on CentOS
- And https://github.com/ceph/ceph-qa-suite/pull/1224
- 09:25 PM Bug #17828 (Need More Info): libceph setxattr returns 0 without setting the attr
- Using the jewel libcephfs python bindings I ran the following code snippet:...
- 04:40 PM Bug #17193: truncate can cause unflushed snapshot data lose
- This appears to be intermittent: after ~10 runs without a failure, it's back.
http://qa-proxy.ceph.com/teuthology/... - 02:06 PM Bug #17819: MDS crashed while performing snapshot creation and deletion in a loop
- 2016-11-08T06:04:23.498 INFO:tasks.ceph.mds.a.host.stdout:starting mds.a at :/0
2016-11-08T06:04:23.498 INFO:tasks.ce...
- 01:22 PM Bug #17819: MDS crashed while performing snapshot creation and deletion in a loop
- The log ought to display exactly what assert in check_cache failed -- can you post that too?
- 11:17 AM Bug #17819 (Can't reproduce): MDS crashed while performing snapshot creation and deletion in a loop
- --- begin dump of recent events ---
-1> 2016-11-08 11:04:27.309191 7f4a57c38700 1 -- 10.8.128.73:6812/8588 <== ...
- 01:41 PM Bug #17522 (Resolved): ceph_readdirplus_r does not acquire caps before sending back attributes
- Fixed as of commit f7028e48936cb70f623fe7ba408708a403e60270. This requires moving to libcephfs2, however.
- 01:37 PM Feature #16419 (Resolved): add statx-like interface to libcephfs
- This should now be done for the most part. New ceph_statx interfaces have been merged into kraken. This change is not...
- 01:36 PM Feature #3314 (Resolved): client: client interfaces should take a set of group ids
- Now merged as of commit c078dc0daa9c50621f7252559a97bfb191244ca1. libcephfs has been revved to version 2, which is no...
11/07/2016
- 02:29 PM Bug #17799: cephfs-data-scan: doesn't know how to handle files with pool_namespace layouts
- The backtrace objects are always in the default namespace. I think the data scan tool can't calculate the correct size for files ...
- 02:25 PM Bug #17801 (Fix Under Review): Cleanly reject "session evict" command when in replay
- https://github.com/ceph/ceph/pull/11813
- 11:43 AM Bug #17193 (Resolved): truncate can cause unflushed snapshot data lose
- This is no longer failing when running against the testing kernel.
11/04/2016
- 02:32 PM Bug #17531 (Pending Backport): mds fails to respawn if executable has changed
- 02:13 PM Bug #17801 (Resolved): Cleanly reject "session evict" command when in replay
Currently we crash like this (from ceph-users):...
- 02:07 PM Fix #15134 (Fix Under Review): multifs: test case exercising mds_thrash for multiple filesystems
- PR adding support to mds_thrash.py: https://github.com/ceph/ceph-qa-suite/pull/1175
Need to check if we have a tes...
- 02:06 PM Feature #10792 (In Progress): qa: enable thrasher for MDS cluster size (vary max_mds)
- Pre-requisite PR: https://github.com/ceph/ceph-qa-suite/pull/1175
multimds testing with the thrasher will be added...
- 01:41 PM Bug #17800 (Resolved): ceph_volume_client.py : Error: Can't handle arrays of non-strings
- When using Ceph (python-cephfs-10.2.3+git.1475228057.755cf99 , SLE12SP2) together with OpenStack Manila and trying to...
- 01:18 PM Bug #17799 (New): cephfs-data-scan: doesn't know how to handle files with pool_namespace layouts
Not actually sure how we currently behave.
Do we see the data objects and inject files with incorrect layouts? ...
- 01:06 PM Bug #17798 (Resolved): Clients without pool-changing caps shouldn't be allowed to change pool_nam...
The purpose of the 'p' flag in MDS client auth caps is to enable creating clients that cannot set the pool part of ...
- 01:01 PM Bug #17797 (Fix Under Review): rmxattr on ceph.[dir|file].layout.pool_namespace doesn't work
- https://github.com/ceph/ceph/pull/11783
- 11:05 AM Bug #17797 (Resolved): rmxattr on ceph.[dir|file].layout.pool_namespace doesn't work
Currently it's obvious how to set the namespace but much less so how to clear it (i.e. revert to default namespace)...
11/02/2016
- 08:22 AM Bug #17747 (Fix Under Review): ceph-mds: remove "--journal-check" help text
- *master PR*: https://github.com/ceph/ceph/pull/11739
11/01/2016
- 05:16 PM Bug #17563: extremely slow ceph_fsync calls
- PR to fix the userland side of things is here:
https://github.com/ceph/ceph/pull/11710
- 01:10 PM Bug #17563: extremely slow ceph_fsync calls
- It looks like ceph-fuse has the same problem with fsync. Here's a POSIX API reproducer that shows similar improvement...
- 03:53 PM Bug #17115 (Resolved): kernel panic when running IO with cephfs and resource pool becomes full
- http://tracker.ceph.com/issues/17770
- 03:53 PM Feature #17770 (New): qa: test kernel client against "full" pools/filesystems
- We test the uclient against full pools to validate behavior. We discovered in #17115 that we don't for the kernel cli...
- 03:49 PM Bug #17240 (Closed): inode_permission error with kclient when running client IO with recovery ope...
- When the RADOS cluster has blocked IO, the kernel client is going to have blocked IO. That's just life. :(
- 03:37 PM Bug #17656 (Need More Info): cephfs: high concurrent causing slow request
- Just from the description it sounds like we're backing up while the MDS purges deleted files from RADOS. You can adju...
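The purge throttles that comment refers to are configurable; a jewel-era ceph.conf fragment might look like the following. The values shown are the usual defaults of that era, given as examples only, not recommendations.

```ini
[mds]
    ; throttles on how aggressively the MDS purges deleted files
    mds max purge files = 64
    mds max purge ops = 8192
    mds max purge ops per pg = 0.5
```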
- 03:08 PM Bug #7750: Attempting to mount a kNFS export of a sub-directory of a CephFS filesystem fails with...
- NFS-Utils-1.2.8
NFS server on Ubuntu 16.04
- 03:06 PM Bug #7750: Attempting to mount a kNFS export of a sub-directory of a CephFS filesystem fails with...
- I still can't reproduce it. This does not seem like a kernel issue. Which version of nfs-utils do you use?
- 12:13 PM Bug #7750: Attempting to mount a kNFS export of a sub-directory of a CephFS filesystem fails with...
- I've reproduced this bug in CephFS jewel. Can't mount via NFS NON root CephFS dir.
Get error on client 'Stale NFS fi... - 09:24 AM Bug #17747: ceph-mds: remove "--journal-check" help text
- You should use the cephfs-journal-tool for dealing with this stuff now. The journal-check oneshot-replay mode got rem...
- 08:39 AM Bug #17620: Data Integrity Issue with kernel client vs fuse client
- I'm not familiar with docker. How did the jobs fail (what's the symptom)?
10/31/2016
- 06:29 PM Bug #17620: Data Integrity Issue with kernel client vs fuse client
- Thanks for that. I'm working on getting to the point where I can test that.
In the meantime further testing has i... - 05:05 PM Bug #4212: mds: open_snap_parents isn't called all the times it needs to be
- Having all past_parents open is hard because of dir renames. Say you do
/a/b/c
and snapshot /a/b, then rename...
- 02:06 PM Bug #17747: ceph-mds: remove "--journal-check" help text
- Running ceph-mds -d -i ceph --journal-check 0 gives me the following output:
--conf/-c FILE read configuration fr...
- 09:51 AM Bug #17747 (Resolved): ceph-mds: remove "--journal-check" help text
- Hi,
Running ceph-mds -d -i ceph --journal-check 0 gives me the following output:
--conf/-c FILE read configurat...
10/28/2016
- 05:31 PM Bug #17563: extremely slow ceph_fsync calls
- Ok, thanks. That makes sense. I've got a patchset that works as a PoC, but it's pretty ugly and could use some cleanu...
- 11:23 AM Bug #17548 (Resolved): should userland ceph_llseek do permission checking?
- Fixed in commit db2e7e0811679b4c284e105536ebf3327cc02ffc.
- 10:35 AM Bug #17731 (Can't reproduce): MDS stuck in stopping with other rank's strays
Kraken v11.0.2
Seen on a max_mds=2 MDS cluster with a fuse client doing an rsync -av --delete on a dir that incl...
10/27/2016
- 07:50 AM Backport #17720 (Resolved): jewel: MDS: false "failing to respond to cache pressure" warning
- https://github.com/ceph/ceph/pull/11856
10/26/2016
- 01:37 PM Bug #17562 (Resolved): backtrace check fails when scrubbing directory created by fsstress
- 12:44 PM Bug #17716 (Resolved): MDS: false "failing to respond to cache pressure" warning
- Creating this ticket for a PR that went in without a ticket on it so that we can backport.
https://github.com/ceph... - 07:56 AM Backport #17705: jewel: ceph_volume_client: recovery of partial auth update is broken
- h3. previous description
I ran into the following traceback when the volume_client tries
to recover from partia...
- 07:18 AM Backport #17705: jewel: ceph_volume_client: recovery of partial auth update is broken
- https://github.com/ceph/ceph/pull/11656
https://github.com/ceph/ceph-qa-suite/pull/1221
- 05:21 AM Backport #17705 (In Progress): jewel: ceph_volume_client: recovery of partial auth update is broken
- 05:20 AM Backport #17705 (Resolved): jewel: ceph_volume_client: recovery of partial auth update is broken
- https://github.com/ceph/ceph/pull/11656
- 07:07 AM Backport #17706 (Resolved): jewel: multimds: mds entering up:replay and processing down mds aborts
- https://github.com/ceph/ceph/pull/11857
- 05:16 AM Bug #17216 (Pending Backport): ceph_volume_client: recovery of partial auth update is broken
10/25/2016
- 07:53 PM Bug #17670 (Pending Backport): multimds: mds entering up:replay and processing down mds aborts
- 01:46 PM Backport #17697 (Resolved): jewel: MDS long-time blocked ops. ceph-fuse locks up with getattr of ...
- https://github.com/ceph/ceph/pull/11858
- 01:42 PM Bug #17620: Data Integrity Issue with kernel client vs fuse client
- I have fixed a bug that may cause this issue. Could you try https://github.com/ceph/ceph-client/commits/testing ?
- 11:16 AM Bug #17275 (Pending Backport): MDS long-time blocked ops. ceph-fuse locks up with getattr of file
- 11:12 AM Bug #17691 (Resolved): bad backtrace on inode
- Merged https://github.com/ceph/ceph-qa-suite/pull/1218
- 10:46 AM Bug #17691 (In Progress): bad backtrace on inode
- Sorry, that's happening because I merged my backtrace repair PR before the ceph-qa-suite piece, so the log message is...
- 03:08 AM Bug #17691 (Resolved): bad backtrace on inode
- Seen this in testing:
http://pulpito.ceph.com/pdonnell-2016-10-25_02:25:11-fs:recovery-master---basic-mira/493889/...
10/24/2016
- 05:49 PM Bug #17563: extremely slow ceph_fsync calls
- The client is waiting for an ack to a cap *flush*, not to get caps granted. Usually flushes happen asynchronously (ju...
- 05:36 PM Bug #17563: extremely slow ceph_fsync calls
- OTOH...do we even need a flag at all here? Under what circumstances is it beneficial to delay granting and recalling ...
- 01:45 PM Feature #17639 (Resolved): Repair file backtraces during forward scrub
10/22/2016
- 10:52 PM Bug #17670 (Fix Under Review): multimds: mds entering up:replay and processing down mds aborts
- https://github.com/ceph/ceph/pull/11611
- 10:46 PM Bug #17670 (Resolved): multimds: mds entering up:replay and processing down mds aborts
- ...
10/21/2016
- 09:48 PM Feature #17249 (Resolved): cephfs tool for finding files that use named PGs
- 01:58 PM Bug #17620: Data Integrity Issue with kernel client vs fuse client
- I suspect the zeros are from stale page cache data. If you encounter the issue again, please drop the kernel page cac...
- 01:39 PM Bug #17275 (Fix Under Review): MDS long-time blocked ops. ceph-fuse locks up with getattr of file
- https://github.com/ceph/ceph/pull/11593
- 01:21 PM Bug #17275: MDS long-time blocked ops. ceph-fuse locks up with getattr of file
- created http://tracker.ceph.com/issues/17660
- 01:03 PM Bug #17275: MDS long-time blocked ops. ceph-fuse locks up with getattr of file
- Got a getattr long lock, but this time the client has long-running objecter requests. I will be filing a ticket for that...
- 05:33 AM Bug #17656: cephfs: high concurrent causing slow request
- William, could you describe the issue from Ceph's perspective in detail?
- 01:46 AM Bug #17656 (Need More Info): cephfs: high concurrent causing slow request
- Background:
we use cephfs as a CDN backend. When the CDN vendor prefetches video files in cephfs, it causes high concu...