Activity
From 11/02/2016 to 12/01/2016
12/01/2016
- 11:58 PM Bug #17656: cephfs: high concurrent causing slow request
- william sheng wrote:
> Greg Farnum wrote:
> > Just from the description it sounds like we're backing up while the M...
- 08:12 PM Feature #17980: MDS should reject connections from OSD-blacklisted clients
- Related to http://tracker.ceph.com/issues/9754?
- 03:15 PM Bug #18066 (Resolved): objecter dropped op submitted before pool existed
- https://github.com/ceph/ceph/pull/12234
- 03:05 PM Backport #18103 (Resolved): jewel: truncate can cause unflushed snapshot data lose
- https://github.com/ceph/ceph/pull/12324
- 03:04 PM Bug #17982 (Resolved): fuse client failing to trim disconnected inode on unmount
- Fix is merged -- commit 2d02d2c95af9aed31a8579a2245b759f57b3a193.
- 03:03 PM Backport #18100 (Resolved): jewel: ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- https://github.com/ceph/ceph/pull/13139
- 03:03 PM Bug #18013 (Resolved): ceph_test_libcephfs access failures when run without ceph-fuse mount in te...
- Fixed in commit 8a887edaffeed66f02f605ee78b9999ffd629a60.
- 02:05 PM Bug #8405 (Duplicate): multimds: FAILED assert(dir->is_frozen_tree_root())
- dup of http://tracker.ceph.com/issues/17606
11/30/2016
- 07:06 PM Bug #18066: objecter dropped op submitted before pool existed
- 01:24 AM Bug #18086: cephfs: fix missing ll_get for ll_walk
- fixed by : https://github.com/ceph/ceph/pull/12061
- 01:24 AM Bug #18086 (Resolved): cephfs: fix missing ll_get for ll_walk
- When exporting a cephfs with nfs-ganesha, segfault is encountered upon releasing file handles using 'systemctl stop n...
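For readers unfamiliar with this bug class: the report suggests ll_walk handed back an inode without taking a reference (the missing ll_get), so the eventual ll_put when nfs-ganesha released its file handles dropped a reference that was never taken. A minimal, purely illustrative sketch of the balanced get/put pattern the fix restores (the class and method names here are hypothetical, not the libcephfs API):

```python
# Illustrative sketch of get/put reference pairing; not the libcephfs API.

class Inode:
    def __init__(self, ino):
        self.ino = ino
        self.nref = 0

class Client:
    def __init__(self):
        self.inodes = {}

    def _get(self, ino):
        # analogous to ll_get: take a reference the caller must later drop
        inode = self.inodes.setdefault(ino, Inode(ino))
        inode.nref += 1
        return inode

    def walk(self, ino):
        # analogous to ll_walk: hand back a *referenced* inode; without the
        # _get here, a later put() would drop a reference that was never taken
        return self._get(ino)

    def put(self, inode):
        # analogous to ll_put: drop one reference; free at zero
        inode.nref -= 1
        if inode.nref == 0:
            del self.inodes[inode.ino]

c = Client()
i = c.walk(1)
assert i.nref == 1        # walk took a reference
c.put(i)
assert 1 not in c.inodes  # balanced get/put, no underflow
```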
11/29/2016
- 08:06 PM Bug #17982: fuse client failing to trim disconnected inode on unmount
- Thanks. Potential fix is up here:
https://github.com/ceph/ceph/pull/12228
Waiting for build now so I can get a ...
- 02:15 PM Bug #17982: fuse client failing to trim disconnected inode on unmount
- Jeff Layton wrote:
> Thanks. To make sure i understand...
>
> Since the inode is released after the mutex is drop...
- 12:44 PM Bug #17982: fuse client failing to trim disconnected inode on unmount
- Thanks. To make sure i understand...
Since the inode is released after the mutex is dropped, then the refcount dec...
- 09:13 AM Bug #17982: fuse client failing to trim disconnected inode on unmount
- This looks like inode reference leak. I found something suspicious ...
- 08:05 PM Bug #18013: ceph_test_libcephfs access failures when run without ceph-fuse mount in teuthology (n...
- Potential fix up here. Still needs testing, but it seems to do the right thing on my box:
https://github.com/ceph/...
- 06:05 PM Bug #18013: ceph_test_libcephfs access failures when run without ceph-fuse mount in teuthology (n...
- Sounds reasonable to me Jeff.
- 04:25 PM Bug #18013: ceph_test_libcephfs access failures when run without ceph-fuse mount in teuthology (n...
- Actually, now that I look further, it turns out that access.cc looks like it has just the function we need:...
- 03:15 PM Bug #18013: ceph_test_libcephfs access failures when run without ceph-fuse mount in teuthology (n...
- I think the right solution is to add a new function for setting the default UserPerm for the cmount, and have those j...
- 05:09 PM Bug #17837: ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- RPMs here: http://gitbuilder.ceph.com/ceph-rpm-centos7-x86_64-basic/ref/wip-17837-jewel/x86_64/
- 12:01 PM Bug #17837: ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- Alexander: I've pushed a backport of this to jewel to a branch called wip-17837-jewel. It will build in an hour or t...
- 11:50 AM Bug #17837 (Pending Backport): ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- 07:12 AM Bug #17837: ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- I could test the changes, do I have to compile this project?
- 02:29 PM Bug #16397: nfsd selinux denials causing knfs tests to fail
- From a recent run I see this is only happening on centos NFS servers, so I'm going to pin these tests to ubuntu (but ...
- 01:45 PM Bug #18066 (Resolved): objecter dropped op submitted before pool existed
http://pulpito.ceph.com/teuthology-2016-11-28_17:15:02-fs-master---basic-smithi/583404/
This is manifesting as a...
- 12:54 PM Bug #17193 (Pending Backport): truncate can cause unflushed snapshot data lose
- 12:40 PM Feature #12552 (Rejected): qa: test cephfs over cache tier in fs suite
- Cache tiers are deprecated.
- 08:23 AM Bug #18047 (Fix Under Review): assertion in MDSMap::get_up_features()
- https://github.com/ceph/ceph/pull/12208
11/28/2016
- 02:29 PM Feature #11950 (In Progress): Strays enqueued for purge cause MDCache to exceed size limit
- 01:25 PM Feature #18050 (New): Enable mounting a .snap directory from libcephfs/ceph-fuse
Manila is adding a "mountable snapshots" feature:
https://github.com/openstack/manila-specs/blob/master/specs/ocat...
- 09:43 AM Bug #18047 (Resolved): assertion in MDSMap::get_up_features()
- ...
- 07:03 AM Feature #12132 (In Progress): cephfs-data-scan: Cleanup phase
- The following xattrs are added to the zeroth object:
#define XATTR_CEILING "scan_ceiling"
#define XATTR_MAX_MTIME "...
- 01:24 AM Bug #17911 (Resolved): ensure that we vet the ceph_statx flags masks in libcephfs API
- Merged in https://github.com/ceph/ceph/pull/12106
11/24/2016
- 08:35 PM Documentation #18040: Documentation says not to run multiple MDS, but we can do that now
- Merged to master -- I leave it up to you whether you want to backport (I don't usually do that for documentation)
- 07:31 PM Documentation #18040 (Fix Under Review): Documentation says not to run multiple MDS, but we can d...
- https://github.com/ceph/ceph/pull/12184
- 07:28 PM Documentation #18040 (Resolved): Documentation says not to run multiple MDS, but we can do that now
- http://docs.ceph.com/docs/master/rados/deployment/ceph-deploy-mds/ has a red box that says "Do not run multiple metad...
- 01:25 PM Bug #18016 (Need More Info): cephtool-test-mds.sh waiting for an active MDS daemon (intermittent)
- I'll keep posting on this issue whenever I find such an error. If nothing happens in a month or two we can probably f...
- 10:52 AM Bug #18016: cephtool-test-mds.sh waiting for an active MDS daemon (intermittent)
- Can't tell what was going on here without logs from the services (which afaik we don't gather in these situations?) ...
- 08:10 AM Bug #18016: cephtool-test-mds.sh waiting for an active MDS daemon (intermittent)
- https://jenkins.ceph.com/job/ceph-pull-requests/14881/console
- 06:07 AM Bug #18016 (Duplicate): cephtool-test-mds.sh waiting for an active MDS daemon (intermittent)
- ...
- 10:47 AM Backport #18026 (Resolved): jewel: ceph_volume_client.py : Error: Can't handle arrays of non-strings
- https://github.com/ceph/ceph/pull/12325
- 08:53 AM Bug #17656: cephfs: high concurrent causing slow request
- Greg Farnum wrote:
> Just from the description it sounds like we're backing up while the MDS purges deleted files fr...
11/23/2016
- 09:49 PM Backport #17285 (Resolved): ceph-mon leaks in MDSMonitor when ceph-mds process is running but MDS...
- 08:30 PM Bug #11258 (Resolved): cephfs-java ftruncate unit test failure
- 08:30 PM Backport #13927 (Resolved): hammer: cephfs-java ftruncate unit test failure
- 06:54 PM Bug #17982: fuse client failing to trim disconnected inode on unmount
- http://pulpito.ceph.com/teuthology-2016-11-21_17:15:01-fs-master---basic-smithi/567231/
- 06:22 PM Bug #17982: fuse client failing to trim disconnected inode on unmount
Another instance
jspray-2016-11-23_14:06:32-fs-wip-jcsp-testing-20161122-distro-basic-smithi/572367
- 06:25 PM Bug #17800 (Pending Backport): ceph_volume_client.py : Error: Can't handle arrays of non-strings
- 03:05 PM Bug #18013: ceph_test_libcephfs access failures when run without ceph-fuse mount in teuthology (n...
- This chmod may be the bit we're missing:
https://github.com/ceph/ceph-qa-suite/blob/master/tasks/cephfs/fuse_mount.p...
- 02:56 PM Bug #18013: ceph_test_libcephfs access failures when run without ceph-fuse mount in teuthology (n...
- Looks like these tests don't specify a UserPerm on the ceph function invocations, nor set up a "global" one. I imagin...
- 02:49 PM Bug #18013: ceph_test_libcephfs access failures when run without ceph-fuse mount in teuthology (n...
- BTW, the only material difference between this test run and a normal FS suite run is that ceph-fuse is turned off. Co...
- 02:46 PM Bug #18013 (Resolved): ceph_test_libcephfs access failures when run without ceph-fuse mount in te...
- When ceph-fuse is turned off for the libcephfs_interface_tests task, we see access control failures:...
- 02:41 PM Bug #17847: "Fuse mount failed to populate /sys/ after 31 seconds" in jewel 10.2.4
- http://pulpito.ceph.com/sage-2016-11-23_14:40:24-upgrade:hammer-x-master---basic-smithi/
- 02:37 PM Bug #17847: "Fuse mount failed to populate /sys/ after 31 seconds" in jewel 10.2.4
- https://github.com/ceph/ceph-qa-suite/pull/1280
added debugging. the mon connection looks fine, but it isn't doin...
- 02:18 PM Bug #17847: "Fuse mount failed to populate /sys/ after 31 seconds" in jewel 10.2.4
- It's a hammer ceph-fuse trying to mount... my guess is a compat issue
- 08:05 AM Backport #17956 (In Progress): jewel: Clients without pool-changing caps shouldn't be allowed to ...
- 08:04 AM Backport #18008 (In Progress): jewel: Cannot create deep directories when caps contain "path=/som...
- 06:43 AM Backport #18008 (Resolved): jewel: Cannot create deep directories when caps contain "path=/somepath"
- https://github.com/ceph/ceph/pull/12154
- 07:57 AM Backport #18010 (In Progress): jewel: Cleanly reject "session evict" command when in replay
- 06:43 AM Backport #18010 (Resolved): jewel: Cleanly reject "session evict" command when in replay
- https://github.com/ceph/ceph/pull/12153
11/22/2016
- 11:28 PM Bug #17858 (Pending Backport): Cannot create deep directories when caps contain "path=/somepath"
- 06:24 PM Bug #17837: ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- 04:40 PM Bug #17990: newly created directory may get fragmented before it gets journaled
- I guess this was getting highlighted because of the timing change in https://github.com/ceph/ceph/pull/12022 (which w...
- 09:49 AM Bug #17990 (Fix Under Review): newly created directory may get fragmented before it gets journaled
- https://github.com/ceph/ceph/pull/12125
- 09:45 AM Bug #17990 (Resolved): newly created directory may get fragmented before it gets journaled
- http://pulpito.ceph.com/jspray-2016-11-18_13:57:54-fs-wip-jcsp-testing-20161118-distro-basic-smithi/559675
- 04:38 PM Backport #17974: jewel: ceph/Client segfaults in handle_mds_map when switching mds
- h3. Original description
Our manila-share daemon is segfaulting when our active mds goes away and we switch to the...
- 04:38 PM Backport #17974 (In Progress): jewel: ceph/Client segfaults in handle_mds_map when switching mds
- 03:51 PM Backport #17974 (Fix Under Review): jewel: ceph/Client segfaults in handle_mds_map when switching...
- In jewel there is no call to erase a command from the table after it receives a reply, so if a command has ever been ...
- 03:39 PM Bug #17982: fuse client failing to trim disconnected inode on unmount
- One oddity in the check_caps code is that we always try to retain CEPH_CAP_PIN, even when unmounting. I'm going to te...
- 03:07 PM Bug #17982: fuse client failing to trim disconnected inode on unmount
- I'll grab this for now since I was in this codepath recently. Looking at it though, I suspect that this is unrelated ...
- 02:11 PM Bug #16924 (Fix Under Review): Crash replaying EExport
- https://github.com/ceph/ceph/pull/12133
- 11:49 AM Bug #17921: CephFS snapshot removal fails with "Stale file handle" error
- Ah, I didn't notice that.
Rama: modify your .yaml configuration to only list one MDS daemon, or make sure that all...
- 11:37 AM Bug #17921 (Rejected): CephFS snapshot removal fails with "Stale file handle" error
- you were testing snaphost on multimds setup. It's known broken. For now, please test snapshot only on single active m...
11/21/2016
- 09:57 PM Bug #17982 (Resolved): fuse client failing to trim disconnected inode on unmount
This fuse client is failing to terminate after being unmounted. It has a disconnected inode 10000008bdc from an un...
- 08:58 PM Bug #17894 (Resolved): Filesystem removals intermittently failing in qa-suite
- 06:08 PM Feature #17980 (Resolved): MDS should reject connections from OSD-blacklisted clients
- Currently, MDS daemons don't have a blacklist concept: an evicted client is free to reconnect.
Rather than inventi...
- 06:05 PM Feature #17979 (New): mds: disable early replies when the MDS has slow RADOS requests
This is a primitive form of flow control.
Currently, if the MDS is experiencing a backlog of RADOS operations, t...
- 02:46 PM Backport #17974: jewel: ceph/Client segfaults in handle_mds_map when switching mds
- (the updates to the code in Kraken were ceph-mgr related so if they fixed a bug it was completely accidental!)
- 02:46 PM Backport #17974: jewel: ceph/Client segfaults in handle_mds_map when switching mds
- I wouldn't be surprised if it was fixed in Kraken but I'll look at the Jewel code.
- 11:12 AM Backport #17974 (Resolved): jewel: ceph/Client segfaults in handle_mds_map when switching mds
- https://github.com/ceph/ceph/pull/12137
- 12:54 PM Bug #17801 (Pending Backport): Cleanly reject "session evict" command when in replay
- 12:50 PM Bug #17606: multimds: assertion failure during directory migration
- possible fix: https://github.com/ceph/ceph/pull/12098
- 12:12 PM Bug #17837 (Fix Under Review): ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- https://github.com/ceph/ceph/pull/12097
- 09:36 AM Bug #17858: Cannot create deep directories when caps contain "path=/somepath"
- Thanks Patrick. Indeed wip-17858-jewel resolves this for us. Both mkdir -p and our original untar kernel reproducer i...
11/19/2016
- 05:19 PM Bug #16881 (Can't reproduce): RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
- This hasn't happened again afaik
- 05:18 PM Feature #17853 (Fix Under Review): More deterministic timing for directory fragmentation
- https://github.com/ceph/ceph/pull/12022
- 04:50 AM Bug #17858: Cannot create deep directories when caps contain "path=/somepath"
- Dan, give this a try: https://github.com/ceph/ceph/tree/wip-17858-jewel
11/18/2016
- 09:14 PM Bug #17954 (Fix Under Review): standby-replay daemons can sometimes miss events
- https://github.com/ceph/ceph/pull/12077
- 01:55 PM Bug #17954 (Resolved): standby-replay daemons can sometimes miss events
The symptom is that a standby replay daemon gives log messages like "waiting for subtree_map. (skipping " at times...
- 06:35 PM Feature #17604 (Fix Under Review): MDSMonitor: raise health warning when there are no standbys bu...
- PR: https://github.com/ceph/ceph/pull/12074
- 03:52 PM Bug #17858: Cannot create deep directories when caps contain "path=/somepath"
- Sure, I just need to make some adjustments to the patch first. I'll comment here again when I've pushed the branch.
- 02:41 PM Bug #17858: Cannot create deep directories when caps contain "path=/somepath"
- Err wip-17858-jewel ... you know what I meant ;)
- 02:15 PM Bug #17858: Cannot create deep directories when caps contain "path=/somepath"
- Thanks Patrick. I'm the user in #17893. If it's not too much trouble, could you push a wip-17893-jewel branch so we c...
- 02:27 AM Bug #17858 (Fix Under Review): Cannot create deep directories when caps contain "path=/somepath"
- PR: https://github.com/ceph/ceph/pull/12063
- 02:53 PM Backport #17956 (Resolved): jewel: Clients without pool-changing caps shouldn't be allowed to cha...
- https://github.com/ceph/ceph/pull/12155
- 02:53 PM Bug #17939: non-local cephfs quota changes not visible until some IO is done
- I can't immediately remember how synchronous reads on the quota xattrs are meant to be, but they should probably be d...
- 01:25 PM Bug #17939: non-local cephfs quota changes not visible until some IO is done
- Thanks, indeed that works! I tried *touch /cephfs/foo* and *touch /cephfs2/foo*; in both cases the "wrong" quota was ...
- 01:10 PM Bug #17939 (Need More Info): non-local cephfs quota changes not visible until some IO is done
- I don't think you need a remount (of '/cephfs' in your reproducer) for the new share quota limits to be reflected (fo...
- 02:00 PM Bug #17798 (Pending Backport): Clients without pool-changing caps shouldn't be allowed to change ...
- https://github.com/ceph/ceph/pull/11789
- 02:37 AM Bug #17893: Intermittent permission denied using kernel client with mds path cap
- I agree with Zheng but I can't be 100% sure this is the same issue because the pasted log doesn't have the "SessionMa...
- 02:31 AM Bug #17893 (Duplicate): Intermittent permission denied using kernel client with mds path cap
11/17/2016
- 04:31 PM Bug #17937: file deletion permitted from pool Y from client mount with pool=X capabilities
- John Spray wrote:
> > MDS path restrictions are not used, so perhaps pool/namespace restrictions are not supported w...
- 04:16 PM Bug #17937 (Won't Fix): file deletion permitted from pool Y from client mount with pool=X capabil...
- > MDS path restrictions are not used, so perhaps pool/namespace restrictions are not supported without a correspondin...
- 02:02 PM Bug #17937 (Won't Fix): file deletion permitted from pool Y from client mount with pool=X capabil...
- Not sure whether this is a bug per se, but IMO certainly falls into the strange
behaviour basket.
If I have a fil...
- 03:18 PM Bug #17939: non-local cephfs quota changes not visible until some IO is done
- That commit seems to test client_quota_df. That works, I agree. This issue is about extending a share.
extend_shar...
- 02:27 PM Bug #17939: non-local cephfs quota changes not visible until some IO is done
- Hmmm. I remember testing manila's extend share API with cephfs's native driver, where the ceph_volume_client modifies...
- 02:13 PM Bug #17939 (Resolved): non-local cephfs quota changes not visible until some IO is done
- If we change the ceph.quota.max_bytes attribute on a cephfs mount, that quota is not applied until cephfs is remounte...
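For context, CephFS quotas are managed through virtual extended attributes on directories, which is the mechanism this report exercises. A minimal sketch of the workflow using Python's os xattr calls (the mount point and share path are assumptions, and this only does anything against a live CephFS mount):

```python
import os

# Hypothetical path on a CephFS mount; quotas live as virtual xattrs
# on directories.
share = "/mnt/cephfs/volumes/share1"

# Set a 100 MB quota on the share (the value is a decimal byte count).
os.setxattr(share, b"ceph.quota.max_bytes", b"100000000")

# Read it back. Per this report, a client on a *different* mount may not
# see the new value until it performs some IO under the directory.
print(os.getxattr(share, b"ceph.quota.max_bytes"))

# A value of "0" removes the quota.
os.setxattr(share, b"ceph.quota.max_bytes", b"0")
```

The same attributes can be manipulated from the shell with setfattr/getfattr.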
- 01:31 PM Bug #16886: multimds: kclient hang (?) in tests
- fix one bug https://github.com/ceph/ceph-client/commit/2a3d8aad521306c6537c67c518ea7c4023c74f12
If you see "fail...
- 10:29 AM Bug #17837 (In Progress): ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- Thanks, can now reproduce here. ...
- 06:27 AM Bug #17837: ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- here is dump of mdsmap local
- 08:34 AM Bug #17921: CephFS snapshot removal fails with "Stale file handle" error
- please set debug_client=20 and try again
11/16/2016
- 03:38 PM Bug #17906 (Resolved): mds: dumped ops do not include events and other information
- Good point!
- 03:14 PM Bug #17906: mds: dumped ops do not include events and other information
- John, I don't think the bug exists in jewel. The culprit commit is not merged there.
- 12:03 PM Bug #17906 (Pending Backport): mds: dumped ops do not include events and other information
- 02:44 PM Bug #17858: Cannot create deep directories when caps contain "path=/somepath"
- Originally I could only reproduce with the kernel client. Now I can reproduce it also with ceph-fuse.
- 02:34 PM Bug #17921 (Rejected): CephFS snapshot removal fails with "Stale file handle" error
- Removing CephFS snapshot fails while testing it in a loop with "stale file handle" error but there will not be any sn...
- 12:03 PM Bug #17747 (Resolved): ceph-mds: remove "--journal-check" help text
- 12:03 PM Bug #17797 (Resolved): rmxattr on ceph.[dir|file].layout.pool_namespace doesn't work
- 12:02 PM Bug #17308 (Resolved): MDSMonitor should tolerate paxos delays without failing daemons (Was: Unex...
- 11:48 AM Bug #17837 (Need More Info): ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- Hmm, so when I try loading up the mdsmap.bin from http://tracker.ceph.com/issues/16592#change-81117 it is decoding fi...
11/15/2016
- 07:47 PM Bug #17894: Filesystem removals intermittently failing in qa-suite
- I'll do a run of fs:multifs to see if the bug looks resolved.
- 07:46 PM Bug #17894 (Fix Under Review): Filesystem removals intermittently failing in qa-suite
- 07:45 PM Bug #17894: Filesystem removals intermittently failing in qa-suite
- PR: https://github.com/ceph/ceph-qa-suite/pull/1262
- 02:24 PM Bug #17894: Filesystem removals intermittently failing in qa-suite
- I think your analysis is correct John. I'll write up a fix for that.
- 02:20 PM Bug #17894: Filesystem removals intermittently failing in qa-suite
- I'll look at this.
- 02:14 PM Bug #17894: Filesystem removals intermittently failing in qa-suite
- Hmm, too similar to be a coincidence?
http://qa-proxy.ceph.com/teuthology/jspray-2016-11-15_13:27:33-fs-wip-jcsp-t... - 02:11 PM Bug #17914 (New): ObjectCacher doesn't handle snapshot+truncate properly
When truncating a file, we should not drop dirty data that were created before the snapshot. But it seems ObjectCacher:...
- 01:59 PM Bug #17193 (Fix Under Review): truncate can cause unflushed snapshot data lose
- https://github.com/ceph/ceph/pull/11994
- 12:02 PM Bug #17911 (Resolved): ensure that we vet the ceph_statx flags masks in libcephfs API
- Currently, we allow callers to set flags that are not properly defined. This could be bad if we want to add new flags...
- 08:34 AM Bug #17858: Cannot create deep directories when caps contain "path=/somepath"
- sorry for misstype, I am ON ceph-fuse
- 08:34 AM Bug #17858: Cannot create deep directories when caps contain "path=/somepath"
- I am not ceph-fuse
- 04:20 AM Bug #17906 (Fix Under Review): mds: dumped ops do not include events and other information
- PR: https://github.com/ceph/ceph/pull/11985
- 04:19 AM Bug #17906 (Resolved): mds: dumped ops do not include events and other information
- ...
11/14/2016
- 05:25 PM Bug #17847: "Fuse mount failed to populate /sys/ after 31 seconds" in jewel 10.2.4
- John - it's reliably reproducible :(
Just re-ran again http://pulpito.front.sepia.ceph.com/yuriw-2016-11-14_16:55:4...
- 04:46 PM Bug #17847: "Fuse mount failed to populate /sys/ after 31 seconds" in jewel 10.2.4
- Yuri: probably not related unless there's something I'm missing? Zheng was pointing out that the clock in the log se...
- 04:43 PM Bug #17847: "Fuse mount failed to populate /sys/ after 31 seconds" in jewel 10.2.4
- related to https://github.com/ceph/ceph-qa-suite/pull/1256 ?
- 02:41 PM Bug #17847 (Rejected): "Fuse mount failed to populate /sys/ after 31 seconds" in jewel 10.2.4
- Looks like bad clocks in the test environment
- 05:09 PM Bug #17837 (In Progress): ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- 05:08 PM Bug #17837: ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- Note to self, dumps are on http://tracker.ceph.com/issues/16592
- 03:00 PM Bug #17858: Cannot create deep directories when caps contain "path=/somepath"
- The fix for #16358 is not complete. It only handles the case that inode is newly created, but does not handle the cas...
- 02:41 PM Bug #17858: Cannot create deep directories when caps contain "path=/somepath"
- Same as http://tracker.ceph.com/issues/17893 ?
Henrik: are you on fuse or kernel client? - 02:57 PM Bug #17893: Intermittent permission denied using kernel client with mds path cap
- dup of #17858
- 01:30 PM Bug #17893 (Duplicate): Intermittent permission denied using kernel client with mds path cap
- See this mailing list thread:
http://www.spinics.net/lists/ceph-users/msg32314.html
It is reported that when usin...
- 02:30 PM Bug #17894 (Resolved): Filesystem removals intermittently failing in qa-suite
http://pulpito.ceph.com/teuthology-2016-11-12_17:15:01-fs-master---basic-smithi/543466/
I suspect this is a bug ...
- 02:05 PM Bug #17800: ceph_volume_client.py : Error: Can't handle arrays of non-strings
- Thomas: what was the series of operations running up to this? To hit this code path it seems like the user you were ...
- 11:42 AM Bug #17563 (In Progress): extremely slow ceph_fsync calls
- PR for the userland code has been merged, and the kernel patches are in-progress.
11/13/2016
- 12:34 PM Backport #17885 (In Progress): jewel: "[ FAILED ] LibCephFS.InterProcessLocking" in jewel v10.2.4
- 12:25 PM Backport #17885 (Resolved): jewel: "[ FAILED ] LibCephFS.InterProcessLocking" in jewel v10.2.4
- https://github.com/ceph/ceph/pull/11953
- 12:24 PM Bug #17832 (Pending Backport): "[ FAILED ] LibCephFS.InterProcessLocking" in jewel v10.2.4
- *master PR*: https://github.com/ceph/ceph/pull/11211
This PR just disables the tests. Work on a real fix is being ...
11/12/2016
- 12:23 PM Backport #13927 (In Progress): hammer: cephfs-java ftruncate unit test failure
- 12:22 PM Bug #11258: cephfs-java ftruncate unit test failure
- *master PR*: https://github.com/ceph/ceph/pull/4215
11/11/2016
- 02:21 PM Bug #17800 (Fix Under Review): ceph_volume_client.py : Error: Can't handle arrays of non-strings
- PR for fix: https://github.com/ceph/ceph/pull/11917
- 11:13 AM Bug #17800 (In Progress): ceph_volume_client.py : Error: Can't handle arrays of non-strings
- 11:24 AM Bug #17747 (Fix Under Review): ceph-mds: remove "--journal-check" help text
- Oops, forgot to clean up the manpage as well. Follow-up PR: https://github.com/ceph/ceph/pull/11912
- 11:19 AM Bug #17747 (Resolved): ceph-mds: remove "--journal-check" help text
- 11:05 AM Feature #17853 (In Progress): More deterministic timing for directory fragmentation
- 06:46 AM Bug #17837: ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- yes, I've stopped my update and my cluster working now with two mon server.
Perhaps it is helpful, I've a test clu... - 06:46 AM Bug #16592: Jewel: monitor asserts on "mon/MDSMonitor.cc: 2796: FAILED assert(info.state == MDSMa...
- I've created the ticket #17837.
Here is the output from "ceph mds dump --format=json-pretty"...
11/10/2016
- 03:38 PM Bug #17858 (Resolved): Cannot create deep directories when caps contain "path=/somepath"
- ceph-fuse client with having "path=/something" cannot create multiple dirs with mkdir (e.g. mkdir -p 1/2/3/4/5/6/7/8/...
- 02:27 PM Bug #17620: Data Integrity Issue with kernel client vs fuse client
- The jobs failed by incorrectly writing the object to disk. I can recreate this pretty easily by having two clients do...
- 02:13 PM Bug #17620 (Need More Info): Data Integrity Issue with kernel client vs fuse client
- 01:49 PM Bug #17193 (In Progress): truncate can cause unflushed snapshot data lose
- 01:18 PM Bug #17847: "Fuse mount failed to populate /sys/ after 31 seconds" in jewel 10.2.4
- ...
- 01:16 PM Feature #17856 (Resolved): qa: background cephfs forward scrub teuthology task
- Create a teuthology task that can be run in the background of a workunit, which simply starts a forward scrub on the ...
- 12:03 PM Bug #12895 (Can't reproduce): Failure in TestClusterFull.test_barrier
- 12:02 PM Bug #2277: qa: flock test broken
- For reference: locktest.py still exists in ceph-qa-suite, with a single use from suites/marginal/fs-misc/tasks/lockte...
- 11:51 AM Feature #17855 (Resolved): Don't evict a slow client if it's the only client
There is nothing gained by evicting a client session if there are no other clients who might be held up by it.
T...
- 11:50 AM Feature #17854 (Resolved): mds: only evict an unresponsive client when another client wants its caps
Instead of immediately evicting a client when it has not responded within the timeout, set a flag to mark the clien...
- 10:45 AM Feature #17853 (Resolved): More deterministic timing for directory fragmentation
This ticket is to track the work to replace the tick()-based consumption of the split queue with some timers, and t...
- 10:26 AM Bug #17837 (Need More Info): ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- Alexander: so hopefully you stopped the upgrade at that point and you still have a working cluster of two hammer mons...
- 03:49 AM Bug #17837 (Duplicate): ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- This looks like a duplicate of 16592 but in a new code path: interestingly in a slave monitor.
11/09/2016
- 11:50 PM Bug #4829: client: handling part of MClientForward incorrectly?
- Targeting for Luminous to investigate and either fix or close this.
- 11:50 PM Bug #3267 (Closed): Multiple active MDSes stall when listing freshly created files
- This ticket is old and the use case seems like something we will pick up on from the multimds suite if it's still bro...
- 11:48 PM Feature #3426 (Closed): ceph-fuse: build/run on os x
- 11:47 PM Feature #3426: ceph-fuse: build/run on os x
- I don't think it's likely we will ever "finish" this in the sense of maintaining/testing functionality on OSX, so I'm...
- 11:45 PM Cleanup #660 (Closed): mds: use helpers in mknod, mkdir, openc paths
- 11:31 PM Bug #1511 (Closed): fsstress failure with 3 active mds
- We have fresh tickets for multimds failures as they are reproduced now.
- 11:24 PM Feature #17852 (Resolved): mds: when starting forward scrub, return handle or stamp/version which...
- Enable caller to kick off a scrub and later check completion (may just mean caller has to compare scrub stamps with a...
- 11:21 PM Feature #12274: mds: start forward scrubs from all subtree roots, skip non-auth metadata
- Using this ticket to track the task of implementing cross-MDS forward scrub (i.e. handing off at subtree bounds)
- 11:12 PM Feature #11950: Strays enqueued for purge cause MDCache to exceed size limit
- Targeting for Luminous and assigning to me: we will use a single Journaler() instance per MDS to track a persistent p...
- 11:00 PM Feature #17770: qa: test kernel client against "full" pools/filesystems
- This is largely http://tracker.ceph.com/issues/17204 , but I'm leaving this ticket here as a convenient way of tracki...
- 05:36 PM Bug #17847: "Fuse mount failed to populate /sys/ after 31 seconds" in jewel 10.2.4
- For those not familiar with the test, can you explain what versions are in play here, and what version of the client/...
- 04:39 PM Bug #17847 (New): "Fuse mount failed to populate /sys/ after 31 seconds" in jewel 10.2.4
- Run: http://pulpito.front.sepia.ceph.com/yuriw-2016-11-09_15:03:11-upgrade:hammer-x-jewel-distro-basic-vps/
Job: 534... - 03:10 PM Backport #17841 (In Progress): jewel: mds fails to respawn if executable has changed
- 02:47 PM Backport #17841 (Resolved): jewel: mds fails to respawn if executable has changed
- https://github.com/ceph/ceph/pull/11873
- 02:15 PM Backport #17582 (In Progress): jewel: monitor assertion failure when deactivating mds in (invalid...
- 02:14 PM Backport #17615 (In Progress): jewel: mds: false "failing to respond to cache pressure" warning
- 02:12 PM Backport #17617 (In Progress): jewel: [cephfs] fuse client crash when adding a new osd
- 02:05 PM Bug #17837 (Resolved): ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- I've a cluster of three nodes:...
- 01:59 PM Feature #17836 (New): qa: run multi-MDS tests with cache smaller than workload
- Maybe work from my wip-smallcache branch https://github.com/ceph/ceph-qa-suite/tree/wip-smallcache
The goal is to ... - 01:48 PM Bug #16926 (Rejected): multimds: kclient fails to mount
- Looks like this was initially failing when running against an old kernel, and was okay on a recent kernel, so closing.
- 01:43 PM Feature #17835 (Fix Under Review): mds: enable killpoint tests for MDS-MDS subtree export
- ...
- 01:00 PM Backport #17697 (In Progress): jewel: MDS long-time blocked ops. ceph-fuse locks up with getattr ...
- 12:58 PM Backport #17706 (In Progress): jewel: multimds: mds entering up:replay and processing down mds ab...
- 12:57 PM Backport #17720 (In Progress): jewel: MDS: false "failing to respond to cache pressure" warning
- 11:17 AM Feature #17834 (Resolved): MDS Balancer overrides
- (Discussion from November 2016 meeting https://docs.google.com/a/redhat.com/document/d/11-O8uHWmOCqyc2_xGIukL0myjnq...
- 12:55 AM Bug #17832 (Resolved): "[ FAILED ] LibCephFS.InterProcessLocking" in jewel v10.2.4
- Run: http://pulpito.front.sepia.ceph.com/yuriw-2016-11-08_16:23:29-fs-jewel-distro-basic-smithi/
Jobs: 532365, 53236...
11/08/2016
- 10:50 PM Bug #16640: libcephfs: Java bindings failing to load on CentOS
- And https://github.com/ceph/ceph-qa-suite/pull/1224
- 09:25 PM Bug #17828 (Need More Info): libceph setxattr returns 0 without setting the attr
- Using the jewel libcephfs python bindings I ran the following code snippet:...
- 04:40 PM Bug #17193: truncate can cause unflushed snapshot data lose
- This appears to be intermittent: after ~10 runs without a failure, it's back.
http://qa-proxy.ceph.com/teuthology/... - 02:06 PM Bug #17819: MDS crashed while performing snapshot creation and deletion in a loop
- 2016-11-08T06:04:23.498 INFO:tasks.ceph.mds.a.host.stdout:starting mds.a at :/0
2016-11-08T06:04:23.498 INFO:tasks.ce...
- 01:22 PM Bug #17819: MDS crashed while performing snapshot creation and deletion in a loop
- The log ought to display exactly what assert in check_cache failed -- can you post that too?
- 11:17 AM Bug #17819 (Can't reproduce): MDS crashed while performing snapshot creation and deletion in a loop
- --- begin dump of recent events ---
-1> 2016-11-08 11:04:27.309191 7f4a57c38700 1 -- 10.8.128.73:6812/8588 <== ...
- 01:41 PM Bug #17522 (Resolved): ceph_readdirplus_r does not acquire caps before sending back attributes
- Fixed as of commit f7028e48936cb70f623fe7ba408708a403e60270. This requires moving to libcephfs2, however.
- 01:37 PM Feature #16419 (Resolved): add statx-like interface to libcephfs
- This should now be done for the most part. New ceph_statx interfaces have been merged into kraken. This change is not...
- 01:36 PM Feature #3314 (Resolved): client: client interfaces should take a set of group ids
- Now merged as of commit c078dc0daa9c50621f7252559a97bfb191244ca1. libcephfs has been revved to version 2, which is no...
11/07/2016
- 02:29 PM Bug #17799: cephfs-data-scan: doesn't know how to handle files with pool_namespace layouts
- The backtrace objects are always in the default namespace. I think the data scan tool can't calculate the correct size for files ...
- 02:25 PM Bug #17801 (Fix Under Review): Cleanly reject "session evict" command when in replay
- https://github.com/ceph/ceph/pull/11813
- 11:43 AM Bug #17193 (Resolved): truncate can cause unflushed snapshot data lose
- This is no longer failing when running against the testing kernel.
11/04/2016
- 02:32 PM Bug #17531 (Pending Backport): mds fails to respawn if executable has changed
- 02:13 PM Bug #17801 (Resolved): Cleanly reject "session evict" command when in replay
- Currently we crash like this (from ceph-users):...
- 02:07 PM Fix #15134 (Fix Under Review): multifs: test case exercising mds_thrash for multiple filesystems
- PR adding support to mds_thrash.py: https://github.com/ceph/ceph-qa-suite/pull/1175
Need to check if we have a tes...
- 02:06 PM Feature #10792 (In Progress): qa: enable thrasher for MDS cluster size (vary max_mds)
- Pre-requisite PR: https://github.com/ceph/ceph-qa-suite/pull/1175
multimds testing with the thrasher will be added...
- 01:41 PM Bug #17800 (Resolved): ceph_volume_client.py : Error: Can't handle arrays of non-strings
- When using Ceph (python-cephfs-10.2.3+git.1475228057.755cf99, SLE12SP2) together with OpenStack Manila and trying to...
- 01:18 PM Bug #17799 (New): cephfs-data-scan: doesn't know how to handle files with pool_namespace layouts
- Not actually sure how we currently behave.
Do we see the data objects and inject files with incorrect layouts? ...
- 01:06 PM Bug #17798 (Resolved): Clients without pool-changing caps shouldn't be allowed to change pool_nam...
- The purpose of the 'p' flag in MDS client auth caps is to enable creating clients that cannot set the pool part of ...
- 01:01 PM Bug #17797 (Fix Under Review): rmxattr on ceph.[dir|file].layout.pool_namespace doesn't work
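As an aside on the 'p' flag Bug #17798 concerns, a hedged sketch of the cap scheme (client name and pool are hypothetical; requires a live cluster, so not runnable here): a client whose MDS cap lacks 'p' is exactly the kind that should also be barred from changing pool_namespace.

```sh
# Hypothetical restricted client: no 'p' in the MDS cap, so it should be
# unable to change the layout pool -- and, per #17798, pool_namespace too.
ceph auth get-or-create client.noplayout \
    mon 'allow r' \
    osd 'allow rw pool=cephfs_data' \
    mds 'allow rw'

# A client that IS allowed to change layouts would carry the 'p' flag:
#     mds 'allow rwp'
```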
- https://github.com/ceph/ceph/pull/11783
- 11:05 AM Bug #17797 (Resolved): rmxattr on ceph.[dir|file].layout.pool_namespace doesn't work
- Currently it's obvious how to set the namespace but much less so how to clear it (i.e. revert to default namespace)...
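Concretely, the asymmetry this ticket addresses looks like the following (commands run against a CephFS mount; `myns` and `somefile` are placeholders):

```sh
# Setting a namespace is the obvious part:
setfattr -n ceph.file.layout.pool_namespace -v myns somefile

# Reverting to the default namespace is what this fix makes work:
# a plain xattr remove.
setfattr -x ceph.file.layout.pool_namespace somefile
```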
11/02/2016
- 08:22 AM Bug #17747 (Fix Under Review): ceph-mds: remove "--journal-check" help text
- *master PR*: https://github.com/ceph/ceph/pull/11739