Activity
From 03/21/2016 to 04/19/2016
04/19/2016
- 02:43 PM Bug #15502: files read or written with cephfs (fuse or kernel) on client drop all their page cach...
- I was able to make this happen with kernel mode as well. Does that tunable noted in #6 need to be implemented for fu...
- 07:24 AM Bug #15502 (Need More Info): files read or written with cephfs (fuse or kernel) on client drop al...
- 07:30 AM Bug #15045: CephFSVolumeClient.evict should be limited by path, not just auth ID
- tested by https://github.com/ceph/ceph-qa-suite/pull/968
04/18/2016
- 12:52 PM Feature #13999: client: richacl support
- code is done; waiting for richacl to get merged upstream
- 12:49 PM Feature #13998 (Resolved): posix acl support
- https://github.com/ceph/ceph/pull/5658
- 12:45 PM Feature #11950 (Fix Under Review): Strays enqueued for purge cause MDCache to exceed size limit
- https://github.com/ceph/ceph/pull/8582
- 11:30 AM Bug #15379: ceph mds continiously crashes and going into laggy state (stray purging problems)
- wip-zyan-testing branch includes that fix. http://gitbuilder.ceph.com/ceph-rpm-centos7-x86_64-basic/ref/wip-zyan-test...
- 08:22 AM Bug #15379: ceph mds continiously crashes and going into laggy state (stray purging problems)
- Are there RPMs built for this PR? I looked at http://gitbuilder.ceph.com/ceph-rpm-centos7-x86_64-basic/ but I did n...
- 07:37 AM Bug #15502: files read or written with cephfs (fuse or kernel) on client drop all their page cach...
- I can't reproduce this on a 3.10.0-327.el7 kernel mount. To make ceph-fuse keep the kernel page cache, you need to set ...
04/15/2016
- 09:32 PM Bug #15169 (Rejected): have trouble in mount.ceph in precise
- This is an old ticket so you've probably already worked this out, but this was failing because your kernel was too old.
- 09:31 PM Bug #15168 (Duplicate): have trouble in mount.ceph
- http://tracker.ceph.com/issues/15169#change-69275
- 02:30 PM Bug #15502: files read or written with cephfs (fuse or kernel) on client drop all their page cach...
- Kernel is 3.10.0-327.el7.x86_64 ie the GA kernel for RHEL7.2
- 01:59 PM Bug #15502: files read or written with cephfs (fuse or kernel) on client drop all their page cach...
- This was on both ceph-fuse and a recent-ish rhel (7.2? 7.3-prerelease?) kernel. Unless there's some extra thing in th...
- 01:33 PM Bug #15502: files read or written with cephfs (fuse or kernel) on client drop all their page cach...
- This should be fixed in upstream. Barry, which version of the RHEL kernel do you use?
- 12:29 PM Bug #15502: files read or written with cephfs (fuse or kernel) on client drop all their page cach...
- Similar to http://tracker.ceph.com/issues/13640#change-66216
- 12:43 PM Feature #15507: MDS: support "watching" an inode/dentry
- Not a million miles away from http://tracker.ceph.com/projects/ceph/wiki/Live_Performance_Probes
- 07:12 AM Backport #15512 (Resolved): hammer: Double decreased the count to trim caps which will cause fail...
- https://github.com/ceph/ceph/pull/8804
- 03:19 AM Bug #15508: client: simultaneous readdirs are very racy
- Hmm, I think the end result would be pretty much the same, although just having an array might be simpler. A pointer ...
- 03:12 AM Bug #15508: client: simultaneous readdirs are very racy
- Another option is to assign the dentry a cache index and use an array to track the dentry list. If the shared_gen hasn't change...
04/14/2016
- 11:06 PM Bug #13271: Missing dentry in cache when doing readdirs under cache pressure (?????s in ls-l)
- This may get handled alongside http://tracker.ceph.com/issues/15508.
- 11:06 PM Bug #15508: client: simultaneous readdirs are very racy
- Some obvious solutions are disqualified, both because we can't really track what directory listings are in progress ...
- 11:01 PM Bug #15508 (Resolved): client: simultaneous readdirs are very racy
- Imagine we have a ceph-fuse user doing readdirs a and b on a very large directory (which requires multiple MDS round-...
- 10:49 PM Bug #11792: mds: recursive statistics are either inaccurate or too "chunky"
- This is going to pop up for users. Let's be proactive about checking and fixing it if possible.
- 10:45 PM Bug #13906 (Closed): pjd failure on hammer
- I think maybe this got backported after all? If not, it hasn't come up again, so closing it.
- 10:44 PM Bug #13107 (Resolved): ceph-fuse: handle multiple mounts on the same host (don't set nonce with o...
- This got fixed when #13032 did!
- 10:41 PM Bug #14319 (Pending Backport): Double decreased the count to trim caps which will cause failing t...
- Whoops, this got merged way back in January!
Looks like no backport got scheduled or anything; we'll see i...
- 10:28 PM Bug #14716 (Won't Fix): "Thread.cc: 143: FAILED assert(status == 0)" in fs-hammer---basic-smithi
- This was a result of the OSD full handling changes, which got partly backported. I think the resolution we ended up a...
- 10:25 PM Bug #14759 (Closed): ovh: failed snaptest-git-ceph.sh test
- It hasn't popped up again.
- 10:24 PM Bug #14685: dbench hang on native cifs mount
- Apparently still happening? http://qa-proxy.ceph.com/teuthology/teuthology-2016-04-02_23:14:02-samba-master---basic-s...
- 10:19 PM Bug #15235 (Closed): MDS : erroneous error message about reading config file
- There's a conversation on that PR.
- 10:03 PM Feature #15507 (New): MDS: support "watching" an inode/dentry
- It would be great if we could monitor all client access to a specific file. Obviously we can't check for each access ...
- 09:58 PM Bug #15270 (Resolved): "mount error 5 = Input/output error" in smoke-master-distro-basic-openstack
- 09:58 PM Bug #15270: "mount error 5 = Input/output error" in smoke-master-distro-basic-openstack
- changed @smoke@ suite to use kernel @testing@
- 09:48 PM Bug #15270: "mount error 5 = Input/output error" in smoke-master-distro-basic-openstack
- I think anything running kernel tests needs to run with our testing kernel, not the distro kernel that happens to be ...
- 09:51 PM Bug #15045 (Fix Under Review): CephFSVolumeClient.evict should be limited by path, not just auth ID
- https://github.com/ceph/ceph/pull/8602
- 09:47 PM Feature #15506 (Resolved): qa: run at least one upgrade test in the FS suite
- We've broken upgrading at least twice during the Jewel freeze/RC period. They would all have been detected if the PRs...
- 09:36 PM Bug #15399 (In Progress): MDS incarnation get lost after remove filesystem
- For Jewel: https://github.com/ceph/ceph/pull/8484
But a more comprehensive one (that works with pools shared betwe...
- 09:28 PM Bug #11314: qa: MDS crashed and the runs hung without ever timing out
- We clearly aren't treating this as very important, and I think we've had more trouble with OSDs doing this than MDSes...
- 09:27 PM Feature #14642: Validate layouts everywhere we load them
- Okay, let's put this in as a feature.
- 09:26 PM Bug #15502: files read or written with cephfs (fuse or kernel) on client drop all their page cach...
- Zheng, can you look at this? Hopefully we just have a bad cap transition on the server or something.
- 05:36 PM Bug #15502 (Resolved): files read or written with cephfs (fuse or kernel) on client drop all thei...
- Testing cephfs file system I/O with early jewel bits (ceph-10.0.4-1.el7cp.x86_64) on:
RHEL72 client mounting a cep...
- 09:23 PM Bug #9994 (Resolved): ceph-qa-suite: nfs mount timeouts
- Well, this hasn't been updated in a while, and the tests I really remember failing were all on OVH, which was resolve...
- 09:22 PM Bug #14807: MDS crashes repeatedly after upgrade to Infernalis from Hammer
- Haven't seen this elsewhere, and no logs.
- 09:20 PM Bug #15050: deleting striped file in cephfs doesn't free up file's space
- Just waiting on backports, FS team doesn't need to worry about those.
- 09:19 PM Bug #15402 (Resolved): Failure in TestMultiFilesystems.test_standby_for_fscid
- 09:18 PM Bug #14608: snaptests.yaml failure: [WRN] open_snap_parents has:" in cluster log
- Still waiting to reproduce with logs. :(
- 09:18 PM Bug #15303 (Resolved): Client holds incorrect complete flag on dir after losing caps
- 09:28 AM Bug #15379: ceph mds continiously crashes and going into laggy state (stray purging problems)
- I sent a PR that makes the MDS not pin strays in memory.
Could you give it a try:
https://github.com/ukernel/ceph/tree/inf...
04/13/2016
- 03:37 PM Bug #15485 (Duplicate): drop /usr/bin/cephfs
- Greg tells me that /usr/bin/cephfs is deprecated. We should rip it out post-Jewel.
04/12/2016
- 03:43 AM Bug #15467: After "mount -l", ceph-fuse does not work
- - mount -l
+ unmount -l
- 03:39 AM Bug #15467 (Won't Fix): After "mount -l", ceph-fuse does not work
- Since MDS stopped working because of running out ram, I did "mount -l" to unmount cephfs but did not restart the clie...
- 01:54 AM Bug #15465: MDSAuthCap parse fails on paths with hyphens
- https://github.com/ceph/ceph/pull/8546
- 01:51 AM Bug #15465 (Resolved): MDSAuthCap parse fails on paths with hyphens
04/11/2016
- 09:34 PM Feature #11950: Strays enqueued for purge cause MDCache to exceed size limit
- Possibly. I'm concerned about exposing slow deletes to users via Manila, but it may be the best we can do in the shor...
- 06:39 PM Feature #11950: Strays enqueued for purge cause MDCache to exceed size limit
- Yep, this should be a fairly high priority to do something about.
The "real" solution (a scalable way of persistin...
- 05:26 PM Feature #11950: Strays enqueued for purge cause MDCache to exceed size limit
- This came up in #15379. I think we're going to start seeing it more often with the Manila use case...
- 02:45 PM Bug #15402 (Fix Under Review): Failure in TestMultiFilesystems.test_standby_for_fscid
- https://github.com/ceph/ceph/pull/8536
- 12:39 PM Bug #15402: Failure in TestMultiFilesystems.test_standby_for_fscid
- Did you assign this because you had an idea about it? Otherwise I'm also happy to work on it
- 11:01 AM Bug #15379: ceph mds continiously crashes and going into laggy state (stray purging problems)
- When doing a number of deletes greater than the MDS cache size, you need to make sure that the MDS is purging faster ...
- 08:37 AM Bug #15379: ceph mds continiously crashes and going into laggy state (stray purging problems)
- I did this on Friday and waited to let the MDS delete until the evening. I restarted it Friday evening, and Saturday ...
- 08:27 AM Bug #15379: ceph mds continiously crashes and going into laggy state (stray purging problems)
- Can you start the MDS without resuming the rm? If the MDS can start, please wait a few hours to let it delete enough stra...
- 07:18 AM Bug #15379: ceph mds continiously crashes and going into laggy state (stray purging problems)
- Still not able to resume the rm normally. Even with the added RAM (now 64+32 GB), this is not enough for the MDS.
... - 08:42 AM Bug #15449 (Can't reproduce): [ceph-mds] mds service can not start after shutdown in 10.1.0
- Hi cephers,
I was testing CephFS's HA, so I shut down the active MDS server.
Then one of the standby MDSes turned to b...
- 07:49 AM Support #15268: CephFS mount blocks VM
- Sorry for the delay. This seems like duplicate of http://tracker.ceph.com/issues/15302
04/08/2016
- 01:07 PM Bug #15379: ceph mds continiously crashes and going into laggy state (stray purging problems)
- Adding a swapfile of 32 GB did make it work... for now.
Thanks!!
Our mds daemon is running, but now using 73G of ...
04/07/2016
- 06:03 PM Bug #15402: Failure in TestMultiFilesystems.test_standby_for_fscid
- /a/sage-2016-04-07_08:34:50-fs-wip-sage-testing---basic-smithi/114003
- 04:02 PM Feature #15417 (Resolved): Make path prefix ("/volumes") in CephFSVolumeClient configurable
- This would be a convenient way to enable e.g. multiple Manila driver instances ("backends") to use different data poo...
- 03:04 PM Feature #15400 (Fix Under Review): CephFSVolumeClient should isolate volumes by RADOS namespace
- https://github.com/ceph/ceph/pull/8474/files
- 10:33 AM Bug #15304 (Resolved): qa: samba tests need to run on testing kernel
- 08:21 AM Bug #15304 (Fix Under Review): qa: samba tests need to run on testing kernel
- https://github.com/ceph/ceph-qa-suite/pull/938
- 08:46 AM Bug #15309 (Resolved): Error: Creation of multiple filesystems is disabled. To enable this exper...
- 07:02 AM Bug #15399: MDS incarnation get lost after remove filesystem
- I think using the MDSMap epoch as the incarnation is a good idea
- 01:17 AM Bug #15387 (Resolved): legacy pool 0 might actually mean pool 0.
04/06/2016
- 04:53 PM Bug #15379: ceph mds continiously crashes and going into laggy state (stray purging problems)
- Ah, so this is basically http://tracker.ceph.com/issues/11950
The MDS doesn't limit how many stray files (waiting ...
- 03:35 PM Bug #15379: ceph mds continiously crashes and going into laggy state (stray purging problems)
- ...
- 09:22 AM Bug #15379: ceph mds continiously crashes and going into laggy state (stray purging problems)
- You can enable core dumps with the following commands:
# ulimit -c unlimited
# ceph-mds -f -i a -c /etc/ceph/ceph.conf
A...
- 07:33 AM Bug #15379: ceph mds continiously crashes and going into laggy state (stray purging problems)
- Thanks, I uploaded the file:
ceph-post-file: 99c6e33a-7e54-472f-847b-68400954bbe4
I don't see any coredumps.
- 02:59 AM Bug #15379: ceph mds continiously crashes and going into laggy state (stray purging problems)
- Are there coredump files? If there are, run gdb ceph-mds core.xxx. It will give us the exact location of the crash.
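The advice in the two comments above can be sketched as a single session. This is an illustrative sketch, not from the ticket: the MDS id ("a"), config path, and core-file name are assumptions, and the daemon/gdb steps are left commented out since they need a running cluster.

```shell
# Allow core dumps in this shell (the default limit of 0 suppresses them).
ulimit -c unlimited
ulimit -c   # confirm the new limit

# Re-run the daemon in the foreground so a crash leaves a core file
# (id "a" and the config path follow the comment above; adjust for your cluster):
#   ceph-mds -f -i a -c /etc/ceph/ceph.conf

# After a crash, open the core in gdb to find the exact crash location:
#   gdb ceph-mds core.<pid>
#   (gdb) bt      # backtrace of the crashed thread
```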
- 02:54 PM Feature #15406 (Resolved): Add versioning to CephFSVolumeClient interface
As we add features and change things in CephFSVolumeClient, we will need to add a mechanism for users of the module...
- 02:51 PM Bug #15399: MDS incarnation get lost after remove filesystem
- We only do objecter->set_client_incarnation(incarnation); in MDSRank::init (after we've been assigned an active role)...
- 02:12 PM Bug #15399: MDS incarnation get lost after remove filesystem
- So we could probably reset our network connections with an incarnation based on the last MDSMap where our role change...
- 02:09 PM Bug #15399: MDS incarnation get lost after remove filesystem
- If:
* MDSes A and B come up during the same epoch
* A becomes active and B becomes standby
* A fails
* B starts r...
- 10:56 AM Bug #15399: MDS incarnation get lost after remove filesystem
- Here's a reproducer for the incarnation issue:
https://github.com/ceph/ceph-qa-suite/tree/wip-15399
I note that w...
- 09:12 AM Bug #15399 (Resolved): MDS incarnation get lost after remove filesystem
- If we remove a filesystem, then create a new filesystem with the old data/metadata pools, the OSD may drop requests from the MDS of...
- 01:15 PM Bug #15402 (Resolved): Failure in TestMultiFilesystems.test_standby_for_fscid
- ...
- 09:24 AM Feature #15400 (Resolved): CephFSVolumeClient should isolate volumes by RADOS namespace
i.e. use the new RADOS namespace layouts for the volume's dir, and then limit the client caps to that namespace.
- 09:23 AM Cleanup #12191 (In Progress): Remove ceph-mds --journal-check aka ONESHOT_REPLAY
- 08:40 AM Bug #15387 (Fix Under Review): legacy pool 0 might actually mean pool 0.
- https://github.com/ceph/ceph/pull/8466
04/05/2016
- 09:15 PM Bug #15379: ceph mds continiously crashes and going into laggy state (stray purging problems)
- We were having some temporary infrastructure issues, can you try again? :)
- 02:41 PM Bug #15379: ceph mds continiously crashes and going into laggy state (stray purging problems)
- Hmm, I tried this, but I can't upload it : (I added verbose output)
sftp> mkdir post/780bd87b-b262-46a3-90a9-98f68...
- 01:58 PM Bug #15379: ceph mds continiously crashes and going into laggy state (stray purging problems)
- It's probably best to include the whole file and upload with ceph-post-file. :)
- 01:26 PM Bug #15379: ceph mds continiously crashes and going into laggy state (stray purging problems)
- with debug 20
- 12:56 PM Bug #15379: ceph mds continiously crashes and going into laggy state (stray purging problems)
- I always get "Request Entity Too Large", even for one MB. Is there another way to include this?
It works with the last ...
- 12:53 PM Bug #15379: ceph mds continiously crashes and going into laggy state (stray purging problems)
- Did not succeed. Tried with last 100000 lines
- 12:51 PM Bug #15379: ceph mds continiously crashes and going into laggy state (stray purging problems)
- I ran it with debug 10; the total file after the crash was 10G. I kept the last 1 million lines; I hope I'm able to add it.
- 11:41 AM Bug #15379: ceph mds continiously crashes and going into laggy state (stray purging problems)
- Hmm, you're getting heartbeatmap warnings which indicate that something in the MDS is blocking for much longer than i...
- 11:20 AM Bug #15379: ceph mds continiously crashes and going into laggy state (stray purging problems)
- Added the whole log file of the original crash
- 11:11 AM Bug #15379: ceph mds continiously crashes and going into laggy state (stray purging problems)
- Further up the log (before the minus-prefixed line numbers) there should be info from the actual failed assertion
- 08:12 AM Bug #15379 (Closed): ceph mds continiously crashes and going into laggy state (stray purging prob...
- We are removing a very large directory from cephFS on infernalis 9.2.1, and after a while, an MDS was in laggy state ...
- 08:15 PM Feature #15393: ceph-fuse: Request for logrotate for client side log files
- This is a little weird since ceph-fuse doesn't have a service controlling it, which I think is how we normally plug t...
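For reference, a logrotate fragment for a manually-started ceph-fuse might look like the sketch below. The log path glob and the assumption that ceph-fuse reopens its log on SIGHUP (as other Ceph daemons do) are mine, not from the ticket:

```conf
# Hypothetical /etc/logrotate.d/ceph-fuse sketch; adjust the glob to your
# client log naming.
/var/log/ceph/ceph-client.*.log {
    weekly
    rotate 7
    compress
    missingok
    notifempty
    sharedscripts
    postrotate
        # Send SIGHUP so the client reopens its log file (assumed behavior).
        killall -q -1 ceph-fuse || true
    endscript
}
```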
- 06:35 PM Feature #15393 (Resolved): ceph-fuse: Request for logrotate for client side log files
- While running cephfs with fuse, I needed to enable debug mode on the client. I quickly discovered it's quite easy to ...
- 05:53 PM Bug #15266 (Fix Under Review): ceph_volume_client purge failing on non-ascii filenames
- https://github.com/ceph/ceph/pull/8452
https://github.com/ceph/ceph-qa-suite/pull/934
- 03:35 PM Bug #15387 (Resolved): legacy pool 0 might actually mean pool 0.
- https://github.com/ceph/ceph/blob/master/src/common/fs_types.cc#L63
This is the case for old clusters, where the p...
- 07:21 AM Bug #15378 (Rejected): ceph_volume_client: hasty removal of OSD caps during deauthorization
- When deauthorizing access of a Ceph user to a volume (a CephFS directory),
the volume client removes OSD caps that a...
- 02:05 AM Bug #13980 (Resolved): all nfs v3 mounts fail in ovh lab
04/04/2016
- 10:08 PM Bug #10944: Deadlock, MDS logs "slow request", getattr pAsLsXsFs failed to rdlock
- Hi Greg,
I can confirm that restarting the MDS solves the issue. Thank you very much for that advice!
If...
- 05:18 PM Bug #14818 (Resolved): Cython librados broke libcephfs/ceph_volume_client
- This got fixed when cephfs.py was converted to cython
04/01/2016
- 09:02 PM Bug #15270: "mount error 5 = Input/output error" in smoke-master-distro-basic-openstack
- Run: http://pulpito.ceph.com/teuthology-2016-04-01_08:43:11-smoke-master-distro-basic-mira/
Jobs: ['101644', '101645']
- 08:08 PM Bug #10944: Deadlock, MDS logs "slow request", getattr pAsLsXsFs failed to rdlock
- That's probably a result of using older kernel clients in your case, Oliver. There's no max file size/count which wil...
- 05:14 PM Bug #10944: Deadlock, MDS logs "slow request", getattr pAsLsXsFs failed to rdlock
- Hi,
same here:
with clients:
3.10.0-327.10.1.el7.x86_64 = centos 7
3.19.0-25-generic #26~14.04.1-Ubuntu SM...
03/31/2016
- 02:53 PM Bug #15045 (In Progress): CephFSVolumeClient.evict should be limited by path, not just auth ID
- 02:27 PM Bug #13876 (Resolved): qa: openstack MPI connection failures
- The firewall on the OVH lab was configured manually. The code that is quoted is only used when dynamically provisioni...
- 02:18 PM Bug #13980 (Fix Under Review): all nfs v3 mounts fail in ovh lab
- https://github.com/ceph/teuthology/pull/834
- 10:29 AM Bug #15309 (Fix Under Review): Error: Creation of multiple filesystems is disabled. To enable th...
- https://github.com/ceph/ceph/pull/8393
- 05:16 AM Bug #15309: Error: Creation of multiple filesystems is disabled. To enable this experimental fea...
- first bit passes, but now it fails a bit further on:
2016-03-30T15:06:45.122 INFO:tasks.workunit.client.0.smithi04...
03/30/2016
- 11:54 PM Feature #15264: libcephfs: enable non-"ll" users to set their uid/gid
- Hmm, is setting config options really the interface we want for it? It's easy enough for the moment but I'm not sure ...
- 12:11 PM Feature #15264 (Rejected): libcephfs: enable non-"ll" users to set their uid/gid
- Oh, never mind, I was looking in the libcephfs API (where you can't set it), but not in the config opts (where you ca...
- 06:18 PM Bug #15309: Error: Creation of multiple filesystems is disabled. To enable this experimental fea...
- 10:45 AM Bug #15309 (Fix Under Review): Error: Creation of multiple filesystems is disabled. To enable th...
- https://github.com/ceph/ceph/pull/8372
- 01:31 PM Bug #15266 (In Progress): ceph_volume_client purge failing on non-ascii filenames
- 01:10 PM Bug #15045: CephFSVolumeClient.evict should be limited by path, not just auth ID
- I am working on this issue.
03/29/2016
- 05:35 PM Bug #15309: Error: Creation of multiple filesystems is disabled. To enable this experimental fea...
- Could be failing because default pool names changed, so this looks like a "create second" rather than "it already exi...
- 05:17 PM Bug #15309 (Resolved): Error: Creation of multiple filesystems is disabled. To enable this exper...
- 2016-03-28T14:22:24.439 INFO:tasks.rest_api.client.rest0.smithi037.stderr:127.0.0.1 - - [28/Mar/2016 21:22:24] "PUT /...
- 01:59 PM Bug #15303 (Fix Under Review): Client holds incorrect complete flag on dir after losing caps
- https://github.com/ceph/ceph/pull/8353
- 01:33 PM Bug #15303 (In Progress): Client holds incorrect complete flag on dir after losing caps
- Client::handle_cap_grant() passes the wrong parameter to Client::check_cap_issue()
- 01:29 PM Bug #15303: Client holds incorrect complete flag on dir after losing caps
- Reproducer is https://github.com/ceph/ceph-qa-suite/pull/917
- 11:07 AM Bug #15303 (Resolved): Client holds incorrect complete flag on dir after losing caps
We saw this from testing Manila. The symptom is that the Manila driver gets ENOTEMPTY when trying to remove a shar...
- 01:50 PM Bug #15304 (Resolved): qa: samba tests need to run on testing kernel
- I think some of our "random" samba failures are not so random. It looks like we no longer pass on any of our kclient ...
- 12:51 PM Bug #15270: "mount error 5 = Input/output error" in smoke-master-distro-basic-openstack
- The test uses a 3.13 kernel; maybe the failure was caused by an unsupported feature.
- 08:33 AM Support #15268: CephFS mount blocks VM
- Zheng Yan wrote:
> grab_super+0x2e is down_write(&s->s_umount). the backtrace show there is already a mounted cephfs...
- 02:43 AM Support #15268: CephFS mount blocks VM
- grab_super+0x2e is down_write(&s->s_umount). The backtrace shows there is already a mounted cephfs; you tried mounting...
03/28/2016
- 08:48 PM Bug #14805 (Resolved): Hadoop tests failing with EPERM
- 01:54 PM Support #15268: CephFS mount blocks VM
- Joao Castro wrote:
> Trying to mount it on a EC2 machine
>
> Mar 27 10:37:25 ip-172-31-5-103 kernel: [ 1320.75613...
03/27/2016
- 10:41 AM Support #15268: CephFS mount blocks VM
- Trying to mount it on a EC2 machine
Mar 27 10:37:25 ip-172-31-5-103 kernel: [ 1320.756137] INFO: task mount.ceph:1...
03/25/2016
- 08:20 AM Backport #15281 (Rejected): infernalis: standy-replay MDS does not cleanup finished replay threads
- 01:02 AM Feature #15065 (Resolved): multifs: add standby_for_fscid setting on MDS and pass in MMDSBeacon
- 12:57 AM Feature #15063 (Resolved): multifs: option to disable sanity() calls
- 12:57 AM Feature #15063: multifs: option to disable sanity() calls
- commit:b296629f1ace2b2c394e9303b17e952fabb06696, merged in f474c23d7c619fe014bd4cf42cab12802f1e4e2c
- 12:57 AM Fix #15062 (Resolved): multifs cleanup: pass feature bits into MDSMap from Filesystem::encode
- commit:68398258e152803cf9166a5bc3c0b1e153062d1e, merged in f474c23d7c619fe014bd4cf42cab12802f1e4e2c
- 12:45 AM Bug #15167 (Resolved): CID 1355575: null CDir* can get pushed on ScrubStack?
- 12:44 AM Bug #15210 (Rejected): qa: snaptests-0.sh: file exists error after deleting+reusing name
- I thought each of these was running in their own subdirectory, but if that's not the case then this is expected behav...
03/24/2016
- 11:44 PM Support #15268: CephFS mount blocks VM
- Hmm, if it were newer features in userspace than in the kernel you should be getting errors. Zheng, any idea?
- 11:42 PM Support #15268: CephFS mount blocks VM
- Joao Castro wrote:
> Greg Farnum wrote:
> > You'll need to provide a little more context. You're just trying to mou...
- 11:41 PM Support #15268: CephFS mount blocks VM
- Greg Farnum wrote:
> You'll need to provide a little more context. You're just trying to mount CephFS inside of a VM...
- 10:33 PM Support #15268: CephFS mount blocks VM
- You'll need to provide a little more context. You're just trying to mount CephFS inside of a VM, and the mount is han...
- 03:47 PM Support #15268 (Resolved): CephFS mount blocks VM
- cat /proc/modules | grep -i ceph
ceph 315392 0 - Live 0xffffffffc0460000
libceph 241664 1 ceph, Live 0xffffffffc041...
- 10:59 PM Bug #15235: MDS : erroneous error message about reading config file
- Ok I didn't know it could be related. No problem in this case :)
- 10:52 PM Bug #15235 (Fix Under Review): MDS : erroneous error message about reading config file
- 10:51 PM Bug #15235: MDS : erroneous error message about reading config file
- Oh. I gather you're running into #14144, given https://github.com/ceph/ceph/pull/7060#issuecomment-200745759. The "un...
- 10:36 PM Bug #15235: MDS : erroneous error message about reading config file
- Yes I know, but it ends like this... :(
- 10:26 PM Bug #15235: MDS : erroneous error message about reading config file
- That paste doesn't contain an error of any kind in it. :) It seems to end with the respawn command getting invoked, b...
- 09:06 AM Bug #15235: MDS : erroneous error message about reading config file
- It should respawn itself, but fails on reading the configuration file. But as you can see in strace, it does not *try* to...
- 12:17 AM Bug #15235: MDS : erroneous error message about reading config file
- If you're failing an MDS from the monitor it's respawning itself, so maybe something weird happened there. Or maybe i...
- 10:27 PM Bug #14805: Hadoop tests failing with EPERM
- d'oh, right. Okay, I get the problem now. I've run this through a couple of times in my latest integration branch, bt...
- 12:09 PM Bug #14805: Hadoop tests failing with EPERM
- Greg Farnum wrote:
> Wait, can you expand on that John? I wasn't really looking at the python tests, although I know...
- 01:04 AM Bug #14805: Hadoop tests failing with EPERM
- Wait, can you expand on that John? I wasn't really looking at the python tests, although I know it involved root owne...
- 10:20 PM Bug #14144 (Pending Backport): standy-replay MDS does not cleanup finished replay threads
- 04:47 PM Bug #15270 (Resolved): "mount error 5 = Input/output error" in smoke-master-distro-basic-openstack
- Run: http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2016-03-24_05:00:02-smoke-master-distro-basic-openstack/
Jobs...
- 03:04 PM Bug #15266 (Resolved): ceph_volume_client purge failing on non-ascii filenames
- ...
- 01:59 PM Feature #15264 (Rejected): libcephfs: enable non-"ll" users to set their uid/gid
- Currently libcephfs mostly doesn't pass through a uid/gid to Client, and Client defaults to reading the uid/gid of th...
- 12:18 AM Bug #15008 (Resolved): fuse expects root inode number to be FUSE_ROOT_ID
03/23/2016
- 05:26 PM Feature #15252 (New): client: support fadvise DONTNEED
- This can apply to both our own caching, and be passed through to the OSDs. DONTNEED is great because it prevents poll...
03/22/2016
- 12:58 PM Bug #15235 (Closed): MDS : erroneous error message about reading config file
- Hi,
I have a bug on Infernalis with MDS.
When an MDS is failing and going to standby mode (ceph mds fail X), it ...
- 12:10 PM Feature #15065 (Fix Under Review): multifs: add standby_for_fscid setting on MDS and pass in MMDS...
- https://github.com/ceph/ceph/pull/8257
- 01:56 AM Bug #15210: qa: snaptests-0.sh: file exists error after deleting+reusing name
- Two instances of snaptest-0.sh were executed in the root directory at the same time; that's why we saw EEXIST.
- 01:09 AM Bug #15210: qa: snaptests-0.sh: file exists error after deleting+reusing name
- Oh duh. I am having serious trouble setting up the qa-suite in a way that makes this test and all the others happy.
...
03/21/2016