Activity
From 09/07/2015 to 10/06/2015
10/06/2015
- 06:00 PM Bug #13364: LibCephFS locking tests are failing and/or lockdep asserting
- 02:19 PM Support #13267 (Closed): mds heap stats cause warn message but working
- Unless there are other symptoms, this log warning is not a problem. It's an indication that a client disconnected, wh...
10/05/2015
- 04:09 PM Bug #13364 (Resolved): LibCephFS locking tests are failing and/or lockdep asserting
- Run: http://qa-proxy.ceph.com/teuthology/teuthology-2015-10-04_05:00:08-smoke-master-distro-basic-multi
Job: 1088065...
10/02/2015
- 09:11 PM Bug #13334: delayed revoke warning in test_client_recovery test
- Oh, it's even simpler. We can just switch the order of locker->tick and server->find_idle_sessions to get rid of this b...
- 09:04 PM Bug #13334: delayed revoke warning in test_client_recovery test
- Yeah, this is racy because mds_session_timeout is 60s, and so is the threshold for emitting that warning.
Actually...
- 05:59 AM Bug #13334 (Resolved): delayed revoke warning in test_client_recovery test
- http://pulpito.ceph.com/teuthology-2015-09-29_23:04:01-fs-infernalis---basic-multi/1077881/...
- 08:16 AM Bug #12297 (Resolved): ceph-fuse 0.94.2-1trusty segfaults / aborts
10/01/2015
- 11:14 AM Feature #13316 (Closed): cephfs-data-scan: delete unlinked file objects
- Oh, never mind. I just realised that we never have this case, because during scan_extents we'll always create the 0t...
- 11:09 AM Feature #13316 (Closed): cephfs-data-scan: delete unlinked file objects
Two types:
1. Non-0th data objects that have no 0th object
2. 0th data objects that were not tagged in the latest fo...
- 05:35 AM Support #13267: mds heap stats cause warn message but working
- Still waiting for a solution.
09/30/2015
- 09:42 PM Documentation #13311: explain user permission syntax, details
- object_prefix is not mentioned in this doc at all
- 09:36 PM Documentation #13311: explain user permission syntax, details
- joshd notes that the ceph-authtool manpage has more explanation; maybe at least a "see also", but perhaps some of the...
- 09:33 PM Documentation #13311 (Resolved): explain user permission syntax, details
- AFAICT http://docs.ceph.com/docs/master/rados/operations/user-management/ doesn't explain:
# for an OSD, 'class-re...
- 05:39 AM Support #13055: Problem with disconnect fuse by mds
- Thx, it works.
09/29/2015
- 03:02 PM Bug #13268: Secondary groups are not read /w SAMBA 4.2.2 & CEPHFS VFS module
- I have cloned https://github.com/ceph/samba.git but it doesn't work with secondary groups
- 01:58 PM Bug #13268: Secondary groups are not read /w SAMBA 4.2.2 & CEPHFS VFS module
- You mean the development version of SAMBA? Which version?
- 01:42 PM Bug #13268: Secondary groups are not read /w SAMBA 4.2.2 & CEPHFS VFS module
- Secondary groups should work if you use the newest development code. I'm working on POSIX ACL support.
- 06:58 AM Bug #13268 (Resolved): Secondary groups are not read /w SAMBA 4.2.2 & CEPHFS VFS module
- Tested with a Windows 7 & 2008 client.
When I'm using the ceph_vfs module in SAMBA it seems secondary groups are n...
- 02:17 PM Bug #13271: Missing dentry in cache when doing readdirs under cache pressure (?????s in ls-l)
- I think this is 62dd63761701a7e0f7ce39f4071dcabc19bb1cf4, which doesn't appear to be in a release yet. See #12297.
...
- 02:02 PM Bug #13271 (New): Missing dentry in cache when doing readdirs under cache pressure (?????s in ls-l)
- The original reporter confirmed that he saw this with 0.94
- 01:22 PM Bug #13271: Missing dentry in cache when doing readdirs under cache pressure (?????s in ls-l)
- It's fixed by a5984ba34cb684dae623df22e338f350c8765ba5; updating v0.87.1 should fix this problem.
- 01:09 PM Bug #13271 (Closed): Missing dentry in cache when doing readdirs under cache pressure (?????s in ...
- The above circumstance is not supposed to happen, because dirp->buffer holds references to inodes.
I found it's a bug i...
- 10:53 AM Bug #13271 (Resolved): Missing dentry in cache when doing readdirs under cache pressure (?????s i...
Fuse sends us a series of readdirs, because it only accepts so many entries in each. We only do an MDS request on ...
- 10:54 AM Feature #13272 (New): Run fuse client workloads under cache pressure
We should run at least some workloads in situations where we set client_cache_size to something fairly low, so that...
- 06:44 AM Support #13267 (Closed): mds heap stats cause warn message but working
- When I'm using the command ceph mds.node2 (or any of the MDS nodes) heap stats, I see the memory usage output, but in a log file...
- 05:25 AM Bug #11783: protocol: flushing caps on MDS restart can go bad
- I guess I should note that I only saw this the once and it also included a (slightly outdated) version of the vstart ...
- 05:16 AM Bug #11783: protocol: flushing caps on MDS restart can go bad
- I merged this by checking that it's working manually, but the testing isn't behaving properly so I haven't merged tha...
- 05:21 AM Bug #13167 (Resolved): mds: replay gets stuck (on out-of-order journal replies?)
09/28/2015
- 01:33 PM Bug #13256 (Fix Under Review): I/O error with cephfs accessing root .snap directory on v9.0.3
- https://github.com/ceph/ceph/pull/6095
- 11:52 AM Feature #13259 (New): Option to disable always writing backtraces to the default data pool
- Currently, this is how we deal with hardlinks vs. layouts specifying non-default data pools: files in the non-default...
09/26/2015
- 05:07 PM Bug #13256 (Resolved): I/O error with cephfs accessing root .snap directory on v9.0.3
- I am running a test Ceph cluster using Ceph v9.0.3 with all Kernels at 4.2.0 on Ubuntu Trusty. I have enabled snapsho...
09/25/2015
- 02:03 PM Support #13211 (Closed): profiler and getting some memory info with it
- I think this got handled in irc.
- 01:39 PM Support #13211: profiler and getting some memory info with it
- There is only root, no other users. Even if I run it from localhost there is the same error: 1 mds.0.39 handle_command: re...
- 08:24 AM Support #13211: profiler and getting some memory info with it
- Greg Farnum wrote:
> Apparently you're not using a client with enough caps on the MDS to give it instructions. The c...
- 05:29 AM Support #13211: profiler and getting some memory info with it
- I ran it even from the MDS node from which I'm trying to get heap stats... so I guess the problem here is not in the client...
what...
09/24/2015
- 08:21 PM Bug #12674 (Resolved): Semi-reproducible crash of ceph-fuse
- 08:12 PM Feature #13231: kclient: support SELinux
- IMHO, it might be missing hooks, like the security_inode_init_security() calls.
- 08:03 PM Feature #13231: kclient: support SELinux
- Greg, from the first 2nd test, ceph fs was able to set xattr (thanks to #1878). But ceph failed to set security.secur...
- 07:43 PM Feature #13231: kclient: support SELinux
- See #5486, #1878, and others in the tracker — I think CephFS is ready for support now, but SELinux needs to get modif...
- 07:19 PM Feature #13231 (Duplicate): kclient: support SELinux
- I cannot set SELinux labels on a ceph mount.
Environment:
[root@host16-rack08 ~]# modinfo ceph
filename: /l...
- 06:35 PM Support #13211: profiler and getting some memory info with it
- Apparently you're not using a client with enough caps on the MDS to give it instructions. The client.admin key that's...
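For illustration, caps that allow tell/daemon commands to the MDS would look something like this (key name and exact caps here are an example, not taken from this ticket):
ceph auth caps client.admin mon 'allow *' osd 'allow *' mds 'allow *'
ceph auth get client.admin    # verify the caps actually applied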
- 01:20 PM Support #13211: profiler and getting some memory info with it
- same situation with mds memory check onto virtual machines(same osd and mons check goes fine):
root@node1:~# ceph ...
- 10:05 AM Feature #12334: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- Pinged Matt & Adam about this yesterday, Matt's planning to work on it at some stage. In some cases we may want to s...
09/23/2015
- 04:30 PM Feature #12334: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- It looks like the ganesha FSAL interface already includes the function `up_async_invalidate` for this sort of thing, ...
- 02:55 PM Support #13211: profiler and getting some memory info with it
- p.s.
mon and ceph info works fine:
ceph tell osd.0 heap stats
ceph tell mon.mon00 heap stats
- 02:26 PM Support #13211 (Closed): profiler and getting some memory info with it
- ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)
3.13.0-61-generic #100-Ubuntu 2015 x86_64 x86_64 x86...
- 07:17 AM Bug #12674: Semi-reproducible crash of ceph-fuse
- I can confirm that the issue is gone with the latest ceph source from Git. Sorry for taking so long to build and test...
- 03:20 AM Bug #13166: MDS: standby-replay does not change client_incarnation properly
- For firefly, the standby-replay MDS also uses 0 as client_incarnation; its ID is mds.x.0.
- 01:15 AM Bug #13166: MDS: standby-replay does not change client_incarnation properly
- Zheng, can you dig up a firefly test run and make sure the behavior of standby-replay daemons there is the same as it...
09/22/2015
- 01:49 PM Bug #13129 (Rejected): qa: not starting ceph-mds daemons
- 08:33 AM Bug #13167 (Fix Under Review): mds: replay gets stuck (on out-of-order journal replies?)
- 08:33 AM Bug #13167: mds: replay gets stuck (on out-of-order journal replies?)
- https://github.com/ceph/ceph/pull/6025
- 02:40 AM Bug #12674: Semi-reproducible crash of ceph-fuse
- It's likely been fixed by pull request https://github.com/ceph/ceph/pull/4753 (it's a large change, we haven't back-por...
- 02:30 AM Support #13055: Problem with disconnect fuse by mds
- It's likely been fixed by pull request https://github.com/ceph/ceph/pull/4753. Could you try compiling ceph-fuse from...
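For reference, a rough sketch of building from source at that time (the autotools-era build; exact steps are the generic ones, check the branch's README):
git clone https://github.com/ceph/ceph.git
cd ceph
./autogen.sh
./configure
make    # the ceph-fuse binary ends up under src/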
09/21/2015
- 11:14 PM Feature #12204 (Resolved): ceph-fuse: warn and shut down when there is no MDS present
- https://github.com/ceph/ceph/pull/5416
Thanks Yuan Zhou!
- 11:13 PM Bug #12971 (Resolved): TestQuotaFull fails
- https://github.com/ceph/ceph/pull/5942
- 11:10 PM Bug #11835 (Resolved): FuseMount.umount_wait can hang
- 10:51 PM Bug #12506 (Resolved): "Fuse mount failed to populate" error
- 09:18 PM Bug #13167: mds: replay gets stuck (on out-of-order journal replies?)
- We should be detecting holes in the journal and shutting down with a nice message or clear assert or something instea...
- 12:59 PM Bug #13167 (Duplicate): mds: replay gets stuck (on out-of-order journal replies?)
- The journal's write_pos seems to be pointing to somewhere in object 200.00000002, but the size of object 200.00000001 is 313...
- 08:31 PM Feature #13193 (Duplicate): qa: test behavior on cache pools
- 06:48 PM Feature #13193 (Duplicate): qa: test behavior on cache pools
- We don't run the FS on any cache pools right now. We should start doing so on an appropriate assortment (TBD). Probab...
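For reference, the basic cache-tier setup we'd be layering the FS on looks roughly like this (pool names here are placeholders):
ceph osd pool create fs-data-cache 64
ceph osd tier add cephfs_data fs-data-cache
ceph osd tier cache-mode fs-data-cache writeback
ceph osd tier set-overlay cephfs_data fs-data-cache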
- 01:44 PM Bug #12674: Semi-reproducible crash of ceph-fuse
- Please also note that this crash can be triggered with a simple operation of the style
> find . -name \*.txt | xa...
- 01:20 PM Bug #12674: Semi-reproducible crash of ceph-fuse
- PR https://github.com/ceph/ceph/pull/4753 may fix this issue. Could you try compiling ceph-fuse from the newest ceph ...
- 11:10 AM Bug #12674: Semi-reproducible crash of ceph-fuse
- > Status changed from New to Need More Info
What kind of info are you looking for? As I've just hit another situat...
- 01:32 PM Bug #13166: MDS: standby-replay does not change client_incarnation properly
- 07:44 AM Bug #13166 (Fix Under Review): MDS: standby-replay does not change client_incarnation properly
- https://github.com/ceph/ceph/pull/6003
- 01:25 PM Support #13055: Problem with disconnect fuse by mds
- Attached the latest logs.
The last error msg is different from the previous one, so maybe I should attach others when they happen?
- 01:16 PM Support #13055: Problem with disconnect fuse by mds
- I can't open the issue either. But this seems to be a duplicate of #12674
- 01:08 PM Support #13055: Problem with disconnect fuse by mds
- We use clients with kernel version 3.13.0 (the current client host which got that error has 3.13.0-58-generic, others ha...
- 03:05 AM Support #13055: Problem with disconnect fuse by mds
- We never saw this backtrace before. Which version of kernel are you using? Please apply the attached debug patch to t...
09/19/2015
- 12:07 AM Bug #13167 (Resolved): mds: replay gets stuck (on out-of-order journal replies?)
- ubuntu-2015-09-17_16:55:52-fs-greg-fs-testing---basic-multi/1061690/ceph-mds.a.log
This MDS went in and out of rep...
- 12:04 AM Bug #13166: MDS: standby-replay does not change client_incarnation properly
- If we need more logs, I copied the standby MDS log to ubuntu-2015-09-17_16:55:52-fs-greg-fs-testing---basic-multi/106...
09/18/2015
- 11:52 PM Bug #13166: MDS: standby-replay does not change client_incarnation properly
- Obvious fix is to have MDSRank check the incarnation and update, but I want us to look more deeply at how the replayi...
- 11:24 PM Bug #13166: MDS: standby-replay does not change client_incarnation properly
- Hmm, I don't think we should actually be doing operations as mds.0.0 when we're a standby for the real mds.0.0 either...
- 11:22 PM Bug #13166: MDS: standby-replay does not change client_incarnation properly
- 10:59 PM Bug #13166 (Resolved): MDS: standby-replay does not change client_incarnation properly
- ...
- 01:02 PM Support #13055: Problem with disconnect fuse by mds
- here it is:
(gdb) bt
#0  0x00007f9afcad779b in raise (sig=11) at ../nptl/sysdeps/unix/sysv/linux/pt-raise.c:37
#...
- 12:50 PM Support #13055: Problem with disconnect fuse by mds
- The coredump file is useless without the executable. Could you use gdb to check the coredump file and send the backtrace to us ...
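For example (the binary path is the usual package location; substitute whatever actually produced the core):
gdb /usr/bin/ceph-fuse core.ceph-fuse.15542
(gdb) bt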
- 06:16 AM Support #13055: Problem with disconnect fuse by mds
- http://148.251.180.58/files/cores/core.ceph-fuse.15542.tar.gz - core dump
attached file-log
- 05:40 AM Support #13055: Problem with disconnect fuse by mds
- Greetings.
We have the same problem after compiling and merging from git. Still ceph-fuse disconnects after some ti...
- 01:36 AM Bug #12506 (In Progress): "Fuse mount failed to populate" error
- https://github.com/ceph/ceph/pull/5966#issuecomment-141302916
09/17/2015
- 04:03 PM Bug #12506 (Fix Under Review): "Fuse mount failed to populate" error
- https://github.com/ceph/ceph/pull/5966
- 12:48 PM Bug #12506: "Fuse mount failed to populate" error
- This timeout only happens for jobs that contain clusters/standby-replay.yaml. I reproduced this issue locally by setting "...
- 02:52 PM Bug #13129 (Need More Info): qa: not starting ceph-mds daemons
- That commit is now about 16 hours old, and these jobs were started about 24 hours ago — although they didn't get mach...
- 07:52 AM Bug #13129: qa: not starting ceph-mds daemons
- The bug was introduced by commit 685d76a77ca16ca601a99148ef507cfde1fb3593 "ceph: wait for CephFS to be healthy before ...
- 12:56 PM Bug #13138: Segfault shutting down python-cephfs
- 12:48 PM Bug #13138 (Fix Under Review): Segfault shutting down python-cephfs
- https://github.com/ceph/ceph/pull/5964
- 10:36 AM Bug #13138: Segfault shutting down python-cephfs
- So perhaps we weren't seeing this in automated tests because you have to keep the process alive for a while after shu...
- 10:17 AM Bug #13138 (Resolved): Segfault shutting down python-cephfs
- ...
09/16/2015
- 10:42 PM Bug #13129 (Rejected): qa: not starting ceph-mds daemons
- http://pulpito.ceph.com/teuthology-2015-09-14_23:10:02-knfs-master-testing-basic-multi/1057697
http://pulpito.ceph.c... - 10:23 PM Bug #12506: "Fuse mount failed to populate" error
- This is continuing to cause trouble in the nightlies. http://pulpito.ceph.com/teuthology-2015-09-14_23:04:02-fs-maste...
- 09:50 PM Fix #13126: qa: ceph-fuse flushes very slowly in some workunits
- This is blocked by/equivalent to #13127 in the Ceph project.
- 09:45 PM Fix #13126 (Resolved): qa: ceph-fuse flushes very slowly in some workunits
- Our ffsb and fsync workunits both generate lots of small IOs at random offsets, each of which becomes its own Op to t...
09/15/2015
- 10:02 PM Bug #12895: Failure in TestClusterFull.test_barrier
- Hmm, this is a bit weird. I don't know why mount_a is sometimes holding on to caps that get revoked when mount_b cre...
- 09:14 PM Bug #12971 (Fix Under Review): TestQuotaFull fails
- 08:12 PM Bug #13107: ceph-fuse: handle multiple mounts on the same host (don't set nonce with only PID)
- Yeah, Josh is working on a fix for this in all the clients. libcephfs apparently already solves this problem with 48 ...
- 08:09 PM Bug #13107: ceph-fuse: handle multiple mounts on the same host (don't set nonce with only PID)
- Hah, if I recall correctly I was a bit scandalised back in 2014 when I found out we treated a PID as a nonce :-) Set...
- 07:41 PM Bug #13107 (Resolved): ceph-fuse: handle multiple mounts on the same host (don't set nonce with o...
- See #13032. It's possible for multiple processes on the same host to have the same PID if they're in separate namespa...
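A quick illustration of the collision, assuming two separate PID namespaces (e.g. containers): each namespace numbers its processes independently, so two processes can report the same PID.
docker run --rm busybox sh -c 'echo $$'    # the shell is PID 1 in its namespace
docker run --rm busybox sh -c 'echo $$'    # prints 1 again in a second container
Two ceph-fuse mounts started that way would then derive the same PID-based nonce.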
- 12:37 PM Feature #12137: cephfs-data-scan: backward scan of dirfrag objects, inject orphans
non-fragmented-dir case: https://github.com/ceph/ceph/pull/5941
- 11:24 AM Bug #13067: MDSRank unhealthy on hammer -> infernalis upgrade
- 08:36 AM Bug #13067 (Fix Under Review): MDSRank unhealthy on hammer -> infernalis upgrade
- https://github.com/ceph/ceph/pull/5937/files
- 07:15 AM Support #13055 (Closed): Problem with disconnect fuse by mds
- 06:50 AM Support #13055: Problem with disconnect fuse by mds
- Thx. The topic can be closed.
- 06:46 AM Support #13055: Problem with disconnect fuse by mds
- Yes, we have fixed it. But the fix was not included in any release (we fixed it after releasing v0.94.3). You need to ...
- 06:21 AM Support #13055: Problem with disconnect fuse by mds
- I have found the SAME problem in the ceph tracker: http://tracker.ceph.com/issues/12297
logs are the same as in the 12...
09/14/2015
- 08:05 PM Fix #11187 (Resolved): ceph-qa-suite: reduce required machine numbers
- This is done for the regular suite, at least, and has not been provoking new problems as far as I can tell.
- 01:02 PM Support #13055: Problem with disconnect fuse by mds
- Sergey Mir wrote:
> Thx, for now I need to wait for the next disconnect of fuse to look at the message log...
> Besides, I have ...
- Thx, for now I need to wait for the next disconnect of fuse to look at the message log...
Besides, I have changed the filestore queue...
- Make sure the /var/log/ceph directory exists and has the proper permissions. Add "debug client = 20" to the client section of ceph...
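For example, in ceph.conf on the client (the log file line is just an explicit example of a location; adjust as needed):
[client]
    debug client = 20
    log file = /var/log/ceph/ceph-client.$pid.log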
- 07:28 AM Support #13055: Problem with disconnect fuse by mds
- There are no such log files on the client side where ceph-fuse is used...
As I understand it, logging is turned off, so ...
- Sergey Mir wrote:
> Zheng Yan wrote:
> Can you find any clue in ceph-fuse.log?
>
> there is no ceph-fuse.log on...
- 06:21 AM Support #13055: Problem with disconnect fuse by mds
- Sergey Mir wrote:
> The ceph-fuse process still works after the "disconnect", so I have to umount and only then mount again to ...
- 06:19 AM Support #13055: Problem with disconnect fuse by mds
- Sergey Mir wrote:
> hello, is there any way to fix the problem?
try "ceph daemon client.xxx kick_stale_sessions" - 05:18 AM Support #13055: Problem with disconnect fuse by mds
- hello, is there any way to fix the problem?
09/12/2015
- 04:54 PM Support #13055: Problem with disconnect fuse by mds
- The ceph-fuse process still works after the "disconnect", so I have to umount and only then mount again to get back to work...
- 03:14 PM Bug #13067: MDSRank unhealthy on hammer -> infernalis upgrade
- Hmm, kinda interesting that it's happening at the point in the log where the mons are restarted. Something in monc b...
- 12:38 PM Bug #13067 (Resolved): MDSRank unhealthy on hammer -> infernalis upgrade
- ...
09/11/2015
- 04:01 PM Support #13055: Problem with disconnect fuse by mds
- when "transported point is not connected" happen, could you check if the ceph-fuse process still exists. If not, run ...
- 02:35 PM Support #13055: Problem with disconnect fuse by mds
- And about the high load... I've checked the MDS nodes for memory overload or some other physical overload; didn't find anythi...
- 02:31 PM Support #13055: Problem with disconnect fuse by mds
- Zheng Yan wrote:
> Can you find any clue in ceph-fuse.log?
There is no ceph-fuse.log on the clients...
fuse has the same...
- 02:26 PM Support #13055: Problem with disconnect fuse by mds
- dmesg and other client logs are empty...
I have 5 clients mounted to ceph.
3 of them are used more often (ma...
- 02:20 PM Support #13055: Problem with disconnect fuse by mds
- "Transport endpoint is not connected" almost always means that the ceph-fuse client process has gone away. Check dmes...
- 02:20 PM Support #13055: Problem with disconnect fuse by mds
- Zheng Yan,
yes, as I pasted earlier in my config, I have changed it to mds_session_autoclose = 60
- 02:19 PM Support #13055: Problem with disconnect fuse by mds
- Sergey Mir wrote:
> fuse is just disconnecting from ceph and says "transport endpoint is not connected" and that's all...
- 02:14 PM Support #13055: Problem with disconnect fuse by mds
- >> 2015-09-10 13:11:17.068839 mds.0 [INF] closing stale session client.343916 (client_ip):0/8623 after 64.626196
MDS...
- 01:59 PM Support #13055: Problem with disconnect fuse by mds
- fuse is just disconnecting from ceph and says "transport endpoint is not connected" and that's all, no logs or some oth...
- 01:41 PM Support #13055: Problem with disconnect fuse by mds
- We need to know how the system is misbehaving -- not just to see log snippets.
What is the symptom of the problem ... - 12:59 PM Support #13055: Problem with disconnect fuse by mds
- Need some solution to solve that problem. It drops again...
2015-09-11 15:42:57.450895 7fb118d06700 0 -- (mds00/m... - 11:09 AM Support #13055: Problem with disconnect fuse by mds
- fuse loses connections a few times a day, with output in the logs as I pasted in my previous message from a few different log ...
- 10:43 AM Support #13055: Problem with disconnect fuse by mds
- The log snippets you've pasted don't make much sense — you have some entries going back in time, and it looks like th...
- 10:25 AM Support #13055 (Closed): Problem with disconnect fuse by mds
- Hello. Nobody in the ceph chat knows the answer, so I'm asking for advice here.
I have ceph version 0.94.3 (95cefea9fd...
- 09:06 AM Backport #13044 (Resolved): LibCephFS.GetPoolId failure
- https://github.com/ceph/ceph/pull/5887
- 01:19 PM Bug #12971 (In Progress): TestQuotaFull fails
- Looks like this commit might have broken it:...
- 02:03 AM Bug #12806: nfs restart failures
09/10/2015
- 11:16 AM Bug #12209: CephFS should have a complete timeout mechanism to avoid endless waiting or unpredict...
- Yes. I'm also not very comfortable with the patches that I've looked at, but it's been a while (they have been langui...
- 11:08 AM Bug #12209: CephFS should have a complete timeout mechanism to avoid endless waiting or unpredict...
- This all sounds a bit strange. The designed behaviour is to block on slow metadata operations, not time out. How ma...
- 11:05 AM Bug #12971: TestQuotaFull fails
- Greg's theory sounds plausible, will look
09/09/2015
- 12:53 PM Backport #12499 (Resolved): ceph-fuse 0.94.2-1trusty segfaults / aborts
- 12:33 PM Bug #12598 (Pending Backport): LibCephFS.GetPoolId failure
- 12:28 PM Bug #13011 (Duplicate): LibCephFS.ReleaseMounted: FAILED assert(crypto_context != __null)
- 12:23 PM Bug #13011 (Duplicate): LibCephFS.ReleaseMounted: FAILED assert(crypto_context != __null)
- This happened on a hammer branch with one CephFS related backport ( https://github.com/ceph/ceph/commit/256620e37fd94...
- 12:10 PM Bug #12875: LibCephFS.LibCephFS.InterProcessLocking segment fault.
- This might be a consequence of the wip-mds-caps branch, and will get more testing in there.
- 12:02 PM Bug #12971: TestQuotaFull fails
- That's not quite the problem. The MDS calls Objecter::unset_honor_osdmap_full(), at which point the Objecter should l...
09/08/2015
- 01:40 PM Bug #12820 (Resolved): stuck looping on 'ls /sys/fs/fuse/connections'
- 01:39 PM Bug #12875 (Can't reproduce): LibCephFS.LibCephFS.InterProcessLocking segment fault.
- 08:09 AM Bug #12875 (Need More Info): LibCephFS.LibCephFS.InterProcessLocking segment fault.
- The lockdep errors are a side effect of the previous failure...
- 09:23 AM Bug #12971: TestQuotaFull fails
- The objecter pauses all write OSD requests (including delete requests) when the cluster is full or the pool quota is reached. T...
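A sketch of the situation the test creates (pool name and quota value are placeholders):
ceph osd pool set-quota cephfs_data max_bytes 1048576    # tiny quota
# fill the pool past the quota; once the full flag is set, a subsequent
# delete is itself a write op, so the objecter pauses it too instead of
# letting it free up space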
- 08:29 AM Bug #12674 (Need More Info): Semi-reproducible crash of ceph-fuse
09/07/2015
- 10:53 AM Bug #12732 (Resolved): very slow read when a file has holes.
- https://github.com/ceph/ceph-client/commit/3e8b3d8cbf92aba5485e68bc5cba6eee2075ee71
- 10:33 AM Bug #12895: Failure in TestClusterFull.test_barrier
- ...
- 08:38 AM Bug #12971: TestQuotaFull fails
- ...