Activity
From 04/10/2016 to 05/09/2016
05/09/2016
- 01:38 PM Bug #15780 (Resolved): Applications using kernel based cephfs and mmap fail with SIGBUS if debugg...
- OS/Kernel: Ubuntu Trusty with Xenial LTS backport kernel 4.4.0-21-generic
Ceph: 0.94.6
The PTRACE_ATTACH syscall ...
- 10:00 AM Bug #15683 (Closed): RBD image features settings are not copied onto the clone.
- Not a kernel client issue, duplicates #15388.
- 09:53 AM Bug #15694: process enter D status when running on rbd
- "socket closed (con state OPEN)" are harmless, most likely.
Could you check your OSD logs for any "slow request" w...
- 09:45 AM Bug #13081 (In Progress): data on rbd image get corrupted when pool quota is smaller than the siz...
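The quota condition described in #13081 can be sketched with the CLI. This is a minimal sketch, assuming a running Ceph cluster with the `ceph`/`rbd` tools; the pool and image names are hypothetical:

```shell
# Hypothetical names; assumes a running Ceph cluster with the ceph/rbd CLIs.
# Create a pool whose byte quota is smaller than the image placed in it.
ceph osd pool create smallpool 64
ceph osd pool set-quota smallpool max_bytes 1073741824   # 1 GiB quota
rbd create smallpool/bigimg --size 4096                  # 4 GiB image
# Filling the image past the quota is where the reported corruption appears.
```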
05/04/2016
- 06:31 PM Bug #15735 (Duplicate): core dump during krbd concurrent.sh
- Not sure if the mon core dumped or it's something from concurrent.sh
Env:
description: krbd/rbd-nomount/{conf.yaml c...
- 06:11 PM Bug #15734 (Resolved): xfs test failure during krbd run
- Two-client test; one test failed on one of the clients
2016-05-03T22:37:18.259 INFO:tasks.rbd.client.0.vpm012.stdout:xfs/014...
- 08:42 AM Bug #12160 (Resolved): kernel BUG at fs/ceph/caps.c:2307, ceph_put_wrbuffer_cap_refs
05/03/2016
- 02:33 PM Bug #15255: kclient: lost filesystem operations (on mds disconnect?)
- I don't think that's acceptable unless we have a way of verifying nobody else has modified (or maybe even just touche...
- 03:01 AM Bug #15255: kclient: lost filesystem operations (on mds disconnect?)
- Greg Farnum wrote:
> Zheng, does that patch mean that the client no longer drops any dirty state? Or does it keep th...
- 05:37 AM Bug #15694 (Closed): process enter D status when running on rbd
- Hi,
Occasionally, a process working with data located on rbd (formatted with ext4) may enter D status (stuck).
I c...
- 04:08 AM Bug #12159 (Need More Info): setfilelock requests stuck
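For D-state reports like #15694 above, stuck tasks and the kernel function they block in can be listed with standard Linux tools; this sketch makes no Ceph-specific assumptions:

```shell
# List tasks in uninterruptible sleep (D state) together with the kernel
# function they are blocked in (WCHAN); rbd/cephfs hangs usually show a
# ceph- or rbd-related symbol there.
ps -eo pid,stat,wchan:32,comm | awk 'NR==1 || $2 ~ /^D/'
# For a specific stuck PID, the full kernel stack is often more telling:
#   cat /proc/<pid>/stack        # needs root
```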
- 03:59 AM Bug #14487 (Resolved): ctime may go backward in cephfs kernel client
- 03:58 AM Bug #14232 (Resolved): Kernel NULL pointer dereference in __dcache_readdir
- 03:58 AM Bug #14485 (Resolved): cephfs kernel client does not flush ctime
- 03:57 AM Bug #15552 (Closed): cephfs kernel client timeout error 110
05/02/2016
- 04:46 PM Bug #15683: RBD image features settings are not copied onto the clone.
- Thanks
- 04:41 PM Bug #15683: RBD image features settings are not copied onto the clone.
- See ticket #15388
- 09:09 AM Bug #15683: RBD image features settings are not copied onto the clone.
- Attaching the output from all commands:
root@host1:~$ ceph osd pool create pool4 1024
pool 'pool4' created
ro...
- 09:06 AM Bug #15683 (Closed): RBD image features settings are not copied onto the clone.
- Steps to reproduce:
1) Create rbd image
2) Disable some features like layering, exclusive-lock, object-map etc....
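The steps above can be sketched with the rbd CLI. Names are hypothetical and a running cluster is assumed; the exact feature list in the original report is truncated, so the features below are illustrative:

```shell
# Hypothetical pool/image names; assumes a running Ceph cluster.
ceph osd pool create pool4 1024
rbd create pool4/img1 --size 1024
# Disable some features on the parent image
rbd feature disable pool4/img1 exclusive-lock object-map fast-diff
# Snapshot, protect, and clone it
rbd snap create pool4/img1@snap1
rbd snap protect pool4/img1@snap1
rbd clone pool4/img1@snap1 pool4/img1-clone
# The clone's features may differ from the parent's (the behavior
# tracked in #15388):
rbd info pool4/img1       | grep features
rbd info pool4/img1-clone | grep features
```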
04/29/2016
- 08:44 PM Bug #14360: Cephfs kernel client ceph_send_cap_releases hung task
- I'm not seeing those warnings. strace shows read() and write() immediately returning EIO. I'll see if I can try a 4.6...
- 12:00 PM Bug #14360: Cephfs kernel client ceph_send_cap_releases hung task
- Matthew Garrett wrote:
> I've pulled the 4.6 ceph code back to 4.5, but any attempt to read or write to a file is no...
- 01:55 PM Bug #13712 (Resolved): Page allocation failure in Kernel 4.2.3.
- Ah, this was a GFP_ATOMIC allocation (which is not allowed to block and therefore more likely to fail). We used to r...
- 12:18 PM Bug #15448 (Resolved): Jewel : KRBD : rbd map is failing, we should log correct error message eit...
- "rbd: report unsupported features to syslog" in 4.6-rc6.
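With that patch, the detail about unsupported features lands in the kernel log when a map fails. A quick way to surface it (image name hypothetical; `dmesg` may require root):

```shell
# Hypothetical image; assumes the rbd CLI and a kernel with the fix.
# On failure, the unsupported-feature message appears in the kernel log.
rbd map pool4/img1 || dmesg | grep rbd | tail -n 5
```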
- 12:17 PM Bug #15490 (Resolved): rbd map vs notify race
- "rbd: fix rbd map vs notify races" in 4.6-rc6.
- 12:16 PM Bug #15447 (Resolved): use-after-free on ceph_auth_none_info / ceph_none_authorizer
- "libceph: make authorizer destruction independent of ceph_auth_client" in 4.6-rc6.
04/28/2016
- 07:44 PM Bug #14360: Cephfs kernel client ceph_send_cap_releases hung task
- I've pulled the 4.6 ceph code back to 4.5, but any attempt to read or write to a file is now giving me EIO. Is there ...
04/24/2016
- 04:12 AM Bug #15552: cephfs kernel client timeout error 110
- I found a solution I can live with.
I set:
@ceph osd crush tunables default@
And this works without generati...
- 03:17 AM Bug #15552: cephfs kernel client timeout error 110
- OK, so my last update led me to trying a shot in the dark.
running:
@ceph osd crush tunables legacy@
Allowed...
- 03:08 AM Bug #15552: cephfs kernel client timeout error 110
- more info, I see this pop up on the mon when running in debug mode:
@2016-04-23 21:03:28.685457 7f2eead54700 1 --...
04/21/2016
- 06:58 AM Bug #15552: cephfs kernel client timeout error 110
- 4.3.0-0.bpo.1-amd64
- 06:44 AM Bug #15552: cephfs kernel client timeout error 110
- Which version of the kernel do you use?
04/20/2016
- 11:03 PM Bug #15552 (Closed): cephfs kernel client timeout error 110
- After upgrading my test environment from infernalis to the latest jewel release candidate last night, I cannot mount ...
04/18/2016
- 07:48 AM Bug #14360 (Resolved): Cephfs kernel client ceph_send_cap_releases hung task
- upstream commit 315f24088048a51eed341c53be66ea477a3c7d16
- 07:47 AM Bug #14736: Kernel cephfs bind-mounts break if parent dir modified elsewhere
- upstream commit 200fd27c8fa2ba8bb4529033967b69a7cbfa2c2e
- 07:42 AM Bug #14736 (Resolved): Kernel cephfs bind-mounts break if parent dir modified elsewhere
04/15/2016
- 07:26 PM Bug #15448 (Fix Under Review): Jewel : KRBD : rbd map is failing, we should log correct error mes...
- 05:28 PM Bug #15490 (Fix Under Review): rbd map vs notify race
- 12:29 PM Bug #15432: kcephfs: umount -f can fail after mds reconnect failure
- Found a bug that may cause forced umount to hang:
https://github.com/ceph/ceph-client/commit/e9344de458cd61efac6cccb98c78...
04/14/2016
- 09:35 PM Bug #15255: kclient: lost filesystem operations (on mds disconnect?)
- Sage, I think we discussed this a day or two ago in standup but I don't think you've looked at it yet. Please comment...
- 09:33 PM Bug #15255: kclient: lost filesystem operations (on mds disconnect?)
- Zheng, does that patch mean that the client no longer drops any dirty state? Or does it keep the dirty page cache and...
- 09:51 AM Bug #15432: kcephfs: umount -f can fail after mds reconnect failure
- should be fixed by https://github.com/ceph/ceph-client/commit/11ff967a17ea6ac638f3c154b737a094f29d30bb
- 09:48 AM Bug #15462: ceph: build_path did not end path lookup where expected, namelen is 146, pos is 0
- "ceph: build_path did not end path lookup where expected" does not indicate there is an issue....
04/13/2016
- 08:01 PM Bug #15490 (Resolved): rbd map vs notify race
- ...
04/11/2016
- 08:17 PM Bug #15462 (Won't Fix): ceph: build_path did not end path lookup where expected, namelen is 146, ...
- ...
- 05:46 PM Bug #15447 (Fix Under Review): use-after-free on ceph_auth_none_info / ceph_none_authorizer
- 04:36 PM Bug #15448: Jewel : KRBD : rbd map is failing, we should log correct error message either in syst...
- Thanks Ilya!
- 03:16 PM Bug #15448 (In Progress): Jewel : KRBD : rbd map is failing, we should log correct error message ...
- It can't pretty-print which features are missing, in general. But we do have to make it nicer.
04/10/2016
- 06:36 PM Bug #15448 (Resolved): Jewel : KRBD : rbd map is failing, we should log correct error message eit...
- Jewel : KRBD : rbd map is failing, we should log correct error message either in system logs or in standard error ou...
- 01:15 PM Support #15302: Kernel panic when using CEPHFS (Hammer) /w kernel vivid-lts on Ubuntu 14.04
- v3.18.[26-30] are also affected. I've asked Sasha to queue up "ceph: fix request timestamp encoding".
- 01:01 PM Bug #15447 (Resolved): use-after-free on ceph_auth_none_info / ceph_none_authorizer
- ...