Activity
From 04/28/2016 to 05/27/2016
05/27/2016
- 04:17 AM Bug #15891: [rbd] i/o to rbd block device stopped constantly
- Just to track:
10k writes/hour => no hangs.
05/25/2016
- 07:23 PM Bug #16030 (Resolved): concurrent.sh coredumps
- http://pulpito.ceph.com/dis-2016-05-24_13:05:19-krbd-master-wip-osdc-basic-mira/211701/
http://pulpito.ceph.com/dis-...
- 10:16 AM Feature #14201 (In Progress): cope with a pool being removed from under a mapped image
05/23/2016
- 07:42 AM Bug #14360: Cephfs kernel client ceph_send_cap_releases hung task
- I tried CoreOS 1053.2.0; no problem found.
05/20/2016
- 02:12 PM Bug #14360: Cephfs kernel client ceph_send_cap_releases hung task
- Matthew Garrett wrote:
> We're seeing exactly the same issue on 4.6 - all writes are giving EIO, but no errors are b...
- 05:45 AM Bug #15946: xfs layering on rbd block device was corrupted
- [root@cluster-stack01 ~]# !mount
mount /dev/rbd0 /mnt/rbd/testing01
[root@cluster-stack01 ~]# ls /mnt/rbd/testing01...
- 05:27 AM Bug #15946: xfs layering on rbd block device was corrupted
- *# Call Trace*
May 19 04:27:29 cluster-stack01 kernel: Key type ceph registered
May 19 04:27:29 cluster-stack01 ker...
- 05:26 AM Bug #15946 (Closed): xfs layering on rbd block device was corrupted
- Just for tracking purposes.
A forcefully unmounted xfs filesystem was never recovered.
During FIO, when I inter...
05/19/2016
- 10:13 PM Bug #14360: Cephfs kernel client ceph_send_cap_releases hung task
- If you've got an issue with vanilla 4.6 and a proper userspace release, please open a new bug with the full details; ...
- 06:13 AM Bug #15919: BUG_ON in ceph_readdir() on dbench, fsstress
- updated commit "ceph: using hash value to compose dentry offset"
05/18/2016
- 09:04 PM Bug #15919: BUG_ON in ceph_readdir() on dbench, fsstress
- Here is a stack trace from an undamaged kdb:...
- 10:46 AM Bug #15919 (Resolved): BUG_ON in ceph_readdir() on dbench, fsstress
- Yuri reported three smithis stuck in kdb. kdb was misconfigured so I couldn't get anything reliable out of it (not e...
- 10:14 AM Bug #15891 (Need More Info): [rbd] i/o to rbd block device stopped constantly
- 10:13 AM Bug #15891: [rbd] i/o to rbd block device stopped constantly
- If you still have it in that state or if it reproduces again, could you please do as described in http://tracker.ceph...
- 12:25 AM Bug #15891: [rbd] i/o to rbd block device stopped constantly
- Sorry, please ignore #4 and #5. They are the same as the trace log in the 1st report.
And that trace log is the 2nd hang.
Here ...
- 12:10 AM Bug #15891: [rbd] i/o to rbd block device stopped constantly
- Here is the 2nd hang.
May 15 20:31:28 d02 kernel: INFO: task kswapd0:76 blocked for more than 120 seconds.
May 15 ...
- 12:08 AM Bug #15891: [rbd] i/o to rbd block device stopped constantly
- Here is the 2nd hang.
May 15 20:31:28 dvct02 kernel: INFO: task kswapd0:76 blocked for more than 120 seconds.
May ...
05/17/2016
- 11:06 PM Bug #15891: [rbd] i/o to rbd block device stopped constantly
- There are 2 clients mapping the same rbd block device on top of the rbd pool.
- 10:22 PM Bug #15891: [rbd] i/o to rbd block device stopped constantly
- There were 50k random writes per hour to the rbd, formatted with xfs, on the client.
Each write was 80k bytes.
*kern...
- 10:13 PM Bug #15891: [rbd] i/o to rbd block device stopped constantly
- *ceph.conf*
[snip]
debug_rbd_replay = 0/5
rbd_op_threads = 1
rbd_op_thread_timeout = 60
rbd_non_blocking_aio = t...
- 01:52 AM Bug #14360: Cephfs kernel client ceph_send_cap_releases hung task
- We're seeing exactly the same issue on 4.6 - all writes are giving EIO, but no errors are being printed.
05/16/2016
- 01:27 PM Bug #15887: client reboot stuck if the ceph node is not reachable or shutdown
- Also http://www.spinics.net/lists/ceph-devel/msg27376.html, http://tracker.ceph.com/issues/13189.
In the cephfs ca...
- 09:53 AM Bug #15887: client reboot stuck if the ceph node is not reachable or shutdown
- See also: http://tracker.ceph.com/issues/9477
- 07:23 AM Bug #15891 (Resolved): [rbd] i/o to rbd block device stopped constantly
- I/O to the rbd block device stopped working with the following message.
The RBD is formatted with xfs and used for *dovecot*!
...
05/13/2016
- 08:26 PM Bug #15887 (Won't Fix): client reboot stuck if the ceph node is not reachable or shutdown
- If the cluster is unavailable, we can't do a clean shutdown. I guess we could try and distinguish between dirty reque...
- 07:56 PM Bug #15887 (Won't Fix): client reboot stuck if the ceph node is not reachable or shutdown
- 1) mount cephfs on client,
2) shutdown osd+mon node or make it not reachable
3) while client is accessing the mou...
- 12:39 PM Bug #15845: Kernel crash after unmounting CephFS mountpoint
- BUG at fs/fscache/cookie.c:524 is caused by inode reference leak, The leak is likely fixed by https://github.com/ceph...
- 08:44 AM Bug #15845: Kernel crash after unmounting CephFS mountpoint
- This is the current situation after the crash two days ago and one backup run (ceph daemon mds.X session ls entry for...
05/12/2016
- 07:27 PM Bug #15845: Kernel crash after unmounting CephFS mountpoint
- Given those first two lines......
05/11/2016
- 12:00 PM Bug #15845 (Resolved): Kernel crash after unmounting CephFS mountpoint
- We had a server crash several hours after unmounting a CephFS mountpoint.
OS: Ubuntu 14.04 with Xenial LTS backpor...
05/10/2016
- 12:13 PM Bug #15780: Applications using kernel based cephfs and mmap fail with SIGBUS if debugger is attac...
- Fixed by https://github.com/ceph/ceph-client/commit/11eeba2b7624a8e707c14ef1e9bee214d6fe95d4
05/09/2016
- 01:38 PM Bug #15780 (Resolved): Applications using kernel based cephfs and mmap fail with SIGBUS if debugg...
- OS/Kernel: Ubuntu Trusty with Xenial LTS backport kernel 4.4.0-21-generic
Ceph: 0.94.6
The PTRACE_ATTACH syscall ... - 10:00 AM Bug #15683 (Closed): RBD image features settings are not copied onto the clone.
- Not a kernel client issue, duplicates #15388.
- 09:53 AM Bug #15694: process enter D status when running on rbd
- "socket closed (con state OPEN)" are harmless, most likely.
Could you check your OSD logs for any "slow request" w...
- 09:45 AM Bug #13081 (In Progress): data on rbd image get corrupted when pool quota is smaller than the siz...
05/04/2016
- 06:31 PM Bug #15735 (Duplicate): core dump during krbd concurrent.sh
- Not sure if the mon core dumped or if it's something from concurrent.sh
Env:
description: krbd/rbd-nomount/{conf.yaml c...
- 06:11 PM Bug #15734 (Resolved): xfs test failure during krbd run
- Two-client test; one test failed on one client.
2016-05-03T22:37:18.259 INFO:tasks.rbd.client.0.vpm012.stdout:xfs/014...
- 08:42 AM Bug #12160 (Resolved): kernel BUG at fs/ceph/caps.c:2307, ceph_put_wrbuffer_cap_refs
05/03/2016
- 02:33 PM Bug #15255: kclient: lost filesystem operations (on mds disconnect?)
- I don't think that's acceptable unless we have a way of verifying nobody else has modified (or maybe even just touche...
- 03:01 AM Bug #15255: kclient: lost filesystem operations (on mds disconnect?)
- Greg Farnum wrote:
> Zheng, does that patch mean that the client no longer drops any dirty state? Or does it keep th...
- 05:37 AM Bug #15694 (Closed): process enter D status when running on rbd
- Hi,
Occasionally, a process running with data located on rbd (formatted with ext4) may enter D status (stuck).
I c...
- 04:08 AM Bug #12159 (Need More Info): setfilelock requests stuck
- 03:59 AM Bug #14487 (Resolved): ctime may go backward in cephfs kernel client
- 03:58 AM Bug #14232 (Resolved): Kernel NULL pointer dereference in __dcache_readdir
- 03:58 AM Bug #14485 (Resolved): cephfs kernel client does not flush ctime
- 03:57 AM Bug #15552 (Closed): cephfs kernel client timeout error 110
05/02/2016
- 04:46 PM Bug #15683: RBD image features settings are not copied onto the clone.
- tnx
- 04:41 PM Bug #15683: RBD image features settings are not copied onto the clone.
- See ticket #15388
- 09:09 AM Bug #15683: RBD image features settings are not copied onto the clone.
- Attaching the output from all commands:-
root@host1:~$ ceph osd pool create pool4 1024
pool 'pool4' created
ro...
- 09:06 AM Bug #15683 (Closed): RBD image features settings are not copied onto the clone.
- Steps to reproduce:-
1) Create rbd image
2) Disable some features like layering, exclusive-lock, object-map etc....
04/29/2016
- 08:44 PM Bug #14360: Cephfs kernel client ceph_send_cap_releases hung task
- I'm not seeing those warnings. strace shows read() and write() immediately returning EIO. I'll see if I can try a 4.6...
- 12:00 PM Bug #14360: Cephfs kernel client ceph_send_cap_releases hung task
- Matthew Garrett wrote:
> I've pulled the 4.6 ceph code back to 4.5, but any attempt to read or write to a file is no...
- 01:55 PM Bug #13712 (Resolved): Page allocation failure in Kernel 4.2.3.
- Ah, this was a GFP_ATOMIC allocation (which is not allowed to block and therefore more likely to fail). We used to r...
- 12:18 PM Bug #15448 (Resolved): Jewel : KRBD : rbd map is failing, we should log correct error message eit...
- "rbd: report unsupported features to syslog" in 4.6-rc6.
- 12:17 PM Bug #15490 (Resolved): rbd map vs notify race
- "rbd: fix rbd map vs notify races" in 4.6-rc6.
- 12:16 PM Bug #15447 (Resolved): use-after-free on ceph_auth_none_info / ceph_none_authorizer
- "libceph: make authorizer destruction independent of ceph_auth_client" in 4.6-rc6.
04/28/2016
- 07:44 PM Bug #14360: Cephfs kernel client ceph_send_cap_releases hung task
- I've pulled the 4.6 ceph code back to 4.5, but any attempt to read or write to a file is now giving me EIO. Is there ...