Activity
From 05/17/2016 to 06/15/2016
06/15/2016
- 02:00 PM Bug #12763: rbd: unmap failed: (16) Device or resource busy
- $ sudo lsof /dev/rbd1
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
multipath 31388 root 10u BL...
- 10:50 AM Bug #12763: rbd: unmap failed: (16) Device or resource busy
- What about lsof right before the unmap?
- 10:26 AM Bug #12763: rbd: unmap failed: (16) Device or resource busy
- 'sudo multipath -ll' doesn't show anything at all
- 10:16 AM Bug #12763: rbd: unmap failed: (16) Device or resource busy
- Well, if multipathd is holding the device, as one of your pastes shows, you won't be able to unmap it - that's expect...
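For reference, a minimal sketch of releasing a multipath claim before unmapping; the device name /dev/rbd1 and the map name mpatha are assumptions:
$ sudo multipath -ll            # list the maps multipathd has built
$ sudo multipath -f mpatha      # flush the map claiming the rbd device
$ sudo rbd unmap /dev/rbd1      # the unmap should now succeed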
- 10:02 AM Bug #12763: rbd: unmap failed: (16) Device or resource busy
- A few more details about my environment:
$ ceph -v
ceph version 0.94.7 (d56bdf93ced6b80b07397d57e3fa68fe68304432)
...
- 07:56 AM Bug #16242: Hit kernel bug in ceph_set_page_dirty
- Could you try setting 'filestore_journal_writeahead=true' for all OSDs and check whether this bug still happens?
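A minimal sketch of applying that setting, assuming ceph.conf is the source of OSD options (the restart invocation depends on the init system):
# ceph.conf on each OSD host:
[osd]
    filestore_journal_writeahead = true
$ sudo /etc/init.d/ceph restart osd    # or the systemd/upstart equivalent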
06/14/2016
- 02:13 PM Bug #16294 (Closed): rbd: map failed
- Hi,
Will someone please assist? I am trying to map an image and this happens:
cluster-admin@nodeB:~/.ssh/ceph-clus...
06/13/2016
- 11:11 AM Bug #12763: rbd: unmap failed: (16) Device or resource busy
- Here is output from console:
* http://paste.openstack.org/show/510493/
* http://paste.openstack.org/show/510494/
...
- 10:48 AM Bug #12763: rbd: unmap failed: (16) Device or resource busy
- Can you paste your reproducer here, including how you set up multipath and map the rbd image?
- 10:43 AM Bug #12763: rbd: unmap failed: (16) Device or resource busy
- Here is related bug in Ubuntu: https://bugs.launchpad.net/ubuntu/+source/multipath-tools/+bug/1581764
I can reprod...
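A minimal reproducer sketch, assuming multipathd is running with a default configuration and claims the rbd device; pool and image names are placeholders:
$ sudo service multipathd start
$ sudo rbd map rbd/test-image          # appears as e.g. /dev/rbd0
$ sudo rbd unmap /dev/rbd0
rbd: unmap failed: (16) Device or resource busy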
06/12/2016
- 06:23 AM Bug #16242: Hit kernel bug in ceph_set_page_dirty
- Retrying to attach the log file.
- 06:16 AM Bug #16242 (Resolved): Hit kernel bug in ceph_set_page_dirty
- In a newly set up ceph system, while doing some basic performance testing, we hit a kernel bug with the following trace:
...
06/06/2016
- 01:28 PM Bug #16030 (Fix Under Review): concurrent.sh coredumps
- https://github.com/ceph/ceph/pull/9517
- 09:59 AM Bug #15891: [rbd] i/o to rbd block device stopped constantly
- The high-level summary is in #9779. In particular, request_mutex was killed in favor of per-OSD mutexes.
We ...
- 06:14 AM Bug #15891: [rbd] i/o to rbd block device stopped constantly
- In 4.7 the same issue should not happen, but what about 4.6 or older?
If you could tell me why you changed to libceph, it would be ...
06/05/2016
- 09:39 PM Bug #14737 (In Progress): libkrbd vs udev event ordering
- 08:19 PM Bug #13328 (Resolved): fix notify completion race
- Done in 4.7 by way of #9779.
- 08:17 PM Cleanup #1768 (Closed): osd_client: gratuitous ceph_monc_request_next_osdmap calls
- OSD client has been rewritten in 4.7. We now rely on handle_timeout(), which requests a new map if there are homeles...
- 07:55 PM Bug #15694 (Need More Info): process enter D status when running on rbd
- 07:55 PM Bug #3887 (Closed): kernel client: small object memory leak
- This is too old to be relevant.
- 07:44 PM Feature #10585 (Resolved): use new, more reliable version of watch/notify
- Done in 4.7 by way of #9779.
- 07:42 PM Feature #9779 (Resolved): libceph: sync up with objecter
- Done in 4.7. Highlights:
- per-session request trees (vs a global per-client tree)
- per-session locking (vs a g...
- 07:27 PM Bug #12254: krbd image watch
- Fixed with #9779 in 4.7.
- 06:24 PM Bug #11960: Kernel panic when deleting a pool, which contains a mapped RBD
- Fixed with #9779 in 4.7.
- 05:31 PM Feature #8842 (Resolved): CephFS kernel module for RHEL7.0 GA
- 05:26 PM Bug #13081 (Resolved): data on rbd image get corrupted when pool quota is smaller than the size o...
- Done in 4.7. The umount in step 8 blocks until the quota is increased - there is nothing else we (the block device) ...
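A minimal sketch of the scenario, with placeholder pool, image, and size values:
$ ceph osd pool set-quota test max_bytes $((256 * 1024 * 1024))   # quota smaller than the image
$ rbd create test/img --size 1024
$ sudo rbd map test/img
$ sudo mkfs.xfs /dev/rbd0 && sudo mount /dev/rbd0 /mnt
$ sudo dd if=/dev/zero of=/mnt/fill bs=1M count=512   # stalls once the quota is hit
$ sudo umount /mnt                                    # blocks until the quota is raised
$ ceph osd pool set-quota test max_bytes 0            # lift the quota; the umount completes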
- 04:23 PM Feature #14201 (Resolved): cope with a pool being removed from under a mapped image
- Done in 4.7. In-flight requests are -EIOed, the filesystem remounts itself read-only. The filesystem can be unmount...
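A minimal sketch of the behavior, with placeholder names and assuming the image is mapped at /dev/rbd0 and mounted at /mnt:
$ ceph osd pool delete test test --yes-i-really-really-mean-it
$ dmesg | tail            # in-flight I/O fails with -EIO; the filesystem remounts read-only
$ sudo umount /mnt        # still succeeds
$ sudo rbd unmap /dev/rbd0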
- 04:13 PM Bug #15891: [rbd] i/o to rbd block device stopped constantly
- OSD client has been rewritten in 4.7, but I'd be happy to look into this if the necessary info is gathered.
- 04:06 PM Bug #14022: map_sem for read + request_mutex are held indefinitely
- OSD client has been rewritten in 4.7.
- 04:05 PM Bug #15919 (Resolved): BUG_ON in ceph_readdir() on dbench, fsstress
- 04:03 PM Bug #10889 (Need More Info): krbd: sent out of order write
- OSD client has been rewritten in 4.7.
- 04:02 PM Bug #12691: ffsb osd thrash test - osd/ReplicatedPG.cc: 2348: FAILED assert(0 == "out of order op")
- OSD client has been rewritten in 4.7.
- 04:00 PM Bug #14901 (Need More Info): misdirected requests on 4.2 during rebalancing
- OSD client has been rewritten in 4.7.
05/27/2016
- 04:17 AM Bug #15891: [rbd] i/o to rbd block device stopped constantly
- Just to track:
10k writes/hour => no hang.
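For reproduction tracking, a hedged fio invocation approximating the reported workload (80k random writes at a throttled rate; the path and numbers are assumptions):
$ fio --name=rbd-xfs --filename=/mnt/rbd/testfile --size=1g \
      --rw=randwrite --bs=80k --ioengine=libaio --direct=1 \
      --rate_iops=3 --time_based --runtime=3600   # ~10k writes/hour is roughly 3 IOPS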
05/25/2016
- 07:23 PM Bug #16030 (Resolved): concurrent.sh coredumps
- http://pulpito.ceph.com/dis-2016-05-24_13:05:19-krbd-master-wip-osdc-basic-mira/211701/
http://pulpito.ceph.com/dis-...
- 10:16 AM Feature #14201 (In Progress): cope with a pool being removed from under a mapped image
05/23/2016
- 07:42 AM Bug #14360: Cephfs kernel client ceph_send_cap_releases hung task
- I tried CoreOS 1053.2.0; no problem was found.
05/20/2016
- 02:12 PM Bug #14360: Cephfs kernel client ceph_send_cap_releases hung task
- Matthew Garrett wrote:
> We're seeing exactly the same issue on 4.6 - all writes are giving EIO, but no errors are b...
- 05:45 AM Bug #15946: xfs layering on rbd block device was corrupted
- [root@cluster-stack01 ~]# !mount
mount /dev/rbd0 /mnt/rbd/testing01
[root@cluster-stack01 ~]# ls /mnt/rbd/testing01...
- 05:27 AM Bug #15946: xfs layering on rbd block device was corrupted
- *# Call Trace*
May 19 04:27:29 cluster-stack01 kernel: Key type ceph registered
May 19 04:27:29 cluster-stack01 ker... - 05:26 AM Bug #15946 (Closed): xfs layering on rbd block device was corrupted
- Just for tracking purposes.
A forcefully unmounted xfs file system was never recovered.
During FIO, when I inter...
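For recovery attempts after a forced unmount, a minimal sketch, assuming the image is still mapped at /dev/rbd0:
$ sudo umount -f /mnt/rbd/testing01
$ sudo xfs_repair /dev/rbd0       # refuses to run if the log is dirty
$ sudo xfs_repair -L /dev/rbd0    # last resort: zeroes the log and may lose recent writes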
05/19/2016
- 10:13 PM Bug #14360: Cephfs kernel client ceph_send_cap_releases hung task
- If you've got an issue with vanilla 4.6 and a proper userspace release, please open a new bug with the full details; ...
- 06:13 AM Bug #15919: BUG_ON in ceph_readdir() on dbench, fsstress
- updated commit "ceph: using hash value to compose dentry offset"
05/18/2016
- 09:04 PM Bug #15919: BUG_ON in ceph_readdir() on dbench, fsstress
- Here is a stack trace from an undamaged kdb:...
- 10:46 AM Bug #15919 (Resolved): BUG_ON in ceph_readdir() on dbench, fsstress
- Yuri reported three smithis stuck in kdb. kdb was misconfigured so I couldn't get anything reliable out of it (not e...
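For the record, a minimal sketch of wiring kdb to a serial console so the next occurrence is debuggable; the console device is an assumption:
$ echo ttyS0 | sudo tee /sys/module/kgdboc/parameters/kgdboc   # attach kdb/kgdb to the console
$ echo g | sudo tee /proc/sysrq-trigger                        # break into the debugger on demand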
- 10:14 AM Bug #15891 (Need More Info): [rbd] i/o to rbd block device stopped constantly
- 10:13 AM Bug #15891: [rbd] i/o to rbd block device stopped constantly
- If you still have it in that state or if it reproduces again, could you please do as described in http://tracker.ceph...
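In the meantime, a generic way to capture the blocked state (not necessarily everything the linked page asks for):
$ echo w | sudo tee /proc/sysrq-trigger   # dump blocked (D-state) tasks to the kernel log
$ dmesg | tail -n 100
$ sudo cat /sys/kernel/debug/ceph/*/osdc  # in-flight OSD requests per client instance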
- 12:25 AM Bug #15891: [rbd] i/o to rbd block device stopped constantly
- Sorry, please ignore #4 and #5. They are the same as the trace log in the 1st report.
And that trace log is from the 2nd hang.
Here ...
- 12:10 AM Bug #15891: [rbd] i/o to rbd block device stopped constantly
- Here is the 2nd hang.
May 15 20:31:28 d02 kernel: INFO: task kswapd0:76 blocked for more than 120 seconds.
May 15 ...
- 12:08 AM Bug #15891: [rbd] i/o to rbd block device stopped constantly
- Here is the 2nd hang.
May 15 20:31:28 dvct02 kernel: INFO: task kswapd0:76 blocked for more than 120 seconds.
May ...
05/17/2016
- 11:06 PM Bug #15891: [rbd] i/o to rbd block device stopped constantly
- There are 2 clients mapping the same rbd block device on top of the rbd pool.
- 10:22 PM Bug #15891: [rbd] i/o to rbd block device stopped constantly
- There were 50k random writes per hour to the rbd device, formatted with xfs, on the client.
Each write was 80k bytes.
*kern...
- 10:13 PM Bug #15891: [rbd] i/o to rbd block device stopped constantly
- *ceph.conf*
[snip]
debug_rbd_replay = 0/5
rbd_op_threads = 1
rbd_op_thread_timeout = 60
rbd_non_blocking_aio = t...
- 01:52 AM Bug #14360: Cephfs kernel client ceph_send_cap_releases hung task
- We're seeing exactly the same issue on 4.6 - all writes are giving EIO, but no errors are being printed.