Activity
From 06/14/2016 to 07/13/2016
07/13/2016
- 12:52 AM Cleanup #2085: kclient: improve mtime update in page_mkwrite
- Still a problem?
07/11/2016
- 07:59 AM Bug #16242: Hit kernel bug in ceph_set_page_dirty
- 07:58 AM Bug #15845 (Resolved): Kernel crash after unmounting CephFS mountpoint
- 07:58 AM Bug #15780 (Resolved): Applications using kernel based cephfs and mmap fail with SIGBUS if debugg...
- 07:56 AM Support #15302 (Resolved): Kernel panic when using CEPHFS (Hammer) /w kernel vivid-lts on Ubuntu ...
07/08/2016
- 12:23 PM Bug #15735 (Duplicate): core dump during krbd concurrent.sh
- 12:18 PM Bug #15694: process enters D status when running on rbd
- If you can't upgrade to one of the newer kernels and can build your own, try cherry-picking commits
5a60e87603...
- 12:11 PM Bug #16294 (Closed): rbd: map failed
- Kernels starting with 4.6 output an error to dmesg.
Newer rbd tool also prints a helpful message:...
- 11:46 AM Bug #16630 (Closed): rbd map vs rbd_watch_errcb deadlock
- ...
07/07/2016
- 11:53 AM Feature #15107: kcephfs: be less pessimistic about rcu walk in d_revalidate
- First patches are now in the testing branch. I think it's probably also possible to queue the lease renewal to a work...
07/06/2016
- 05:39 PM Feature #16480: kernel inline data write support
- Background info:
http://tracker.ceph.com/projects/ceph/wiki/Inline_data_support_for_Ceph
http://tracker.ceph.com/...
- 11:50 AM Bug #15255: kclient: lost filesystem operations (on mds disconnect?)
- Sage Weil wrote:
> The most correct thing is probably to throw out the dirty data (I think that's what we do now?)...
06/27/2016
- 11:15 AM Bug #16294: rbd: map failed
- Hi,
I disabled the "exclusive-lock, object-map, fast-diff,
deep-flatten" features on the image.
I can map imag...
- 11:02 AM Feature #15107: kcephfs: be less pessimistic about rcu walk in d_revalidate
- Yeah, looks quite doable to handle some of these cases in RCU-walk mode. The main limitation is that you can't sleep ...
- 07:03 AM Feature #16480 (Rejected): kernel inline data write support
- The current kernel client only supports reading inline data; it never writes inline data.
06/22/2016
- 09:16 PM Bug #15255: kclient: lost filesystem operations (on mds disconnect?)
- Yeah, I think we should be throwing it out as well (I don't know if that actually happens or not). If we wanted to ge...
- 09:08 PM Bug #15255: kclient: lost filesystem operations (on mds disconnect?)
- This type of race has never been addressed:
- client A has dirty writeback data
- client A is disconnected
- mds...
- 08:06 AM Bug #16294: rbd: map failed
- All I see in dmesg is :
cluster-admin@nodeB:~/.ssh/ceph-cluster$ dmesg |grep ceph
[144467.750580] init: ceph-osd ...
06/21/2016
- 07:22 PM Bug #16294 (Need More Info): rbd: map failed
- Did you check the output from dmesg?
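As context for that suggestion, here is a minimal sketch of what checking dmesg after a failed map looks like. The image name and feature bits below are hypothetical, and the exact log format is an assumption based on the typical 4.6+ kernel message:

```shell
# With a live cluster you would run the map and then inspect the kernel log:
#   sudo rbd map mypool/myimage
#   dmesg | tail -n 20
# Simulated dmesg line showing the kind of error a 4.6+ kernel reports
# when an image uses features the kernel client does not support:
echo "[144467.750580] rbd: image myimage: image uses unsupported features: 0x38" |
  grep -o 'unsupported features: 0x[0-9a-f]*'
```

The feature bitmask in the message tells you which image features (exclusive-lock, object-map, etc.) the kernel cannot handle.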
06/20/2016
- 06:30 AM Bug #15694: process enters D status when running on rbd
- I am very sorry for replying late. Thank you for paying attention to my issue.
Ilya Dryomov wrote:
> "socket clos...
06/16/2016
- 01:24 PM Bug #12763: rbd: unmap failed: (16) Device or resource busy
- Depends on what you mean by "support" - rbd CLI tool won't attempt to mess with your multipath settings ;)
- 01:17 PM Bug #12763: rbd: unmap failed: (16) Device or resource busy
- Thanks, Ilya,
This workaround works for me. So, there is no support for multipath from the rbd side, right?
- 11:44 AM Bug #12763: rbd: unmap failed: (16) Device or resource busy
- This is likely a bug in either multipathd configuration or udev rules. Either way, not an rbd kernel client issue - ...
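A hedged sketch of one common workaround for this situation: excluding rbd devices from multipathd via a blacklist entry in /etc/multipath.conf (the devnode pattern below is an assumption; adjust it to your device naming, and reload multipathd afterwards):

```
blacklist {
    devnode "^rbd[0-9]*"
}
```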
- 08:18 AM Bug #16242: Hit kernel bug in ceph_set_page_dirty
- Zheng Yan wrote:
> no need to do that for ceph-fuse.
OK, we will switch all clients to ceph-fuse, thank you so much.
- 04:34 AM Bug #16242: Hit kernel bug in ceph_set_page_dirty
- no need to do that for ceph-fuse.
- 02:46 AM Bug #16242: Hit kernel bug in ceph_set_page_dirty
- Zheng Yan wrote:
> When using parallel journal, OSD sends two replies for sync write, one with ack flag, another wit...
- 02:30 AM Bug #16242: Hit kernel bug in ceph_set_page_dirty
- When using parallel journal, OSD sends two replies for sync write, one with ack flag, another with ondisk flag. When ...
- 02:13 AM Bug #16242: Hit kernel bug in ceph_set_page_dirty
- Zheng Yan wrote:
> could you try setting 'filestore_journal_writeahead=true' for all OSDs and check if this bug stil...
06/15/2016
- 02:00 PM Bug #12763: rbd: unmap failed: (16) Device or resource busy
- $ sudo lsof /dev/rbd1
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
multipath 31388 root   10u  BL...
- 10:50 AM Bug #12763: rbd: unmap failed: (16) Device or resource busy
- What about lsof right before the unmap?
- 10:26 AM Bug #12763: rbd: unmap failed: (16) Device or resource busy
- 'sudo multipath -ll' doesn't show anything at all
- 10:16 AM Bug #12763: rbd: unmap failed: (16) Device or resource busy
- Well, if multipathd is holding the device, as one of your pastes shows, you won't be able to unmap it - that's expect...
- 10:02 AM Bug #12763: rbd: unmap failed: (16) Device or resource busy
- A bit more details about my environment:
$ ceph -v
ceph version 0.94.7 (d56bdf93ced6b80b07397d57e3fa68fe68304432)
...
- 07:56 AM Bug #16242: Hit kernel bug in ceph_set_page_dirty
- could you try setting 'filestore_journal_writeahead=true' for all OSDs and check if this bug still happens
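For reference, a minimal ceph.conf fragment for the suggested setting (placing it under [osd] is an assumption; the OSDs need to be restarted for it to take effect):

```
[osd]
filestore journal writeahead = true
```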
06/14/2016
- 02:13 PM Bug #16294 (Closed): rbd: map failed
- Hi,
Will someone please assist, I am trying to map image and this happens:
cluster-admin@nodeB:~/.ssh/ceph-clus...