Activity
From 06/14/2017 to 07/13/2017
07/13/2017
- 09:54 AM Bug #20571 (Pending Backport): rbd-mirror: cluster watcher should ignore -EPERM errors against re...
- 07:39 AM Bug #20580: rbd map should warn when creating duplicate devices for the same image
- I think the same image should not be mapped more than once. I modified the kernel code (drivers/block/rbd.c), then rbd...
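A minimal sketch of the duplicate mapping in question (image name rbd/test is an assumption; device numbers are illustrative):
   # rbd map rbd/test
   /dev/rbd0
   # rbd map rbd/test
   /dev/rbd1    <- a second map of the same image silently creates another device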
- 01:30 AM Backport #20515: jewel: IO work queue does not process failed lock request
- @Nathan: you have a very keen nose -- you are correct it will not be trivial.
07/12/2017
- 06:50 PM Backport #20517 (In Progress): kraken: [rbd CLI] map with cephx disabled results in error message
- 06:48 PM Backport #20518 (In Progress): jewel: [rbd CLI] map with cephx disabled results in error message
- 06:47 PM Backport #20515 (Need More Info): jewel: IO work queue does not process failed lock request
- @Jason - I smell non-triviality here
- 06:45 PM Backport #20267 (In Progress): jewel: [api] is_exclusive_lock_owner shouldn't return -EBUSY
- 06:44 PM Backport #20265 (In Progress): jewel: [cli] ensure positional arguments exist before casting
- 03:38 PM Bug #20484 (Fix Under Review): snap can be removed while it is being used by rbd-nbd device
- 01:00 PM Feature #15321 (Resolved): Support asynchronous v2 image creation/cloning
- 01:00 PM Backport #17008 (Closed): jewel: Support asynchronous v2 image creation/cloning
- @Nathan: I think we should just close this backport -- it will be a huge change to backport to Jewel.
- 10:24 AM Backport #17008: jewel: Support asynchronous v2 image creation/cloning
- Waiting for https://github.com/ceph/ceph/pull/10896 to be merged - still open.
- 10:21 AM Backport #18137 (In Progress): jewel: rbd-mirror: image sync should send NOCACHE advise flag
- 10:14 AM Backport #18137 (New): jewel: rbd-mirror: image sync should send NOCACHE advise flag
- https://github.com/ceph/ceph/pull/12043 was closed (sparse reads will not be backported for now)
- 10:12 AM Backport #18500: jewel: rbd-mirror: potential race mirroring cloned image
- No change - backport is still non-trivial
- 10:09 AM Backport #18704: jewel: Prevent librbd from blacklisting the in-use librados client
- @Jason: https://github.com/ceph/ceph/pull/12890 has been merged, but the backport is still non-trivial.
- 10:04 AM Backport #19957: jewel: rbd: Lock release requests not honored after watch is re-acquired
- Non-trivial backport; needs an RBD developer.
07/11/2017
- 08:41 PM Bug #19057 (Won't Fix): krbd suite does not run on hammer (rbd task fails with "No route to host")
- I think we can ignore hammer krbd now?
- 05:45 PM Bug #20571 (Fix Under Review): rbd-mirror: cluster watcher should ignore -EPERM errors against re...
- *PR*: https://github.com/ceph/ceph/pull/16264
- 05:21 PM Bug #20571 (In Progress): rbd-mirror: cluster watcher should ignore -EPERM errors against reading...
- 02:58 PM Bug #20571 (Resolved): rbd-mirror: cluster watcher should ignore -EPERM errors against reading 'r...
- ...
- 03:56 PM Bug #20580 (Resolved): rbd map should warn when creating duplicate devices for the same image
- ...
- 05:34 AM Bug #20567 (Resolved): rbd-mirror does not support ec pools when the primary image uses an ec data pool.
- The primary image uses an ec data pool, but when rbd-mirror creates the mirror image, the data pool feature is lost and a...
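A hedged sketch of the setup being described (pool and image names are assumptions; output abridged):
   # rbd create --size 1G --data-pool ecpool mirror_pool/image1
   # rbd info mirror_pool/image1 | grep features
           features: layering, exclusive-lock, journaling, data-pool
   # rbd --cluster site-b info mirror_pool/image1 | grep features
           features: layering, exclusive-lock, journaling    <- data-pool reportedly lost on the mirror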
07/08/2017
- 05:59 PM Backport #18324 (Closed): kraken: JournalMetadata flooding with errors when being blacklisted
- Changing status to "Closed" so it doesn't turn up in the "missing target version" saved search.
- 05:58 PM Backport #18322 (Closed): kraken: librbd::ResizeRequest: failed to update image header: (16) Devi...
- Changing status to "Closed" so it doesn't turn up in the "missing target version" saved search.
- 05:58 PM Backport #18319 (Closed): kraken: rbd status: json format has duplicated/overwritten key
- Changing status to "Closed" so it doesn't turn up in the "missing target version" saved search.
- 05:58 PM Backport #18289 (Closed): kraken: objectmap does not show object existence correctly
- Changing status to "Closed" so it doesn't turn up in the "missing target version" saved search.
- 05:58 PM Backport #18279 (Closed): kraken: RBD diff got SIGABRT with "--whole-object" for RBD whose parent...
- Changing status to "Closed" so it doesn't turn up in the "missing target version" saved search.
- 05:57 PM Backport #18275 (Closed): kraken: rbd-nbd: invalid error code for "failed to read nbd request" me...
- Changing status to "Closed" so it doesn't turn up in the "missing target version" saved search.
07/07/2017
- 07:37 AM Backport #20532 (In Progress): jewel: test_librbd_api.sh fails in upgrade test
- https://github.com/ceph/ceph/pull/15602
- 01:29 AM Backport #20351 (Fix Under Review): kraken: test_librbd_api.sh fails in upgrade test
07/06/2017
- 11:23 PM Backport #20351 (In Progress): kraken: test_librbd_api.sh fails in upgrade test
- 09:50 PM Documentation #20437: Convert downstream Ceph iSCSI documentation for upstream
- Closed the previous pull request[1] and created a new pull request[2].
[1] - https://github.com/ceph/ceph/pull/161...
- 02:03 AM Documentation #20437: Convert downstream Ceph iSCSI documentation for upstream
- https://github.com/ceph/ceph/pull/16146
- 07:20 PM Backport #20532 (Resolved): jewel: test_librbd_api.sh fails in upgrade test
- https://github.com/ceph/ceph/pull/15602
- 07:17 PM Bug #20175: test_librbd_api.sh fails in upgrade test
- *master PR*: https://github.com/ceph/ceph/pull/15611
- 06:31 PM Backport #20266 (In Progress): kraken: [api] is_exclusive_lock_owner shouldn't return -EBUSY
- 06:23 PM Backport #20264 (In Progress): kraken: [cli] ensure positional arguments exist before casting
- 05:58 PM Backport #20154 (In Progress): kraken: Potential IO hang if image is flattened while read request...
- 05:39 PM Backport #19336 (In Progress): kraken: rbd: refuse to use an ec pool that doesn't support overwrites
- 03:47 PM Bug #18447 (Resolved): Potential race when removing two-way mirroring image
- 03:47 PM Backport #19807 (Resolved): kraken: [test] remove hard-coded image name from TestLibRBD.Mirror
- 03:47 PM Backport #19227 (Resolved): kraken: rbd: Enabling mirroring for a pool with clones may fail
- 03:46 PM Backport #18555 (Resolved): kraken: rbd: Potential race when removing two-way mirroring image
07/05/2017
- 07:01 PM Bug #20509 (Closed): lrbd iscsi ceph::buffer::end_of_buffer
- 05:49 PM Bug #20509: lrbd iscsi ceph::buffer::end_of_buffer
- Indeed, not the right place. The right place in this case would be https://bugzilla.suse.com/enter_bug.cgi?product=SU...
- 04:33 PM Bug #20509 (Closed): lrbd iscsi ceph::buffer::end_of_buffer
- Hi ceph-team,
I am not sure if I created the ticket in the right place but we have a problem with our ceph lrbd --...
- 05:44 PM Backport #20518 (Resolved): jewel: [rbd CLI] map with cephx disabled results in error message
- https://github.com/ceph/ceph/pull/16297
- 05:44 PM Backport #20517 (Resolved): kraken: [rbd CLI] map with cephx disabled results in error message
- https://github.com/ceph/ceph/pull/16298
- 05:43 PM Backport #20515 (Resolved): jewel: IO work queue does not process failed lock request
- https://github.com/ceph/ceph/pull/17402
- 05:43 PM Backport #20514 (Rejected): kraken: IO work queue does not process failed lock request
- 03:52 PM Feature #19349 (Resolved): rbd-nbd: add signal handler
- 03:52 PM Backport #19621 (Resolved): kraken: rbd-nbd: add signal handler
- 03:51 PM Bug #19588 (Resolved): Issues with C API image metadata retrieval functions
- 03:50 PM Backport #19611 (Resolved): kraken: Issues with C API image metadata retrieval functions
- 03:48 PM Backport #19794 (Resolved): kraken: [test] test_notify.py: assert(not image.is_exclusive_lock_own...
- 03:30 PM Backport #19174 (In Progress): jewel: rbd_clone_copy_on_read ineffective with exclusive-lock
- 03:27 PM Backport #19173 (Resolved): kraken: rbd: rbd_clone_copy_on_read ineffective with exclusive-lock
- 02:26 PM Documentation #20437: Convert downstream Ceph iSCSI documentation for upstream
- Pushed branch wip-doc-20437 to my fork[1] of the Ceph project.
[1] - https://github.com/ritz303/ceph/tree/wip-doc-20437
- 12:09 PM Cleanup #17891 (In Progress): Creation of rbd image with format 1 should be disallowed
- 11:54 AM Bug #20168 (Pending Backport): IO work queue does not process failed lock request
- 11:54 AM Bug #19035 (Pending Backport): [rbd CLI] map with cephx disabled results in error message
- 07:37 AM Backport #19872 (Resolved): kraken: [rbd-mirror] failover and failback of unmodified image result...
- 07:37 AM Backport #19833 (Resolved): kraken: Cannot delete some snapshots after upgrade from jewel to kraken
- 07:31 AM Bug #18653 (Resolved): Improve compatibility between librbd + krbd for the data pool
- 07:31 AM Backport #18771 (Resolved): kraken: rbd: Improve compatibility between librbd + krbd for the data...
07/04/2017
- 08:51 PM Backport #19807 (In Progress): kraken: [test] remove hard-coded image name from TestLibRBD.Mirror
- 09:04 AM Backport #19807 (Need More Info): kraken: [test] remove hard-coded image name from TestLibRBD.Mirror
- Waiting for https://github.com/ceph/ceph/pull/14577 to be merged
- 04:38 PM Backport #19336 (Need More Info): kraken: rbd: refuse to use an ec pool that doesn't support over...
- Segmentation fault in unittest_librbd
- 08:53 AM Backport #19336 (In Progress): kraken: rbd: refuse to use an ec pool that doesn't support overwrites
- 10:04 AM Backport #20016 (Need More Info): kraken: rbd-nbd: kernel reported invalid device size (0, expect...
- Wait until https://github.com/ceph/ceph/pull/14540 is merged
- 10:01 AM Backport #20005 (Need More Info): kraken: Lock release requests not honored after watch is re-acq...
- Non-trivial backport; needs an RBD developer.
- 08:57 AM Backport #19621 (In Progress): kraken: rbd-nbd: add signal handler
- 08:54 AM Backport #19609 (In Progress): kraken: [librados_test_stub] cls_cxx_map_get_XYZ methods don't ret...
- 08:37 AM Bug #20054: librbd memory overhead when used with KVM
- Please ignore the strikethrough in the previous comment...formatting got me again :-)
- 08:35 AM Bug #20054: librbd memory overhead when used with KVM
- @Jason: I did the same test with my RBD image mapped to the hypervisor via rbd-nbd (with ceph caching enabled). When ...
- 08:37 AM Backport #19173 (In Progress): kraken: rbd: rbd_clone_copy_on_read ineffective with exclusive-lock
07/02/2017
- 04:20 AM Bug #20484: snap can be removed while it is being used by rbd-nbd device
- Solution:
https://github.com/ceph/ceph/pull/16057
- 03:43 AM Bug #20484 (Won't Fix): snap can be removed while it is being used by rbd-nbd device
- I mapped a snap to an rbd-nbd device with the command:
# rbd-nbd map rbd/test@snap
but if I remove the snap accidentally, then I ...
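Filling in the reproduction implied above (image and snap names assumed):
   # rbd snap create rbd/test@snap
   # rbd-nbd map rbd/test@snap
   /dev/nbd0
   # rbd snap rm rbd/test@snap    <- succeeds while /dev/nbd0 is still backed by the snap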
06/30/2017
- 12:13 PM Bug #20054: librbd memory overhead when used with KVM
- @Sebastian: just to eliminate the guest OS as a possibility, would it be possible for you to re-run using qemu-tcmu o...
- 06:11 AM Bug #20054: librbd memory overhead when used with KVM
- As I can not reproduce the high memory usage when using fio directly, I created a bug report on the qemu side ([1]). ...
- 01:30 AM Bug #20421 (Resolved): [openstack] cinder backup driver fails due to rbd python API change
06/29/2017
- 06:57 PM Bug #19035 (Fix Under Review): [rbd CLI] map with cephx disabled results in error message
- *PR*: https://github.com/ceph/ceph/pull/16024
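For context, a hypothetical reproduction (the exact message is in the report; image name assumed). With cephx disabled cluster-wide, a plain map reportedly still prints an auth-related error:
   (ceph.conf on all nodes)
   auth_cluster_required = none
   auth_service_required = none
   auth_client_required = none

   # rbd map rbd/test    <- maps, but reportedly emits a spurious error message first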
- 06:16 PM Bug #19442 (Duplicate): rbd export-diff isn't counting AioTruncate op correctly.
- Duplicate of tracker issue #19570 -- only an issue in hammer due to changes introduced in infernalis to support deep-...
- 06:11 PM Bug #20168 (Fix Under Review): IO work queue does not process failed lock request
- *PR*: https://github.com/ceph/ceph/pull/15860
- 06:09 PM Bug #18982 (Duplicate): How to get out of a weird situation after rbd flatten?
- Seems like it's a duplicate of issue #18117
06/28/2017
- 04:07 PM Backport #20152 (In Progress): hammer: Potential IO hang if image is flattened while read request...
- https://github.com/ceph/ceph/pull/15980
- 08:33 AM Bug #20054: librbd memory overhead when used with KVM
- @Jason: I did the test again with an image without exclusive-lock and object map. I also changed iodepth to 16. I got...
06/27/2017
- 09:09 PM Documentation #20437 (In Progress): Convert downstream Ceph iSCSI documentation for upstream
- 09:06 PM Documentation #20437 (Resolved): Convert downstream Ceph iSCSI documentation for upstream
- Convert all the Ceph iSCSI downstream AsciiDoc (adoc) content to reStructuredText (rst) for upstream consumption.
- 03:28 PM Bug #20054: librbd memory overhead when used with KVM
- @Sebastian: Yes, the rbd engine definitely uses the iodepth setting. If running multiple jobs against the same image,...
- 08:58 AM Bug #20054: librbd memory overhead when used with KVM
- Sorry, again the formatting issue. Here the fio file again:...
- 08:56 AM Bug #20054: librbd memory overhead when used with KVM
- I tried to reproduce this issue with 'fio' (no qemu in the loop) over the weekend, but I was not able to get the same...
- 09:39 AM Bug #20426: some generic options can not be passed by rbd-nbd
- Expected to be fixed by this PR: https://github.com/ceph/ceph/pull/14135
- 09:38 AM Bug #20426 (Resolved): some generic options can not be passed by rbd-nbd
- # rbd-nbd --help
Usage: rbd-nbd [options] map <image-or-snap-spec> Map an image to nbd device
unmap ...
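For illustration, a hypothetical invocation of the kind affected (which generic options fail is detailed in the PR; names here are assumptions):
   # rbd-nbd --id backup --keyring /etc/ceph/ceph.client.backup.keyring map rbd/test
   (generic options such as --id/--keyring were reportedly not forwarded to the
   mapping process, so the map could fail even with valid credentials)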
06/26/2017
- 11:16 PM Bug #20421 (Fix Under Review): [openstack] cinder backup driver fails due to rbd python API change
- *PR*: https://github.com/ceph/ceph/pull/15932
- 10:50 PM Bug #20421 (Resolved): [openstack] cinder backup driver fails due to rbd python API change
- One such example is that the Cinder Backup driver directly creates an RBD exception, but under commit ac580718e9 the ...
06/25/2017
06/23/2017
- 03:50 PM Bug #20393: IO hang in libvirt/rbd VMs...
- A 1-second tail latency can occur on an overloaded cluster. Can you reproduce the 60-120 second stalled IO? As it is, these ...
- 03:18 PM Bug #20393: IO hang in libvirt/rbd VMs...
- Info of rbd image:
root@cephproxy01:~# rbd -p vms info c9c5db8e-7502-4acc-b670-af18bdf89886_disk
rbd image 'c9c5db8...
- 02:37 PM Bug #20393: IO hang in libvirt/rbd VMs...
- 4 files detailing several occurrences uploaded via ceph-post-file to 55ee8d8a-4058-48c5-9586-39c600465e9d.
Note hung...
- 01:53 PM Bug #20393 (Closed): IO hang in libvirt/rbd VMs...
- During heavy IO, processes/threads in libvirt-qemu VMs backed by rbd volumes hang, exceeding hung_task_timeout_secs t...
- 12:01 PM Bug #20388: combination of kvm using librbd from kraken and online resize leads to data corruption
- Perfect, thanks.
- 10:13 AM Bug #20388: combination of kvm using librbd from kraken and online resize leads to data corruption
- Hi Jason, thanks for the answer.
Yes I'm very aware this bug report lacks precision. In fact I was in the middle of ...
- 12:07 AM Bug #20388 (Need More Info): combination of kvm using librbd from kraken and online resize leads ...
- We will need a repeatable reproducer to try and fix it -- or debug-level logs from the affected librbd instance.
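A minimal sketch of how such debug-level librbd logs could be gathered (paths and levels are assumptions, not from the report):
   (ceph.conf on the hypervisor, before restarting the guest)
   [client]
       debug rbd = 20
       debug rados = 20
       log file = /var/log/ceph/$name.$pid.log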
06/22/2017
- 09:44 PM Bug #20388: combination of kvm using librbd from kraken and online resize leads to data corruption
- Just to add some confusion, I'm unable to reproduce this issue on a ubuntu-based machine with librbd from kraken.
So...
- 08:38 PM Bug #20388 (Closed): combination of kvm using librbd from kraken and online resize leads to data ...
- Hi everybody. We experienced serious data corruption recently. I've been able to reproduce it and I suspect librbd from ...
- 11:33 AM Bug #18844 (Need More Info): import-diff failed: (33) Numerical argument out of domain - if image...
- There have been several related fixes in v10.2.6 [1]; one of them fixed the crashes you observed [2].
So I believe ...
06/21/2017
- 11:43 PM Bug #20054 (Need More Info): librbd memory overhead when used with KVM
- If anyone can provide an example job reproducing this with fio utilizing the direct rbd engine (i.e. take QEMU out-of...
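A minimal fio job of the kind requested, using the direct rbd engine (pool/image name, iodepth, and runtime are assumptions):
   # cat > librbd-overhead.fio <<'EOF'
   [global]
   ioengine=rbd
   clientname=admin
   pool=rbd
   rbdname=fio-test
   rw=randwrite
   bs=4k
   iodepth=16
   time_based=1
   runtime=600
   [writer]
   EOF
   # fio librbd-overhead.fio    (watch fio's RSS to estimate librbd memory overhead)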
- 11:30 PM Bug #20054: librbd memory overhead when used with KVM
- I am seeing the same issue with my Jewel cluster. Plenty of VMs with over 100% memory overhead.
Can we get the pri...
06/20/2017
- 08:51 PM Bug #12018 (Resolved): rbd and pool quota do not go well together
- 08:50 PM Backport #14824 (Rejected): hammer: rbd and pool quota do not go well together
- 08:50 PM Backport #14824 (New): hammer: rbd and pool quota do not go well together
- Attempted backport https://github.com/ceph/ceph/pull/10871 was closed
06/19/2017
- 10:37 PM Bug #20333 (Rejected): RBD bench in EC pool w/ overwrites overwhelms OSDs
- Sorry, I heard today from Josh that this report involved a vstart cluster and wasn't unique to EC pools in any case.
- 11:58 AM Bug #20333 (Need More Info): RBD bench in EC pool w/ overwrites overwhelms OSDs
- I'm not really sure what RBD can do in this situation. That test was only 16 concurrent IOs in-flight, so when you ha...
- 08:52 PM Backport #20351 (Resolved): kraken: test_librbd_api.sh fails in upgrade test
- https://github.com/ceph/ceph/pull/16195
- 08:06 PM Backport #19957: jewel: rbd: Lock release requests not honored after watch is re-acquired
- @Jason - is this something you want to tackle?
06/18/2017
- 08:23 AM Bug #20333: RBD bench in EC pool w/ overwrites overwhelms OSDs
- Hopefully the RBD client can do something to be a little friendlier? Tracking OSD throttling improvements in the orig...
- 08:22 AM Bug #20333 (Rejected): RBD bench in EC pool w/ overwrites overwhelms OSDs
- When running "rbd bench-write" against an RBD image stored in an EC pool, some OSD threads start to time out and eve...
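A hedged reproduction sketch (pool names, PG counts, and sizes are assumptions; overwrites must be enabled before an image can place data in an EC pool):
   # ceph osd pool create ecpool 32 32 erasure
   # ceph osd pool set ecpool allow_ec_overwrites true
   # rbd create --size 10G --data-pool ecpool rbd/bench1
   # rbd bench-write rbd/bench1 --io-threads 16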
06/16/2017
- 02:48 PM Bug #20175 (Pending Backport): test_librbd_api.sh fails in upgrade test
- 02:19 PM Bug #20175: test_librbd_api.sh fails in upgrade test
- @Kefu: thanks -- that's a different issue. I'll take care of that today.
- 01:47 PM Bug #20175: test_librbd_api.sh fails in upgrade test
- Jason, the test still fails with this fix; see http://tracker.ceph.com/issues/20175#note-12
- 01:01 PM Bug #20175 (Pending Backport): test_librbd_api.sh fails in upgrade test
06/14/2017
- 05:03 PM Subtask #18786: rbd-mirror A/A: create simple image distribution policy
- Splitting into multiple PRs. https://github.com/ceph/ceph/pull/15691 introduces simple policy for image distribution ...
- 11:24 AM Subtask #18786: rbd-mirror A/A: create simple image distribution policy
- I'm rebasing my branch (https://github.com/vshankar/ceph/commits/rbd-mirror-image-distribution) with master now. Will...
- 01:41 PM Feature #10037 (Resolved): cache-tier: Optimise RBD image removal
- RBD only issues remove ops against all possible objects -- and with object map enabled it only issues them against ob...
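To make that concrete, a hedged example (image name and size are assumptions):
   # rbd create --size 100G --image-feature layering,exclusive-lock,object-map rbd/big
   # rbd rm rbd/big    <- issues only remove ops; with object-map, only against objects that exist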
- 06:26 AM Feature #10037: cache-tier: Optimise RBD image removal
- I think the proxy changes have fixed this for deletes on the RADOS side; does rbd do anything which would force promo...
- 04:18 AM Bug #18122: unittest_journal TestJournalTrimmer.RemoveObjectsWithOtherClient (intermitent)
- Journal failures belong to rbd.