Activity
From 06/05/2017 to 07/04/2017
07/04/2017
- 08:51 PM Backport #19807 (In Progress): kraken: [test] remove hard-coded image name from TestLibRBD.Mirror
- 09:04 AM Backport #19807 (Need More Info): kraken: [test] remove hard-coded image name from TestLibRBD.Mirror
- Waiting for https://github.com/ceph/ceph/pull/14577 to be merged
- 04:38 PM Backport #19336 (Need More Info): kraken: rbd: refuse to use an ec pool that doesn't support over...
- Segmentation fault in unittest_librbd
- 08:53 AM Backport #19336 (In Progress): kraken: rbd: refuse to use an ec pool that doesn't support overwrites
- 10:04 AM Backport #20016 (Need More Info): kraken: rbd-nbd: kernel reported invalid device size (0, expect...
- Wait until https://github.com/ceph/ceph/pull/14540 is merged
- 10:01 AM Backport #20005 (Need More Info): kraken: Lock release requests not honored after watch is re-acq...
- Non-trivial backport; needs an RBD developer.
- 08:57 AM Backport #19621 (In Progress): kraken: rbd-nbd: add signal handler
- 08:54 AM Backport #19609 (In Progress): kraken: [librados_test_stub] cls_cxx_map_get_XYZ methods don't ret...
- 08:37 AM Bug #20054: librbd memory overhead when used with KVM
- Please ignore the strikethrough in the previous comment...formatting got me again :-)
- 08:35 AM Bug #20054: librbd memory overhead when used with KVM
- @Jason: I did the same test with my RBD image mapped to the hypervisor via rbd-nbd (with ceph caching enabled). When ...
- 08:37 AM Backport #19173 (In Progress): kraken: rbd: rbd_clone_copy_on_read ineffective with exclusive-lock
07/02/2017
- 04:20 AM Bug #20484: snap can be removed while it is being used by an rbd-nbd device
- Solution:
https://github.com/ceph/ceph/pull/16057
- 03:43 AM Bug #20484 (Won't Fix): snap can be removed while it is being used by an rbd-nbd device
- I mapped a snap to an rbd-nbd device with the command:
# rbd-nbd map rbd/test@snap
but if I remove the snap accidentally, then I ...
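A minimal sketch of the reproduction described above (pool, image, and snapshot names are placeholders):
# Create an image and a snapshot, then map the snapshot via rbd-nbd.
rbd create rbd/test --size 1G
rbd snap create rbd/test@snap
rbd-nbd map rbd/test@snap     # returns e.g. /dev/nbd0
# Nothing prevents the snapshot from being removed while it is still mapped:
rbd snap rm rbd/test@snap     # succeeds, leaving /dev/nbd0 dangling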
06/30/2017
- 12:13 PM Bug #20054: librbd memory overhead when used with KVM
- @Sebastian: just to eliminate the guest OS as a possibility, would it be possible for you to re-run using qemu-tcmu o...
- 06:11 AM Bug #20054: librbd memory overhead when used with KVM
- As I can not reproduce the high memory usage when using fio directly, I created a bug report on the qemu side ([1]). ...
- 01:30 AM Bug #20421 (Resolved): [openstack] cinder backup driver fails due to rbd python API change
06/29/2017
- 06:57 PM Bug #19035 (Fix Under Review): [rbd CLI] map with cephx disabled results in error message
- *PR*: https://github.com/ceph/ceph/pull/16024
- 06:16 PM Bug #19442 (Duplicate): rbd export-diff isn't counting the AioTruncate op correctly.
- Duplicate of tracker issue #19570 -- only an issue in hammer due to changes introduced in infernalis to support deep-...
- 06:11 PM Bug #20168 (Fix Under Review): IO work queue does not process failed lock request
- *PR*: https://github.com/ceph/ceph/pull/15860
- 06:09 PM Bug #18982 (Duplicate): How to get out of weird situation after rbd flatten?
- Seems like it's a duplicate of issue #18117
06/28/2017
- 04:07 PM Backport #20152 (In Progress): hammer: Potential IO hang if image is flattened while read request...
- https://github.com/ceph/ceph/pull/15980
- 08:33 AM Bug #20054: librbd memory overhead when used with KVM
- @Jason: I did the test again with an image without exclusive-lock and object map. I also changed iodepth to 16. I got...
06/27/2017
- 09:09 PM Documentation #20437 (In Progress): Convert downstream Ceph iSCSI documentation for upstream
- 09:06 PM Documentation #20437 (Resolved): Convert downstream Ceph iSCSI documentation for upstream
- Convert all the Ceph iSCSI downstream AsciiDoc (adoc) content to reStructuredText (rst) for upstream consumption.
- 03:28 PM Bug #20054: librbd memory overhead when used with KVM
- @Sebastian: Yes, the rbd engine definitely uses the iodepth setting. If running multiple jobs against the same image,...
- 08:58 AM Bug #20054: librbd memory overhead when used with KVM
- Sorry, again the formatting issue. Here the fio file again:...
- 08:56 AM Bug #20054: librbd memory overhead when used with KVM
- I tried to reproduce this issue with 'fio' (no qemu in the loop) over the weekend, but I was not able to get the same...
- 09:39 AM Bug #20426: some generic options can not be passed by rbd-nbd
- Expected to be fixed by this PR: https://github.com/ceph/ceph/pull/14135
- 09:38 AM Bug #20426 (Resolved): some generic options can not be passed by rbd-nbd
- # rbd-nbd --help
Usage: rbd-nbd [options] map <image-or-snap-spec>    Map an image to nbd device
               unmap ...
06/26/2017
- 11:16 PM Bug #20421 (Fix Under Review): [openstack] cinder backup driver fails due to rbd python API change
- *PR*: https://github.com/ceph/ceph/pull/15932
- 10:50 PM Bug #20421 (Resolved): [openstack] cinder backup driver fails due to rbd python API change
- One such example is that the Cinder Backup driver directly creates an RBD exception, but under commit ac580718e9 the ...
06/23/2017
- 03:50 PM Bug #20393: IO hang in libvirt/rbd VMs...
- A 1-second tail latency can occur on an overloaded cluster. Can you reproduce the 60-120 second stalled IO? As it is, these ...
- 03:18 PM Bug #20393: IO hang in libvirt/rbd VMs...
- Info of rbd image:
root@cephproxy01:~# rbd -p vms info c9c5db8e-7502-4acc-b670-af18bdf89886_disk
rbd image 'c9c5db8...
- 02:37 PM Bug #20393: IO hang in libvirt/rbd VMs...
- 4 files detailing several occurrences uploaded via ceph-post-file to 55ee8d8a-4058-48c5-9586-39c600465e9d.
Note hung...
- 01:53 PM Bug #20393 (Closed): IO hang in libvirt/rbd VMs...
- During heavy IO, processes/threads in libvirt-qemu VMs backed by rbd volumes hang, exceeding hung_task_timeout_secs t...
- 12:01 PM Bug #20388: combination of kvm using librbd from kraken and online resize leads to data corruption
- Perfect, thanks.
- 10:13 AM Bug #20388: combination of kvm using librbd from kraken and online resize leads to data corruption
- Hi Jason, thanks for the answer.
Yes, I'm very aware this bug report lacks precision. In fact I was in the middle of ...
- 12:07 AM Bug #20388 (Need More Info): combination of kvm using librbd from kraken and online resize leads ...
- We will need a repeatable reproducer to try and fix it -- or debug-level logs from the affected librbd instance.
06/22/2017
- 09:44 PM Bug #20388: combination of kvm using librbd from kraken and online resize leads to data corruption
- Just to add some confusion, I'm unable to reproduce this issue on an Ubuntu-based machine with librbd from kraken.
So...
- 08:38 PM Bug #20388 (Closed): combination of kvm using librbd from kraken and online resize leads to data ...
- Hi everybody. We experienced significant data corruption recently. I've been able to reproduce it and I suspect librbd from ...
- 11:33 AM Bug #18844 (Need More Info): import-diff failed: (33) Numerical argument out of domain - if image...
- There have been several related fixes in v10.2.6 [1]; one of them fixed the crashes you observed [2].
So I believe ...
06/21/2017
- 11:43 PM Bug #20054 (Need More Info): librbd memory overhead when used with KVM
- If anyone can provide an example job reproducing this with fio utilizing the direct rbd engine (i.e. take QEMU out-of...
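A minimal sketch of such a fio job using the direct rbd engine (the client, pool, and image names are assumptions; adjust for your cluster):
cat > librbd-overhead.fio <<'EOF'
[global]
ioengine=rbd        # fio drives librbd directly, taking QEMU out of the loop
clientname=admin    # cephx user, without the "client." prefix
pool=rbd            # assumed pool name
rbdname=test        # assumed image name
rw=randwrite
bs=4k
iodepth=16
runtime=300
time_based
[rbd-job]
EOF
fio librbd-overhead.fio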
- 11:30 PM Bug #20054: librbd memory overhead when used with KVM
- I am seeing the same issue with my Jewel cluster. Plenty of VMs with over 100% memory overhead.
Can we get the pri...
06/20/2017
- 08:51 PM Bug #12018 (Resolved): rbd and pool quota do not go well together
- 08:50 PM Backport #14824 (Rejected): hammer: rbd and pool quota do not go well together
- 08:50 PM Backport #14824 (New): hammer: rbd and pool quota do not go well together
- Attempted backport https://github.com/ceph/ceph/pull/10871 was closed
06/19/2017
- 10:37 PM Bug #20333 (Rejected): RBD bench in EC pool w/ overwrites overwhelms OSDs
- Sorry, I heard today from Josh that this report involved a vstart cluster and wasn't unique to EC pools in any case.
- 11:58 AM Bug #20333 (Need More Info): RBD bench in EC pool w/ overwrites overwhelms OSDs
- I'm not really sure what RBD can do in this situation. That test was only 16 concurrent IOs in-flight, so when you ha...
- 08:52 PM Backport #20351 (Resolved): kraken: test_librbd_api.sh fails in upgrade test
- https://github.com/ceph/ceph/pull/16195
- 08:06 PM Backport #19957: jewel: rbd: Lock release requests not honored after watch is re-acquired
- @Jason - is this something you want to tackle?
06/18/2017
- 08:23 AM Bug #20333: RBD bench in EC pool w/ overwrites overwhelms OSDs
- Hopefully the RBD client can do something to be a little friendlier? Tracking OSD throttling improvements in the orig...
- 08:22 AM Bug #20333 (Rejected): RBD bench in EC pool w/ overwrites overwhelms OSDs
- When running "rbd bench-write" using an RBD image stored in an EC pool, the some OSD threads start to timeout and eve...
06/16/2017
- 02:48 PM Bug #20175 (Pending Backport): test_librbd_api.sh fails in upgrade test
- 02:19 PM Bug #20175: test_librbd_api.sh fails in upgrade test
- @Kefu: thanks -- that's a different issue. I'll take care of that today.
- 01:47 PM Bug #20175: test_librbd_api.sh fails in upgrade test
- Jason, the test still fails with this fix. see http://tracker.ceph.com/issues/20175#note-12
- 01:01 PM Bug #20175 (Pending Backport): test_librbd_api.sh fails in upgrade test
06/14/2017
- 05:03 PM Subtask #18786: rbd-mirror A/A: create simple image distribution policy
- Splitting into multiple PRs. https://github.com/ceph/ceph/pull/15691 introduces simple policy for image distribution ...
- 11:24 AM Subtask #18786: rbd-mirror A/A: create simple image distribution policy
- I'm rebasing my branch (https://github.com/vshankar/ceph/commits/rbd-mirror-image-distribution) with master now. Will...
- 01:41 PM Feature #10037 (Resolved): cache-tier: Optimise RBD image removal
- RBD only issues remove ops against all possible objects -- and with object map enabled it only issues them against ob...
- 06:26 AM Feature #10037: cache-tier: Optimise RBD image removal
- I think the proxy changes have fixed this for deletes on the RADOS side; does rbd do anything which would force promo...
- 04:18 AM Bug #18122: unittest_journal TestJournalTrimmer.RemoveObjectsWithOtherClient (intermitent)
- Journal failures belong to rbd.
06/13/2017
- 01:51 AM Bug #20175: test_librbd_api.sh fails in upgrade test
- Jason, could you help take a look? By inspecting @qa/suites/upgrade/client-upgrade/jewel-client-x/basic@, I think it'...
- 01:47 AM Bug #20175: test_librbd_api.sh fails in upgrade test
- tested at http://qa-proxy.ceph.com/teuthology/kchai-2017-06-12_12:19:18-upgrade-wip-20175-kefu---basic-mira/1279912/t...
06/12/2017
- 08:34 PM Backport #20267 (Resolved): jewel: [api] is_exclusive_lock_owner shouldn't return -EBUSY
- https://github.com/ceph/ceph/pull/16296
- 08:34 PM Backport #20266 (Resolved): kraken: [api] is_exclusive_lock_owner shouldn't return -EBUSY
- https://github.com/ceph/ceph/pull/16187
- 08:34 PM Backport #20265 (Resolved): jewel: [cli] ensure positional arguments exist before casting
- https://github.com/ceph/ceph/pull/16295
- 08:34 PM Backport #20264 (Resolved): kraken: [cli] ensure positional arguments exist before casting
- https://github.com/ceph/ceph/pull/16186
- 07:26 AM Feature #18984: RFE: let rbd export write directly to a block device
- Jason Dillaman wrote:
> Couldn't you just run "rbd export <image-spec> - | dd of=/dev/someblockdevice" and achieve t...
- 07:25 AM Feature #18984: RFE: let rbd export write directly to a block device
- Mykola Golub wrote:
> Note, write serialization is not the only difference between writing to a file and to stdout. ...
06/11/2017
- 08:48 PM Feature #18984: RFE: let rbd export write directly to a block device
- Couldn't you just run "rbd export <image-spec> - | dd of=/dev/someblockdevice" and achieve the desired outcome?
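A sketch of that suggestion (image and device names are placeholders):
# Stream the export to stdout and let dd handle writing to the block device.
rbd export rbd/test - | dd of=/dev/sdX bs=4M oflag=direct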
- 01:58 PM Feature #18984: RFE: let rbd export write directly to a block device
- Note, write serialization is not the only difference between writing to a file and to stdout. Another difference is t...
06/10/2017
- 06:31 PM Backport #19611 (In Progress): kraken: Issues with C API image metadata retrieval functions
- 05:11 PM Bug #19942 (Duplicate): "[ FAILED ] TestLibRBD.Metadata" in upgrade:client-upgrade-kraken-distr...
- 04:59 PM Bug #19942: "[ FAILED ] TestLibRBD.Metadata" in upgrade:client-upgrade-kraken-distro-basic-smithi
- This test shows a real bug that was fixed in master #19588, and the test was extended to catch it then. I believe it ...
- 03:24 PM Bug #20223 (Resolved): rbd.ImageNotFound: __init__() takes exactly 3 positional arguments, 1 given
06/09/2017
- 04:07 PM Bug #20175 (Fix Under Review): test_librbd_api.sh fails in upgrade test
- https://github.com/ceph/ceph/pull/15602
- 03:29 PM Bug #20175: test_librbd_api.sh fails in upgrade test
- Because the user applications do not link against libcommon, I think it'd be fine to backport the change of @prio_adj...
- 03:23 PM Bug #20175: test_librbd_api.sh fails in upgrade test
- Because the layout of @PerfCounters@ was changed in luminous: we added a new field @prio_adjust@ to it. So the ...
- 12:59 PM Bug #20175: test_librbd_api.sh fails in upgrade test
- ...
- 12:09 PM Bug #20175: test_librbd_api.sh fails in upgrade test
- ...
- 11:34 AM Bug #20175: test_librbd_api.sh fails in upgrade test
- ...
- 03:26 PM Bug #19889 (Need More Info): rbd/compatibility: rbd import fails with Jewel client, Kraken OSDs (...
06/08/2017
- 02:53 PM Bug #20175: test_librbd_api.sh fails in upgrade test
- It's not relevant, and jewel's "ceph_test_librbd_api" crashed in a different way when dynamically linked against libr...
- 01:32 PM Bug #20223 (Fix Under Review): rbd.ImageNotFound: __init__() takes exactly 3 positional arguments...
- *PR*: https://github.com/ceph/ceph/pull/15574
- 01:29 PM Bug #20223 (In Progress): rbd.ImageNotFound: __init__() takes exactly 3 positional arguments, 1 g...
- 01:29 PM Bug #20223: rbd.ImageNotFound: __init__() takes exactly 3 positional arguments, 1 given
- OpenAttic serializes the exception via multiprocessing pipes. The new OSError exception cannot be properly serialized...
- 01:25 PM Bug #20223: rbd.ImageNotFound: __init__() takes exactly 3 positional arguments, 1 given
- <jdillaman> sebastian-w: I wonder if the issue is that the new rbd.OSError is not picklable for the multiprocessing.P...
- 11:46 AM Bug #20223 (Resolved): rbd.ImageNotFound: __init__() takes exactly 3 positional arguments, 1 given
- While investigating https://tracker.openattic.org/browse/OP-2311, I came across these lines:...
06/07/2017
- 01:48 PM Feature #3499 (Resolved): qemu-rbd: support bdrv_has_zero_init
- Addressed in upstream commit 3ac21627
- 01:34 PM Feature #18917: rbd: show the latest snapshot in rbd info
- Note that the most recent snapshot might not be what the HEAD revision of the image is based upon if a rollback was u...
- 06:25 AM Subtask #18789 (Resolved): rbd-mirror A/A: coordinate image syncs with leader
06/06/2017
- 10:49 AM Bug #20185 (Pending Backport): [cli] ensure positional arguments exist before casting
- 10:49 AM Bug #20182 (Pending Backport): [api] is_exclusive_lock_owner shouldn't return -EBUSY
- 07:39 AM Bug #18963 (Resolved): rbd-mirror: forced failover does not function when peer is unreachable
06/05/2017
- 05:19 PM Bug #20185 (Fix Under Review): [cli] ensure positional arguments exist before casting
- *PR*: https://github.com/ceph/ceph/pull/15492
- 05:01 PM Bug #20185 (Resolved): [cli] ensure positional arguments exist before casting
- For example: "rbd feature enable --image xyz" will crash since the feature name positional was not specified.
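For context, the failing versus working invocations look like this (image and feature names are examples):
rbd feature enable --image xyz                  # crashes: feature positional missing
rbd feature enable --image xyz exclusive-lock   # works: feature name supplied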
- 03:16 PM Backport #20023 (In Progress): jewel: rbd-mirror replay fails on attempting to reclaim data to lo...
- 12:32 PM Backport #20023 (New): jewel: rbd-mirror replay fails on attempting to reclaim data to local site...
- Jason agreed to do this one.
- 01:51 PM Backport #20022 (In Progress): kraken: rbd-mirror replay fails on attempting to reclaim data to l...
- 12:41 PM Support #20183 (New): Ceph RBD image-feature
- How can I run an image with all the features?
I am running:
cephuser@ceph01u:~$ ceph -v
ceph version 11.2.0 (f...
- 12:33 PM Bug #19811: rbd-mirror replay fails on attempting to reclaim data to local site (LS) from distant...
- OK, #20023 reassigned to Jason.
- 11:41 AM Bug #19811: rbd-mirror replay fails on attempting to reclaim data to local site (LS) from distant...
- @Nathan: feel free to re-assign this one to me and I'll make the necessary changes.
- 12:31 PM Bug #19907 (Resolved): rbd-mirror: admin socket path names collision
- 11:40 AM Bug #19907: rbd-mirror: admin socket path names collision
- @Nathan: since this is more of a nice-to-have, I am also fine just dropping the need for a backport to jewel (and kra...
- 12:31 PM Backport #20009 (Rejected): jewel: rbd-mirror: admin socket path names collision
- Backport is complicated and not worth the effort for a mere "nice-to-have" backport.
- 12:31 PM Backport #20008 (Rejected): kraken: rbd-mirror: admin socket path names collision
- Backport is complicated and not worth the effort for a mere "nice-to-have" backport.
- 12:18 PM Bug #20182 (Fix Under Review): [api] is_exclusive_lock_owner shouldn't return -EBUSY
- *PR*: https://github.com/ceph/ceph/pull/15483
- 12:06 PM Bug #20182 (Resolved): [api] is_exclusive_lock_owner shouldn't return -EBUSY
- This error code indicates that another client owns the exclusive lock. Instead, it should return 0 with the boolean s...
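In the Python binding, a sketch of the intended post-fix behavior (conffile, pool, and image names are assumptions):
python - <<'EOF'
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')   # assumed pool name
image = rbd.Image(ioctx, 'test')    # assumed image name

# Should return False when another client holds the exclusive lock,
# rather than surfacing -EBUSY as an error.
print(image.is_exclusive_lock_owner())

image.close()
ioctx.close()
cluster.shutdown()
EOF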