Activity
From 06/27/2017 to 07/26/2017
07/26/2017
- 01:07 PM Bug #20743 (Can't reproduce): TestClsRbd.group_image_list failed
- 02:52 AM Documentation #20702 (Resolved): [cli] document new trash commands
07/25/2017
- 12:22 AM Feature #20762 (New): rbdmap should support other block devices
- Right now "rbd nbd XYZ" is supported and eventually "rbd ggate XYZ"
07/23/2017
- 01:04 PM Bug #20580: rbd map should warn when creating duplicate devices for the same image
- PR: https://github.com/ceph/ceph/pull/16517
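The warning requested in #20580 amounts to simple bookkeeping: remember which image specs are already mapped and warn before creating a second device for the same image. A minimal sketch (illustrative Python, not the actual rbd or kernel code; all names are hypothetical):

```python
import warnings

class MappingTable:
    """Hypothetical tracker of image-spec -> mapped devices."""

    def __init__(self):
        self._mapped = {}  # image spec -> list of device names

    def map_image(self, spec, device):
        # Warn, but still allow, a duplicate mapping -- matching the
        # requested "rbd map" behavior of warning rather than refusing.
        if spec in self._mapped:
            warnings.warn("image %r is already mapped to %s"
                          % (spec, self._mapped[spec]))
        self._mapped.setdefault(spec, []).append(device)
        return device
```

Whether duplicates should be refused outright (as suggested in a later comment) or merely warned about is exactly the design question the ticket discusses.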
07/22/2017
- 03:20 AM Bug #20743 (Can't reproduce): TestClsRbd.group_image_list failed
- ...
- 02:25 AM Documentation #20702: [cli] document new trash commands
- https://github.com/ceph/ceph/pull/16498
07/21/2017
- 05:25 PM Documentation #20437: Convert downstream Ceph iSCSI documentation for upstream
- As far as I know, the only thing left to update in the documentation is the location of where the Ceph iSCSI RPM p...
- 02:19 PM Cleanup #20737 (Resolved): [config] switch to new config option getter methods
07/20/2017
- 03:06 PM Documentation #20702 (Resolved): [cli] document new trash commands
- 03:04 PM Documentation #20701 (Resolved): [rbd-mirror] update mirroring docs for Luminous
- * delayed replication configuration
* HA rbd-mirror daemons
* unique user ids per daemon
- 01:06 PM Bug #20054: librbd memory overhead when used with KVM
- @Sebastian: I need a reproducer for librbd to ensure that we are not chasing a QEMU issue.
- 10:35 AM Bug #20054: librbd memory overhead when used with KVM
- Any news on this or anything I can do?
07/19/2017
- 12:38 PM Cleanup #15306 (Resolved): ImageWatcher should derive from ObjectWatcher
- 12:36 PM Cleanup #19274 (Rejected): src/tools/rbd/action/Kernel.cc: ceph.git does not exist
- ...
- 12:33 PM Cleanup #16990 (In Progress): 'rbd image-meta remove' of missing key does not return error
- *PR*: https://github.com/ceph/ceph/pull/16393
- 12:30 PM Bug #19081 (Resolved): rbd: refuse to use an ec pool that doesn't support overwrites
- 12:29 PM Backport #19336 (Resolved): kraken: rbd: refuse to use an ec pool that doesn't support overwrites
- 12:29 PM Bug #19597 (Resolved): [librados_test_stub] cls_cxx_map_get_XYZ methods don't return correct value
- 12:29 PM Backport #19609 (Resolved): kraken: [librados_test_stub] cls_cxx_map_get_XYZ methods don't return...
- 12:27 PM Backport #20154 (Resolved): kraken: Potential IO hang if image is flattened while read request is...
- 12:26 PM Backport #20266 (Resolved): kraken: [api] is_exclusive_lock_owner shouldn't return -EBUSY
- 12:23 PM Backport #20351 (Resolved): kraken: test_librbd_api.sh fails in upgrade test
- 10:57 AM Backport #20022 (Resolved): kraken: rbd-mirror replay fails on attempting to reclaim data to loca...
- 10:52 AM Bug #17951 (Resolved): AdminSocket::bind_and_listen failed after rbd-nbd mapping
- 10:52 AM Backport #18970 (Resolved): kraken: rbd: AdminSocket::bind_and_listen failed after rbd-nbd mapping
- 10:50 AM Feature #18335 (Resolved): rbd-nbd: check /sys/block/nbdX/size to ensure kernel mapped correctly
- 10:50 AM Backport #18910 (Resolved): kraken: rbd-nbd: check /sys/block/nbdX/size to ensure kernel mapped c...
- 01:11 AM Bug #18731 (Resolved): [teuthology] rbd-mirror tests sporadically fail due to pid file error
- The teuthology "rbd-mirror" task did not run the daemon in the foreground -- so it was never stopped by the test harn...
- 01:10 AM Bug #18435 (New): [ FAILED ] TestLibRBD.RenameViaLockOwner
- 01:07 AM Bug #17928 (Won't Fix): import-diff doesn't work as what I expected
- 01:05 AM Bug #15404 (Can't reproduce): KVM: terminate called after throwing an instance of 'ceph::buffer::...
- 01:05 AM Bug #15290 (Can't reproduce): rbd journal
- 12:38 AM Bug #20656 (Can't reproduce): [rbd-mirror] local images are not properly starting image replayer
- 12:35 AM Bug #20654 (Can't reproduce): [rbd-mirror] newly replicated mirrored images do not send rbd_mirro...
07/18/2017
- 08:54 PM Bug #20643 (Resolved): [cls] trash_list should take start_after / max_return parameters
- 07:15 PM Bug #20654 (In Progress): [rbd-mirror] newly replicated mirrored images do not send rbd_mirroring...
- 07:04 PM Bug #20655 (Fix Under Review): [rbd-mirror] demoting a primary image may result in the image bein...
- *PR*: https://github.com/ceph/ceph/pull/16398
- 02:59 PM Bug #20655 (In Progress): [rbd-mirror] demoting a primary image may result in the image being del...
07/17/2017
- 08:36 PM Bug #20656 (Can't reproduce): [rbd-mirror] local images are not properly starting image replayer
- These should be started for local-only images so that we can retrieve up-to-date status for dashboard monitoring.
- 08:23 PM Bug #20655 (Resolved): [rbd-mirror] demoting a primary image may result in the image being deleted
- If a primary image is not replicated to a peer, demoting the image and restarting the rbd-mirror daemon can result in...
- 08:22 PM Bug #20654 (Can't reproduce): [rbd-mirror] newly replicated mirrored images do not send rbd_mirro...
- This causes rbd-mirror daemon's pool watcher to fail to detect the new local image.
- 02:26 PM Bug #20644 (In Progress): [rbd-mirror] assertion failure when mirrored pool is removed
- 02:15 PM Bug #20643 (Fix Under Review): [cls] trash_list should take start_after / max_return parameters
- *PR*: https://github.com/ceph/ceph/pull/16372
- 01:29 PM Bug #20643 (In Progress): [cls] trash_list should take start_after / max_return parameters
- 12:20 PM Cleanup #16990: 'rbd image-meta remove' of missing key does not return error
- # rbd --version
ceph version 12.0.0-1468-g0214e06 (0214e063056cd0ce3a80cea13087f68be8017b20)
# rbd image-meta remov...
- 11:48 AM Feature #17178 (In Progress): [OpenStack Glance] update store driver to sparsify rbd-backed images
- *Upstream review*: https://review.openstack.org/#/c/430641/33
07/15/2017
- 03:01 AM Bug #20644 (Resolved): [rbd-mirror] assertion failure when mirrored pool is removed
- ...
- 01:15 AM Bug #20643 (Resolved): [cls] trash_list should take start_after / max_return parameters
- This should be fixed prior to the Luminous release
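The start_after / max_return pattern requested for trash_list in #20643 is the standard cursor-style pagination used by other cls list methods. A self-contained sketch of the idea (hypothetical names, not the actual cls API):

```python
def paged_list(entries, start_after="", max_return=64):
    """Return up to max_return keys that sort strictly after start_after."""
    selected = [e for e in sorted(entries) if e > start_after]
    return selected[:max_return]

def list_all(entries, page_size=2):
    """Drain the listing page by page, as a client of the cls method would."""
    out, cursor = [], ""
    while True:
        page = paged_list(entries, start_after=cursor, max_return=page_size)
        if not page:
            return out
        out.extend(page)
        cursor = page[-1]  # resume after the last key seen
```

The point of the parameters is that no single OSD class call has to return an unbounded result set; the client resumes from the last key it saw.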
07/14/2017
- 08:19 PM Backport #20635 (In Progress): jewel: [test] rbd-mirror teuthology task doesn't start daemon in f...
- 08:10 PM Backport #20635 (Resolved): jewel: [test] rbd-mirror teuthology task doesn't start daemon in fore...
- https://github.com/ceph/ceph/pull/16343
- 08:17 PM Backport #20634 (In Progress): kraken: [test] rbd-mirror teuthology task doesn't start daemon in ...
- 08:10 PM Backport #20634 (Resolved): kraken: [test] rbd-mirror teuthology task doesn't start daemon in for...
- https://github.com/ceph/ceph/pull/16342
- 08:10 PM Backport #20637 (Resolved): jewel: rbd-mirror: cluster watcher should ignore -EPERM errors agains...
- https://github.com/ceph/ceph/pull/21225
- 08:10 PM Backport #20636 (Rejected): kraken: rbd-mirror: cluster watcher should ignore -EPERM errors again...
- 03:24 PM Bug #20630 (Pending Backport): [test] rbd-mirror teuthology task doesn't start daemon in foregrou...
- 03:17 PM Bug #20630 (Fix Under Review): [test] rbd-mirror teuthology task doesn't start daemon in foregrou...
- *PR*: https://github.com/ceph/ceph/pull/16340
- 02:31 PM Bug #20630 (Resolved): [test] rbd-mirror teuthology task doesn't start daemon in foreground mode
- This prevents the daemon-helper from cleaning up the daemon at the end of the test.
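The cleanup problem in #20630 is generic: a process started in the foreground remains a direct child of the harness, so the harness can terminate it; a self-daemonizing process re-parents and escapes. A minimal supervision sketch (illustrative Python, not the teuthology daemon-helper itself):

```python
import subprocess
import sys

def run_supervised(argv, timeout=None):
    """Run argv in the foreground; kill it if it outlives the timeout."""
    proc = subprocess.Popen(argv)              # foreground: we keep the handle
    try:
        return proc.wait(timeout=timeout)      # normal exit path
    except subprocess.TimeoutExpired:
        proc.terminate()                       # harness-driven cleanup works
        proc.wait()                            # because proc is our child
        return proc.returncode
```

If the child forked itself into the background instead, `proc` would exit immediately and the real worker would be unreachable at cleanup time.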
07/13/2017
- 09:54 AM Bug #20571 (Pending Backport): rbd-mirror: cluster watcher should ignore -EPERM errors against re...
- 07:39 AM Bug #20580: rbd map should warn when creating duplicate devices for the same image
- I think the same image should not be mapped more than once. I modified the kernel code (drivers/block/rbd.c), then rbd...
- 01:30 AM Backport #20515: jewel: IO work queue does not process failed lock request
- @Nathan: you have a very keen nose -- you are correct it will not be trivial.
07/12/2017
- 06:50 PM Backport #20517 (In Progress): kraken: [rbd CLI] map with cephx disabled results in error message
- 06:48 PM Backport #20518 (In Progress): jewel: [rbd CLI] map with cephx disabled results in error message
- 06:47 PM Backport #20515 (Need More Info): jewel: IO work queue does not process failed lock request
- @Jason - I smell non-triviality here
- 06:45 PM Backport #20267 (In Progress): jewel: [api] is_exclusive_lock_owner shouldn't return -EBUSY
- 06:44 PM Backport #20265 (In Progress): jewel: [cli] ensure positional arguments exist before casting
- 03:38 PM Bug #20484 (Fix Under Review): snap can be removed while it is been using by rbd-nbd device
- 01:00 PM Feature #15321 (Resolved): Support asynchronous v2 image creation/cloning
- 01:00 PM Backport #17008 (Closed): jewel: Support asynchronous v2 image creation/cloning
- @Nathan: I think we should just close this backport -- it will be a huge change to backport to Jewel.
- 10:24 AM Backport #17008: jewel: Support asynchronous v2 image creation/cloning
- Waiting for https://github.com/ceph/ceph/pull/10896 to be merged - still open.
- 10:21 AM Backport #18137 (In Progress): jewel: rbd-mirror: image sync should send NOCACHE advise flag
- 10:14 AM Backport #18137 (New): jewel: rbd-mirror: image sync should send NOCACHE advise flag
- https://github.com/ceph/ceph/pull/12043 was closed (sparse reads will not be backported for now)
- 10:12 AM Backport #18500: jewel: rbd-mirror: potential race mirroring cloned image
- No change - backport is still non-trivial
- 10:09 AM Backport #18704: jewel: Prevent librbd from blacklisting the in-use librados client
- @Jason: https://github.com/ceph/ceph/pull/12890 has been merged, but the backport is still non-trivial.
- 10:04 AM Backport #19957: jewel: rbd: Lock release requests not honored after watch is re-acquired
- non-trivial backport; needs an RBD developer
07/11/2017
- 08:41 PM Bug #19057 (Won't Fix): krbd suite does not run on hammer (rbd task fails with "No route to host")
- I think we can ignore hammer krbd now?
- 05:45 PM Bug #20571 (Fix Under Review): rbd-mirror: cluster watcher should ignore -EPERM errors against re...
- *PR*: https://github.com/ceph/ceph/pull/16264
- 05:21 PM Bug #20571 (In Progress): rbd-mirror: cluster watcher should ignore -EPERM errors against reading...
- 02:58 PM Bug #20571 (Resolved): rbd-mirror: cluster watcher should ignore -EPERM errors against reading 'r...
- ...
- 03:56 PM Bug #20580 (Resolved): rbd map should warn when creating duplicate devices for the same image
- ...
- 05:34 AM Bug #20567 (Resolved): rbd-mirror do not support ec pools when the primary image use ec data pool.
- The primary image uses an EC data pool, but when rbd-mirror creates the mirror image, the data pool feature is lost and a...
07/08/2017
- 05:59 PM Backport #18324 (Closed): kraken: JournalMetadata flooding with errors when being blacklisted
- Changing status to "Closed" so it doesn't turn up in the "missing target version" saved search.
- 05:58 PM Backport #18322 (Closed): kraken: librbd::ResizeRequest: failed to update image header: (16) Devi...
- Changing status to "Closed" so it doesn't turn up in the "missing target version" saved search.
- 05:58 PM Backport #18319 (Closed): kraken: rbd status: json format has duplicated/overwritten key
- Changing status to "Closed" so it doesn't turn up in the "missing target version" saved search.
- 05:58 PM Backport #18289 (Closed): kraken: objectmap does not show object existence correctly
- Changing status to "Closed" so it doesn't turn up in the "missing target version" saved search.
- 05:58 PM Backport #18279 (Closed): kraken: RBD diff got SIGABRT with "--whole-object" for RBD whose parent...
- Changing status to "Closed" so it doesn't turn up in the "missing target version" saved search.
- 05:57 PM Backport #18275 (Closed): kraken: rbd-nbd: invalid error code for "failed to read nbd request" me...
- Changing status to "Closed" so it doesn't turn up in the "missing target version" saved search.
07/07/2017
- 07:37 AM Backport #20532 (In Progress): jewel: test_librbd_api.sh fails in upgrade test
- https://github.com/ceph/ceph/pull/15602
- 01:29 AM Backport #20351 (Fix Under Review): kraken: test_librbd_api.sh fails in upgrade test
07/06/2017
- 11:23 PM Backport #20351 (In Progress): kraken: test_librbd_api.sh fails in upgrade test
- 09:50 PM Documentation #20437: Convert downstream Ceph iSCSI documentation for upstream
- Closed the previous pull request[1] and created a new pull request[2].
[1] - https://github.com/ceph/ceph/pull/161...
- 02:03 AM Documentation #20437: Convert downstream Ceph iSCSI documentation for upstream
- https://github.com/ceph/ceph/pull/16146
- 07:20 PM Backport #20532 (Resolved): jewel: test_librbd_api.sh fails in upgrade test
- https://github.com/ceph/ceph/pull/15602
- 07:17 PM Bug #20175: test_librbd_api.sh fails in upgrade test
- *master PR*: https://github.com/ceph/ceph/pull/15611
- 06:31 PM Backport #20266 (In Progress): kraken: [api] is_exclusive_lock_owner shouldn't return -EBUSY
- 06:23 PM Backport #20264 (In Progress): kraken: [cli] ensure positional arguments exist before casting
- 05:58 PM Backport #20154 (In Progress): kraken: Potential IO hang if image is flattened while read request...
- 05:39 PM Backport #19336 (In Progress): kraken: rbd: refuse to use an ec pool that doesn't support overwrites
- 03:47 PM Bug #18447 (Resolved): Potential race when removing two-way mirroring image
- 03:47 PM Backport #19807 (Resolved): kraken: [test] remove hard-coded image name from TestLibRBD.Mirror
- 03:47 PM Backport #19227 (Resolved): kraken: rbd: Enabling mirroring for a pool wiht clones may fail
- 03:46 PM Backport #18555 (Resolved): kraken: rbd: Potential race when removing two-way mirroring image
07/05/2017
- 07:01 PM Bug #20509 (Closed): lrbd iscsi ceph::buffer::end_of_buffer
- 05:49 PM Bug #20509: lrbd iscsi ceph::buffer::end_of_buffer
- Indeed, not the right place. The right place in this case would be https://bugzilla.suse.com/enter_bug.cgi?product=SU...
- 04:33 PM Bug #20509 (Closed): lrbd iscsi ceph::buffer::end_of_buffer
- Hi ceph-team,
I am not sure if I created the ticket in the right place, but we have a problem with our ceph lrbd --...
- 05:44 PM Backport #20518 (Resolved): jewel: [rbd CLI] map with cephx disabled results in error message
- https://github.com/ceph/ceph/pull/16297
- 05:44 PM Backport #20517 (Resolved): kraken: [rbd CLI] map with cephx disabled results in error message
- https://github.com/ceph/ceph/pull/16298
- 05:43 PM Backport #20515 (Resolved): jewel: IO work queue does not process failed lock request
- https://github.com/ceph/ceph/pull/17402
- 05:43 PM Backport #20514 (Rejected): kraken: IO work queue does not process failed lock request
- 03:52 PM Feature #19349 (Resolved): rbd-nbd: add signal handler
- 03:52 PM Backport #19621 (Resolved): kraken: rbd-nbd: add signal handler
- 03:51 PM Bug #19588 (Resolved): Issues with C API image metadata retrieval functions
- 03:50 PM Backport #19611 (Resolved): kraken: Issues with C API image metadata retrieval functions
- 03:48 PM Backport #19794 (Resolved): kraken: [test] test_notify.py: assert(not image.is_exclusive_lock_own...
- 03:30 PM Backport #19174 (In Progress): jewel: rbd_clone_copy_on_read ineffective with exclusive-lock
- 03:27 PM Backport #19173 (Resolved): kraken: rbd: rbd_clone_copy_on_read ineffective with exclusive-lock
- 02:26 PM Documentation #20437: Convert downstream Ceph iSCSI documentation for upstream
- Pushed branch wip-doc-20437 to my fork[1] of the Ceph project.
[1] - https://github.com/ritz303/ceph/tree/wip-doc-20437
- 12:09 PM Cleanup #17891 (In Progress): Creation of rbd image with format 1 should be disallowed
- 11:54 AM Bug #20168 (Pending Backport): IO work queue does not process failed lock request
- 11:54 AM Bug #19035 (Pending Backport): [rbd CLI] map with cephx disabled results in error message
- 07:37 AM Backport #19872 (Resolved): kraken: [rbd-mirror] failover and failback of unmodified image result...
- 07:37 AM Backport #19833 (Resolved): kraken: Cannot delete some snapshots after upgrade from jewel to kraken
- 07:31 AM Bug #18653 (Resolved): Improve compatibility between librbd + krbd for the data pool
- 07:31 AM Backport #18771 (Resolved): kraken: rbd: Improve compatibility between librbd + krbd for the data...
07/04/2017
- 08:51 PM Backport #19807 (In Progress): kraken: [test] remove hard-coded image name from TestLibRBD.Mirror
- 09:04 AM Backport #19807 (Need More Info): kraken: [test] remove hard-coded image name from TestLibRBD.Mirror
- Waiting for https://github.com/ceph/ceph/pull/14577 to be merged
- 04:38 PM Backport #19336 (Need More Info): kraken: rbd: refuse to use an ec pool that doesn't support over...
- Segmentation fault in unittest_librbd
- 08:53 AM Backport #19336 (In Progress): kraken: rbd: refuse to use an ec pool that doesn't support overwrites
- 10:04 AM Backport #20016 (Need More Info): kraken: rbd-nbd: kernel reported invalid device size (0, expect...
- Wait until https://github.com/ceph/ceph/pull/14540 is merged
- 10:01 AM Backport #20005 (Need More Info): kraken: Lock release requests not honored after watch is re-acq...
- Non-trivial backport; needs an RBD developer.
- 08:57 AM Backport #19621 (In Progress): kraken: rbd-nbd: add signal handler
- 08:54 AM Backport #19609 (In Progress): kraken: [librados_test_stub] cls_cxx_map_get_XYZ methods don't ret...
- 08:37 AM Bug #20054: librbd memory overhead when used with KVM
- Please ignore the strikethrough in the previous comment...formatting got me again :-)
- 08:35 AM Bug #20054: librbd memory overhead when used with KVM
- @Jason: I did the same test with my RBD image mapped to the hypervisor via rbd-nbd (with ceph caching enabled). When ...
- 08:37 AM Backport #19173 (In Progress): kraken: rbd: rbd_clone_copy_on_read ineffective with exclusive-lock
07/02/2017
- 04:20 AM Bug #20484: snap can be removed while it is been using by rbd-nbd device
- Solution:
https://github.com/ceph/ceph/pull/16057
- 03:43 AM Bug #20484 (Won't Fix): snap can be removed while it is been using by rbd-nbd device
- I mapped a snap to an rbd-nbd device with the command:
# rbd-nbd map rbd/test@snap
but if I remove the snap accidentally, then I ...
06/30/2017
- 12:13 PM Bug #20054: librbd memory overhead when used with KVM
- @Sebastian: just to eliminate the guest OS as a possibility, would it be possible for you to re-run using qemu-tcmu o...
- 06:11 AM Bug #20054: librbd memory overhead when used with KVM
- As I can not reproduce the high memory usage when using fio directly, I created a bug report on the qemu side ([1]). ...
- 01:30 AM Bug #20421 (Resolved): [openstack] cinder backup driver fails due to rbd python API change
06/29/2017
- 06:57 PM Bug #19035 (Fix Under Review): [rbd CLI] map with cephx disabled results in error message
- *PR*: https://github.com/ceph/ceph/pull/16024
- 06:16 PM Bug #19442 (Duplicate): rbd expord-diff aren't counting AioTruncate op correctly.
- Duplicate of tracker issue #19570 -- only an issue in hammer due to changes introduced in infernalis to support deep-...
- 06:11 PM Bug #20168 (Fix Under Review): IO work queue does not process failed lock request
- *PR*: https://github.com/ceph/ceph/pull/15860
- 06:09 PM Bug #18982 (Duplicate): How to get out of weird situation after rbd flatten?
- Seems like it's a duplicate of issue #18117
06/28/2017
- 04:07 PM Backport #20152 (In Progress): hammer: Potential IO hang if image is flattened while read request...
- https://github.com/ceph/ceph/pull/15980
- 08:33 AM Bug #20054: librbd memory overhead when used with KVM
- @Jason: I did the test again with an image without exclusive-lock and object map. I also changed iodepth to 16. I got...
06/27/2017
- 09:09 PM Documentation #20437 (In Progress): Convert downstream Ceph iSCSI documentation for upstream
- 09:06 PM Documentation #20437 (Resolved): Convert downstream Ceph iSCSI documentation for upstream
- Convert all the Ceph iSCSI downstream AsciiDoc (adoc) content to reStructuredText (rst) for upstream consumption.
- 03:28 PM Bug #20054: librbd memory overhead when used with KVM
- @Sebastian: Yes, the rbd engine definitely uses the iodepth setting. If running multiple jobs against the same image,...
- 08:58 AM Bug #20054: librbd memory overhead when used with KVM
- Sorry, again the formatting issue. Here the fio file again:...
- 08:56 AM Bug #20054: librbd memory overhead when used with KVM
- I tried to reproduce this issue with 'fio' (no qemu in the loop) over the weekend, but I was not able to get the same...
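As context for the fio-based reproduction attempts discussed in #20054, a minimal fio job file driving librbd directly (no QEMU in the loop) might look like the following; the pool, image, and client names are placeholders, not values taken from the report:

```ini
[global]
ioengine=rbd          ; fio's librbd engine, bypassing QEMU entirely
clientname=admin      ; cephx user (placeholder)
pool=rbd              ; pool name (placeholder)
rbdname=testimg       ; image name (placeholder)
rw=randwrite
bs=4k
iodepth=16            ; matches the iodepth mentioned in the reproduction attempts
direct=1

[job1]
size=1g
```

Running fio with the rbd engine exercises librbd's own memory behavior in isolation, which is exactly what is needed to decide whether the overhead belongs to librbd or to QEMU.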
- 09:39 AM Bug #20426: some generic options can not be passed by rbd-nbd
- Expect to fix it in this PR: https://github.com/ceph/ceph/pull/14135
- 09:38 AM Bug #20426 (Resolved): some generic options can not be passed by rbd-nbd
- # rbd-nbd --help
Usage: rbd-nbd [options] map <image-or-snap-spec>  Map an image to nbd device
               unmap ...