Activity
From 12/10/2017 to 01/08/2018
01/08/2018
- 11:41 PM Backport #22593: luminous: [ FAILED ] TestLibRBD.RenameViaLockOwner
- I'm on it.
01/06/2018
- 09:10 AM Feature #22605 (Resolved): Create per-object-prefix performance counters
- The problem:
There is no way to collect per-RBD-image runtime statistics (IOPS, MB/s, and so on).
Solution:
1. I...
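The feature above asks for per-image rates such as IOPS and MB/s derived from cumulative counters. As a minimal sketch of that derivation (the counter key names `ops` and `bytes` are illustrative, not the actual perf-counter names), two snapshots taken some interval apart are enough:

```python
def rates(prev, curr, interval_s):
    """Derive IOPS and MB/s from two snapshots of cumulative counters.

    prev/curr are dicts with hypothetical keys 'ops' (operation count)
    and 'bytes' (bytes transferred); interval_s is the seconds between
    the two snapshots.
    """
    iops = (curr["ops"] - prev["ops"]) / interval_s
    mbps = (curr["bytes"] - prev["bytes"]) / interval_s / (1024 * 1024)
    return iops, mbps
```

For example, 100 ops and 10 MiB moved over one second yields 100 IOPS and 10 MB/s.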
01/05/2018
- 01:49 PM Backport #21691 (In Progress): jewel: [qa] rbd_mirror_helpers.sh request_resync_image function sa...
- 01:48 PM Backport #21690 (In Progress): luminous: [qa] rbd_mirror_helpers.sh request_resync_image function...
- 01:47 PM Backport #21688 (In Progress): luminous: Possible deadlock in 'list_children' when refresh is req...
- 01:45 PM Bug #21353 (Resolved): upgrade to luminous results in seemingly corrupt images in QEMU
- doc/release-notes.rst is maintained in master only - dropping the backport
- 01:39 PM Bug #21559 (Resolved): [rbd-mirror] resync isn't properly deleting non-primary image
- 01:39 PM Backport #21640 (Resolved): luminous: [rbd-mirror] resync isn't properly deleting non-primary image
- 01:38 PM Bug #21567 (Resolved): rbd does not delete snaps in (ec) data pool
- 01:38 PM Backport #21639 (Resolved): luminous: rbd does not delete snaps in (ec) data pool
- 01:22 PM Backport #21642 (In Progress): jewel: rbd ls -l crashes with SIGABRT
- 01:20 PM Backport #21641 (In Progress): luminous: rbd ls -l crashes with SIGABRT
- 12:20 PM Backport #22594 (Resolved): jewel: [ FAILED ] TestLibRBD.RenameViaLockOwner
- https://github.com/ceph/ceph/pull/19855
- 12:20 PM Backport #22593 (Resolved): luminous: [ FAILED ] TestLibRBD.RenameViaLockOwner
- https://github.com/ceph/ceph/pull/19853
- 12:16 PM Backport #22578 (Resolved): jewel: [test] rbd-mirror split brain test case can have a false-posit...
- https://github.com/ceph/ceph/pull/21205
- 12:16 PM Backport #22577 (Resolved): luminous: [test] rbd-mirror split brain test case can have a false-po...
- https://github.com/ceph/ceph/pull/20205
01/03/2018
- 12:12 PM Feature #18480 (Resolved): rbd-mirror: support cloning an image from a non-primary snapshot
12/30/2017
- 08:13 PM Feature #18480 (Fix Under Review): rbd-mirror: support cloning an image from a non-primary snapshot
- *PR*: https://github.com/ceph/ceph/pull/19724
12/28/2017
- 08:20 PM Feature #20762 (Fix Under Review): rbdmap should support other block devices
- PR: https://github.com/ceph/ceph/pull/19711
- 02:12 PM Feature #20762 (In Progress): rbdmap should support other block devices
12/24/2017
- 07:10 PM Documentation #22533: [iscsi-gw] Incorrect package version is specified
- I meant that there are no packages to install.
- 06:14 PM Bug #18435 (Pending Backport): [ FAILED ] TestLibRBD.RenameViaLockOwner
12/23/2017
- 07:24 PM Feature #22333 (Resolved): rbd-nbd: support optionally setting the device timeout
- 05:18 PM Documentation #22533 (Resolved): [iscsi-gw] Incorrect package version is specified
- It is stated in docs/rbd/iscsi-target-cli.rst that *tcmu-runner-1.3.0 or newer package* should be installed. On the o...
12/22/2017
- 04:32 AM Backport #22498 (In Progress): jewel: [rbd-mirror] new pools might not be detected
- 04:30 AM Backport #22498: jewel: [rbd-mirror] new pools might not be detected
- https://github.com/ceph/ceph/pull/19644
- 02:36 AM Backport #22498: jewel: [rbd-mirror] new pools might not be detected
- I'm working on it
12/21/2017
- 10:01 AM Backport #22497: luminous: [rbd-mirror] new pools might not be detected
- https://github.com/ceph/ceph/pull/19625
12/20/2017
- 09:56 PM Bug #18435 (Fix Under Review): [ FAILED ] TestLibRBD.RenameViaLockOwner
- *PR*: https://github.com/ceph/ceph/pull/19618
- 09:52 PM Bug #18435 (In Progress): [ FAILED ] TestLibRBD.RenameViaLockOwner
- 09:05 PM Bug #22363 (Need More Info): Watchers are lost on active RBD image with running client
- Need a gcore dump of the affected process or aggressive logging enabled (debug ms = 1, debug objecter = 20).
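The logging requested above can be enabled in ceph.conf on the host running the affected client (a sketch; the `[client]` section is the usual place for librbd client options, adjust to your deployment):

```ini
[client]
    debug ms = 1
    debug objecter = 20
```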
- 09:01 PM Bug #22411 (Resolved): [test] valgrind of python tests results in "definitely lost" failure
- 01:46 PM Bug #22485 (Pending Backport): [test] rbd-mirror split brain test case can have a false-positive ...
- 11:53 AM Backport #22498 (Resolved): jewel: [rbd-mirror] new pools might not be detected
- https://github.com/ceph/ceph/pull/19644
- 11:53 AM Backport #22497 (Resolved): luminous: [rbd-mirror] new pools might not be detected
- https://github.com/ceph/ceph/pull/19625
12/19/2017
- 09:31 PM Bug #22485 (Fix Under Review): [test] rbd-mirror split brain test case can have a false-positive ...
- *PR*: https://github.com/ceph/ceph/pull/19604
- 08:50 PM Bug #22485 (Resolved): [test] rbd-mirror split brain test case can have a false-positive failure ...
- The "split-brain" test under teuthology has two running rbd-mirror daemons (one for each cluster) which can result in...
12/18/2017
- 03:09 PM Bug #20054: librbd memory overhead when used with KVM
- Christian Theune wrote:
> @Florian: So I guess extreme memory overhead (in the range of multiple GiBs) hasn't hit yo...
- 02:44 PM Feature #18480 (In Progress): rbd-mirror: support cloning an image from a non-primary snapshot
12/17/2017
- 09:03 AM Backport #22454 (In Progress): luminous: cluster resource agent ocf:ceph:rbd - wrong permissions
12/16/2017
12/15/2017
- 08:24 PM Backport #22454: luminous: cluster resource agent ocf:ceph:rbd - wrong permissions
- https://github.com/ceph/ceph/pull/19554
- 11:47 AM Backport #22454 (Resolved): luminous: cluster resource agent ocf:ceph:rbd - wrong permissions
- https://github.com/ceph/ceph/pull/19554
- 08:06 PM Bug #22461 (Fix Under Review): [rbd-mirror] new pools might not be detected
- *PR*: https://github.com/ceph/ceph/pull/19550
- 07:49 PM Bug #22461 (Resolved): [rbd-mirror] new pools might not be detected
- The 'Rados::pool_list2' command will not necessarily ask for the latest OSD map, so the list it returns might be out-...
- 02:08 PM Feature #21305: Just discard changed data since snapshot in "rbd rollback" command
- There is nothing RBD can do -- you can perhaps open a new ticket against RADOS to optimize snap rollback directly.
- 06:16 AM Feature #21305: Just discard changed data since snapshot in "rbd rollback" command
- I don't understand why this can't be implemented. Maybe leave this issue open until someone is able to fix it?
- 02:43 AM Feature #21305 (Rejected): Just discard changed data since snapshot in "rbd rollback" command
- I quickly hacked this up and it turns out the OSDs don't actually allow you to do this. The low-level, internal trans...
- 01:32 AM Feature #21305 (In Progress): Just discard changed data since snapshot in "rbd rollback" command
- 01:32 AM Feature #21216 (Closed): Method to release all rbd locks
- 01:27 AM Feature #4086 (Resolved): rbd: rate-limiting
- 01:26 AM Bug #18938 (Won't Fix): Unable to build 11.2.0 under i686
- Closing since 11.x.y is an EOLed release.
- 01:24 AM Bug #22119 (Can't reproduce): Possible deadlock in librbd
- 01:23 AM Bug #22411 (Fix Under Review): [test] valgrind of python tests results in "definitely lost" failure
- *PR*: https://github.com/ceph/teuthology/pull/1139
12/14/2017
- 03:49 PM Bug #20054: librbd memory overhead when used with KVM
- @Florian: So I guess extreme memory overhead (in the range of multiple GiBs) hasn't hit you on a noticeable scale?
- 02:33 AM Bug #22362 (Pending Backport): cluster resource agent ocf:ceph:rbd - wrong permissions
- 12:17 AM Backport #22395 (In Progress): luminous: librbd: cannot clone all image-metas if we have more tha...
- 12:16 AM Backport #22393 (In Progress): luminous: librbd: cannot copy all image-metas if we have more than...
12/13/2017
- 11:07 PM Backport #22393: luminous: librbd: cannot copy all image-metas if we have more than 64 key/value ...
- https://github.com/ceph/ceph/pull/19504
- 11:04 PM Backport #22395: luminous: librbd: cannot clone all image-metas if we have more than 64 key/value...
- https://github.com/ceph/ceph/pull/19503
- 05:17 PM Bug #22362 (Fix Under Review): cluster resource agent ocf:ceph:rbd - wrong permissions
- https://github.com/ceph/ceph/pull/19494
- 02:01 PM Backport #21788 (In Progress): luminous: [journal] image-meta set event should refresh the image ...
- 01:55 PM Backport #21644 (In Progress): luminous: [rbd-mirror] image-meta is not replicated as part of ini...
- 01:52 PM Backport #22375 (In Progress): luminous: ceph 12.2.x Luminous: Build fails with --without-radosgw
- 12:40 PM Backport #22376 (In Progress): luminous: Python RBD metadata_get does not work.
- 08:16 AM Bug #20054: librbd memory overhead when used with KVM
- @Christian, I'll leave it to Jason to continue the conversation about memory allocation, but I can answer this one:
...
- 06:28 AM Bug #20054: librbd memory overhead when used with KVM
- That's interesting. That does sound like an almost DOSable exploit. Is the size of those allocations bounded in _any_...
12/12/2017
- 07:36 PM Bug #22411 (Resolved): [test] valgrind of python tests results in "definitely lost" failure
- ...
- 03:48 PM Bug #20054: librbd memory overhead when used with KVM
- @Christian: The cache is zero-copy in that multiple (object size) extents can share a reference to the same backing m...
- 03:29 PM Bug #20054: librbd memory overhead when used with KVM
- That # is whenever I ran multiple tests without killing the Qemu process or rebooting the guest but running the test ...
- 03:25 PM Bug #20054: librbd memory overhead when used with KVM
- @Christian: what does the "# of test in same VM" column represent? Also, FYI, when you configure QEMU in writeback mo...
- 11:30 AM Bug #20054: librbd memory overhead when used with KVM
- Alright. I managed to reproduce this with a reasonably simple setup. I had to use this within Qemu as fio on the host...
- 11:09 AM Backport #21646 (In Progress): luminous: Image-meta should be dynamically refreshed
- 08:44 AM Backport #22396 (Resolved): jewel: librbd: cannot clone all image-metas if we have more than 64 k...
- https://github.com/ceph/ceph/pull/21228
- 08:44 AM Backport #22395 (Resolved): luminous: librbd: cannot clone all image-metas if we have more than 6...
- https://github.com/ceph/ceph/pull/19503
- 08:44 AM Backport #22394 (Resolved): jewel: librbd: cannot copy all image-metas if we have more than 64 ke...
- https://github.com/ceph/ceph/pull/21203
- 08:44 AM Backport #22393 (Resolved): luminous: librbd: cannot copy all image-metas if we have more than 64...
- https://github.com/ceph/ceph/pull/19504
- 08:42 AM Backport #22376 (Resolved): luminous: Python RBD metadata_get does not work.
- https://github.com/ceph/ceph/pull/19479
- 08:42 AM Backport #22375 (Resolved): luminous: ceph 12.2.x Luminous: Build fails with --without-radosgw
- https://github.com/ceph/ceph/pull/19483
12/11/2017
- 09:39 PM Bug #22362: cluster resource agent ocf:ceph:rbd - wrong permissions
- I think Florian built these; maybe ping him about them?
- 12:21 PM Bug #22362 (Resolved): cluster resource agent ocf:ceph:rbd - wrong permissions
- the ocf:ceph:rbd resource agent is not recognized by pacemaker on CentOS 7 because of wrong file permissions:
pack...
- 05:39 PM Feature #22333 (Fix Under Review): rbd-nbd: support optionally setting the device timeout
- PR: https://github.com/ceph/ceph/pull/19436
- 05:29 PM Feature #22333 (In Progress): rbd-nbd: support optionally setting the device timeout
- 01:14 PM Bug #22363 (Resolved): Watchers are lost on active RBD image with running client
- See: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-November/022850.html
The issue is observed on a Jewe...