Activity
From 11/21/2017 to 12/20/2017
12/20/2017
- 09:56 PM Bug #18435 (Fix Under Review): [ FAILED ] TestLibRBD.RenameViaLockOwner
- *PR*: https://github.com/ceph/ceph/pull/19618
- 09:52 PM Bug #18435 (In Progress): [ FAILED ] TestLibRBD.RenameViaLockOwner
- 09:05 PM Bug #22363 (Need More Info): Watchers are lost on active RBD image with running client
- Need a gcore dump of the affected process or aggressive logging enabled (debug ms = 1, debug objecter = 20).
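  A minimal ceph.conf sketch of the requested logging settings, applied on the affected client (the section choice is an assumption; the options themselves are quoted above):

      [client]
      debug ms = 1
      debug objecter = 20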
- 09:01 PM Bug #22411 (Resolved): [test] valgrind of python tests results in "definitely lost" failure
- 01:46 PM Bug #22485 (Pending Backport): [test] rbd-mirror split brain test case can have a false-positive ...
- 11:53 AM Backport #22498 (Resolved): jewel: [rbd-mirror] new pools might not be detected
- https://github.com/ceph/ceph/pull/19644
- 11:53 AM Backport #22497 (Resolved): luminous: [rbd-mirror] new pools might not be detected
- https://github.com/ceph/ceph/pull/19625
12/19/2017
- 09:31 PM Bug #22485 (Fix Under Review): [test] rbd-mirror split brain test case can have a false-positive ...
- *PR*: https://github.com/ceph/ceph/pull/19604
- 08:50 PM Bug #22485 (Resolved): [test] rbd-mirror split brain test case can have a false-positive failure ...
- The "split-brain" test under teuthology has two running rbd-mirror daemons (one for each cluster) which can result in...
12/18/2017
- 03:09 PM Bug #20054: librbd memory overhead when used with KVM
- Christian Theune wrote:
> @Florian: So I guess extreme memory overhead (in the range of multiple GiBs) hasn't hit yo...
- 02:44 PM Feature #18480 (In Progress): rbd-mirror: support cloning an image from a non-primary snapshot
12/17/2017
- 09:03 AM Backport #22454 (In Progress): luminous: cluster resource agent ocf:ceph:rbd - wrong permissions
12/16/2017
12/15/2017
- 08:24 PM Backport #22454: luminous: cluster resource agent ocf:ceph:rbd - wrong permissions
- https://github.com/ceph/ceph/pull/19554
- 11:47 AM Backport #22454 (Resolved): luminous: cluster resource agent ocf:ceph:rbd - wrong permissions
- https://github.com/ceph/ceph/pull/19554
- 08:06 PM Bug #22461 (Fix Under Review): [rbd-mirror] new pools might not be detected
- *PR*: https://github.com/ceph/ceph/pull/19550
- 07:49 PM Bug #22461 (Resolved): [rbd-mirror] new pools might not be detected
- The 'Rados::pool_list2' command will not necessarily ask for the latest OSD map, so the list it returns might be out-...
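  For context, a minimal python-rados sketch of the symptom; the conffile path is an assumption. The pool listing is served from the client's cached OSD map, so a pool created moments earlier by another client may be missing until the map is refreshed:

      import rados

      # conffile path is an assumption
      cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
      cluster.connect()
      # may omit a just-created pool if the cached OSD map is stale
      print(cluster.list_pools())
      cluster.shutdown()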
- 02:08 PM Feature #21305: Just discard changed data since snapshot in "rbd rollback" command
- There is nothing RBD can do -- you can perhaps open a new ticket against RADOS to optimize snap rollback directly.
- 06:16 AM Feature #21305: Just discard changed data since snapshot in "rbd rollback" command
- I don't understand the reason why this could not be implemented. Maybe leave this issue open until someone is able to fix it?
- 02:43 AM Feature #21305 (Rejected): Just discard changed data since snapshot in "rbd rollback" command
- I quickly hacked this up and it turns out the OSDs don't actually allow you to do this. The low-level, internal trans...
- 01:32 AM Feature #21305 (In Progress): Just discard changed data since snapshot in "rbd rollback" command
- 01:32 AM Feature #21216 (Closed): Method to release all rbd locks
- 01:27 AM Feature #4086 (Resolved): rbd: rate-limiting
- 01:26 AM Bug #18938 (Won't Fix): Unable to build 11.2.0 under i686
- Closing since 11.x.y is an EOLed release.
- 01:24 AM Bug #22119 (Can't reproduce): Possible deadlock in librbd
- 01:23 AM Bug #22411 (Fix Under Review): [test] valgrind of python tests results in "definitely lost" failure
- *PR*: https://github.com/ceph/teuthology/pull/1139
12/14/2017
- 03:49 PM Bug #20054: librbd memory overhead when used with KVM
- @Florian: So I guess extreme memory overhead (in the range of multiple GiBs) hasn't hit you on a noticeable scale?
- 02:33 AM Bug #22362 (Pending Backport): cluster resource agent ocf:ceph:rbd - wrong permissions
- 12:17 AM Backport #22395 (In Progress): luminous: librbd: cannot clone all image-metas if we have more tha...
- 12:16 AM Backport #22393 (In Progress): luminous: librbd: cannot copy all image-metas if we have more than...
12/13/2017
- 11:07 PM Backport #22393: luminous: librbd: cannot copy all image-metas if we have more than 64 key/value ...
- https://github.com/ceph/ceph/pull/19504
- 11:04 PM Backport #22395: luminous: librbd: cannot clone all image-metas if we have more than 64 key/value...
- https://github.com/ceph/ceph/pull/19503
- 05:17 PM Bug #22362 (Fix Under Review): cluster resource agent ocf:ceph:rbd - wrong permissions
- https://github.com/ceph/ceph/pull/19494
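  Until the packaged fix lands, a hedged sketch of the manual workaround, assuming the standard OCF resource-agent path:

      # make the agent executable so Pacemaker will recognize it
      chmod 755 /usr/lib/ocf/resource.d/ceph/rbd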
- 02:01 PM Backport #21788 (In Progress): luminous: [journal] image-meta set event should refresh the image ...
- 01:55 PM Backport #21644 (In Progress): luminous: [rbd-mirror] image-meta is not replicated as part of ini...
- 01:52 PM Backport #22375 (In Progress): luminous: ceph 12.2.x Luminous: Build fails with --without-radosgw
- 12:40 PM Backport #22376 (In Progress): luminous: Python RBD metadata_get does not work.
- 08:16 AM Bug #20054: librbd memory overhead when used with KVM
- @Christian, I'll leave it to Jason to continue the conversation about memory allocation, but I can answer this one:
...
- 06:28 AM Bug #20054: librbd memory overhead when used with KVM
- That's interesting. That does sound like an almost DOSable exploit. Is the size of those allocations bounded in _any_...
12/12/2017
- 07:36 PM Bug #22411 (Resolved): [test] valgrind of python tests results in "definitely lost" failure
- ...
- 03:48 PM Bug #20054: librbd memory overhead when used with KVM
- @Christian: The cache is zero-copy in that multiple (object size) extents can share a reference to the same backing m...
- 03:29 PM Bug #20054: librbd memory overhead when used with KVM
- That # is for whenever I ran multiple tests without killing the Qemu process or rebooting the guest, but running the test ...
- 03:25 PM Bug #20054: librbd memory overhead when used with KVM
- @Christian: what does the "# of test in same VM" column represent? Also, FYI, when you configure QEMU in writeback mo...
- 11:30 AM Bug #20054: librbd memory overhead when used with KVM
- Alright. I managed to reproduce this with a reasonably simple setup. I had to use this within Qemu as fio on the host...
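  A hypothetical fio job illustrating the kind of in-guest load described in this thread (4k random writes; the device path and all tunables are assumptions, not the reporter's exact setup):

      [global]
      ioengine=libaio
      direct=1
      time_based=1
      runtime=300

      [guest-randwrite-4k]
      filename=/dev/vdb
      rw=randwrite
      bs=4k
      iodepth=32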
- 11:09 AM Backport #21646 (In Progress): luminous: Image-meta should be dynamically refreshed
- 08:44 AM Backport #22396 (Resolved): jewel: librbd: cannot clone all image-metas if we have more than 64 k...
- https://github.com/ceph/ceph/pull/21228
- 08:44 AM Backport #22395 (Resolved): luminous: librbd: cannot clone all image-metas if we have more than 6...
- https://github.com/ceph/ceph/pull/19503
- 08:44 AM Backport #22394 (Resolved): jewel: librbd: cannot copy all image-metas if we have more than 64 ke...
- https://github.com/ceph/ceph/pull/21203
- 08:44 AM Backport #22393 (Resolved): luminous: librbd: cannot copy all image-metas if we have more than 64...
- https://github.com/ceph/ceph/pull/19504
- 08:42 AM Backport #22376 (Resolved): luminous: Python RBD metadata_get does not work.
- https://github.com/ceph/ceph/pull/19479
- 08:42 AM Backport #22375 (Resolved): luminous: ceph 12.2.x Luminous: Build fails with --without-radosgw
- https://github.com/ceph/ceph/pull/19483
12/11/2017
- 09:39 PM Bug #22362: cluster resource agent ocf:ceph:rbd - wrong permissions
- I think Florian built these; maybe ping him about them?
- 12:21 PM Bug #22362 (Resolved): cluster resource agent ocf:ceph:rbd - wrong permissions
- The ocf:ceph:rbd resource agent is not recognized by Pacemaker on CentOS 7 because of wrong file permissions:
pack...
- 05:39 PM Feature #22333 (Fix Under Review): rbd-nbd: support optionally setting the device timeout
- *PR*: https://github.com/ceph/ceph/pull/19436
- 05:29 PM Feature #22333 (In Progress): rbd-nbd: support optionally setting the device timeout
- 01:14 PM Bug #22363 (Resolved): Watchers are lost on active RBD image with running client
- See: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-November/022850.html
The issue is observed on a Jewe...
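  For anyone hitting this, watchers can be inspected from the CLI (pool, image, and header-object names below are placeholders):

      rbd status rbd/myimage                           # lists current watchers
      rados -p rbd listwatchers rbd_header.<image-id>  # lower-level check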
12/08/2017
- 01:00 PM Bug #20054: librbd memory overhead when used with KVM
- @Christian:
> Interestingly does that mean that larger Ceph clusters will have an increasing memory overhead for ...
- 12:38 PM Bug #20054: librbd memory overhead when used with KVM
- So I tried freeze/thaw/snapshot/delete snapshot cycles while under heavy load from small IOs (4k) and large IOs (512k...
- 12:34 PM Bug #20054: librbd memory overhead when used with KVM
- Dang! :)
We did some calculations yesterday and came up with an average rate of 200 KiB per 10 minutes of "leak". T...
- 10:37 AM Bug #18938 (New): Unable to build 11.2.0 under i686
- Sorry, Sebastien! I missed your latest comment. It seems I fixed the issue reported by Romain, but not yours. I am reop...
12/07/2017
- 09:42 PM Bug #20054: librbd memory overhead when used with KVM
- @Christian: I really need a reproducer since w/o the heap profiling, I cannot determine where the allocations are occ...
- 06:20 PM Bug #20054: librbd memory overhead when used with KVM
- @jason: Alright.
I extracted all memory regions from a running VM by getting a coredump and looking at the smaps segm...
- 02:48 PM Bug #20054: librbd memory overhead when used with KVM
- I wasn't able to reproduce this at all. However, here's something I just started doing with some help from a Qemu dev...
- 02:34 PM Bug #22306 (Pending Backport): Python RBD metadata_get does not work.
12/06/2017
- 02:23 PM Feature #22333 (Resolved): rbd-nbd: support optionally setting the device timeout
- The kernel will default to a 30 second request timeout. Allow the user to optionally specify an alternate timeout.
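  A usage sketch once the feature lands, assuming the option is spelled --timeout as proposed (verify with rbd-nbd --help; image spec is a placeholder):

      # map with a 120-second request timeout instead of the kernel's 30s default
      rbd-nbd map --timeout 120 rbd/myimage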
- 10:18 AM Bug #22321 (Pending Backport): ceph 12.2.x Luminous: Build fails with --without-radosgw
12/05/2017
- 02:59 PM Bug #22321 (Fix Under Review): ceph 12.2.x Luminous: Build fails with --without-radosgw
- *PR*: https://github.com/ceph/ceph/pull/19343
- 02:11 PM Bug #22321 (In Progress): ceph 12.2.x Luminous: Build fails with --without-radosgw
- 12:32 PM Bug #22321 (Resolved): ceph 12.2.x Luminous: Build fails with --without-radosgw
- [100%] Linking CXX executable ../bin/ceph-dencoder fails when building with --without-radosgw:
CMakeFiles/ceph-de...
- 02:07 PM Feature #21305: Just discard changed data since snapshot in "rbd rollback" command
- Yes, the ability to do a FAST rollback to the latest snapshot would be a huge improvement, IMHO.
- 02:00 PM Feature #21305: Just discard changed data since snapshot in "rbd rollback" command
- Only possible if you are rolling back to the most recent snapshot. If you are rolling back to an older snapshot, you ...
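  A sketch of the idea being discussed: the extents written since the most recent snapshot can already be listed from the CLI, and a fast rollback would conceptually discard just those extents (no such rollback command exists; image and snapshot names are placeholders):

      rbd diff --from-snap snap1 rbd/myimage   # lists extents changed since snap1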
- 02:05 PM Feature #21216: Method to release all rbd locks
- My apologies, just came across that note a few days ago, missed it when doing updates. That fixed the issue, many tha...
- 02:02 PM Feature #21216 (Need More Info): Method to release all rbd locks
- @Michael: librbd will automatically delete old, stale locks when it attempts to acquire the lock. It sounds like your...
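  For reference, stale locks can also be inspected and removed by hand (image spec is a placeholder; the lock-id and locker come from the list output):

      rbd lock list rbd/myimage
      rbd lock remove rbd/myimage "<lock-id>" <locker>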
- 01:55 PM Bug #21814 (Pending Backport): librbd: cannot clone all image-metas if we have more than 64 key/v...
- 01:55 PM Bug #21815 (Pending Backport): librbd: cannot copy all image-metas if we have more than 64 key/va...
- 11:33 AM Bug #22306: Python RBD metadata_get does not work.
- Mark, I assume the 'backup-skip' metadata key did not exist before the metadata_get call? Otherwise I would not expect the er...
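  A minimal python-rbd sketch of the call under discussion; the conffile path, pool, image, and key are placeholders. On the affected builds the call fails regardless, which is this ticket's bug:

      import rados
      import rbd

      cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
      cluster.connect()
      ioctx = cluster.open_ioctx('rbd')
      try:
          with rbd.Image(ioctx, 'myimage') as image:
              # a genuinely unset key also surfaces as an rbd error
              print(image.metadata_get('backup-skip'))
      except rbd.Error as e:
          print('metadata_get failed: %s' % e)
      finally:
          ioctx.close()
          cluster.shutdown()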
- 10:31 AM Bug #22306 (In Progress): Python RBD metadata_get does not work.
12/04/2017
- 04:38 PM Backport #21700: luminous: rbd-mirror: Allow a different data-pool to be used on the secondary cl...
- Thanks! Looking forward to having this in 12.2.3 :-).
- 03:34 AM Backport #21700 (In Progress): luminous: rbd-mirror: Allow a different data-pool to be used on th...
- 03:34 AM Backport #21700: luminous: rbd-mirror: Allow a different data-pool to be used on the secondary cl...
- https://github.com/ceph/ceph/pull/19305
- 02:10 AM Backport #21700: luminous: rbd-mirror: Allow a different data-pool to be used on the secondary cl...
- I'm working on it.
12/03/2017
- 03:11 PM Bug #22306 (Resolved): Python RBD metadata_get does not work.
- Here is the part of traceback:...
12/02/2017
- 01:31 PM Bug #22271 (Duplicate): vdbench's IO drop to 0 when resize the image at the same time
- 02:14 AM Bug #22271: vdbench's IO drop to 0 when resize the image at the same time
- Ah, I think luminous has already backported this patch; it's actually still in the open state.
I rebuilt with thi...
12/01/2017
- 12:07 AM Backport #21700: luminous: rbd-mirror: Allow a different data-pool to be used on the secondary cl...
- @Adam: 12.2.3 would definitely be the earliest. If you can open a backport PR, it would help to ensure it makes 12.2.3.
11/30/2017
- 11:54 PM Backport #21700: luminous: rbd-mirror: Allow a different data-pool to be used on the secondary cl...
- Hi folks,
Any timeline on this? We were hoping it would make 12.2.2, but I think we've missed the boat there. I co...
- 03:27 PM Bug #22271 (Need More Info): vdbench's IO drop to 0 when resize the image at the same time
- 03:27 PM Bug #22271: vdbench's IO drop to 0 when resize the image at the same time
- Recently Jason fixed a deadlock triggered in rbd-nbd by resize event [1]. Do you have a chance to try librbd from the...
- 01:27 PM Bug #22253 (Can't reproduce): "rbd info" crashed: stack smashing detected
- 01:01 PM Bug #22253: "rbd info" crashed: stack smashing detected
- So, I recompiled 12.2.1 and can no longer reproduce this one. It seems to be gone now.
- 01:26 PM Bug #22059: It's impossible to add rbd image-meta key/value on opened image
- Hmm -- you can see that the rbd CLI properly sent a lock request to the current owner:...
- 05:26 AM Bug #22059: It's impossible to add rbd image-meta key/value on opened image
- I have attached the file.
11/29/2017
- 04:15 PM Bug #22059: It's impossible to add rbd image-meta key/value on opened image
- @Марк: please run "rbd --debug-rbd=20 image-meta set <image> <key> <value>" and attach the generated log messages fro...
- 09:55 AM Bug #22271 (Duplicate): vdbench's IO drop to 0 when resize the image at the same time
- ...
11/28/2017
- 12:51 AM Backport #22209 (In Progress): jewel: 'rbd du' on empty pool results in output of "specified image"
11/27/2017
- 11:26 PM Backport #22209: jewel: 'rbd du' on empty pool results in output of "specified image"
- https://github.com/ceph/ceph/pull/19186
- 04:46 PM Bug #22253: "rbd info" crashed: stack smashing detected
- @Sebastian: yup, that's why you would copy the "ceph.conf" so that the VM can connect to your vstart-created cluster.
- 04:42 PM Bug #22253: "rbd info" crashed: stack smashing detected
- Jason Dillaman wrote:
> @Sebastian: vstart is for development. Just install the Ceph client packages on a VM, copy t...
- 04:33 PM Bug #22253: "rbd info" crashed: stack smashing detected
- @Sebastian: vstart is for development. Just install the Ceph client packages on a VM, copy the vstart-generated ceph....
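  A sketch of that setup; the package name and paths are assumptions to adjust for your distro and build tree:

      # on the VM
      yum install -y ceph-common
      # copy the vstart-generated config and keyring from the build host
      scp buildhost:ceph/build/ceph.conf /etc/ceph/ceph.conf
      scp buildhost:ceph/build/keyring /etc/ceph/ceph.client.admin.keyring
      rbd -p rbd ls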
- 03:54 PM Bug #22253: "rbd info" crashed: stack smashing detected
- Jason Dillaman wrote:
> Can you reproduce on distro or Ceph-provided packages instead of your home-grown build?
I...
- 03:48 PM Bug #22253: "rbd info" crashed: stack smashing detected
- @Sebastian: that Valgrind output doesn't help since it failed on an "unknown instruction" error. Can you reproduce on...
- 03:30 PM Bug #22253: "rbd info" crashed: stack smashing detected
- Added valgrind output.
@Jason, should I recompile and retest on the latest luminous branch or on v12.2.1?
- 03:14 PM Bug #22253: "rbd info" crashed: stack smashing detected
- @Sebastian: without line numbers that actually align to the code at the mentioned version, there really isn't much I ...
- 03:03 PM Bug #22253: "rbd info" crashed: stack smashing detected
- I don't think it is easy to reproduce it, because
* That RBD is untouched, thus no data was ever written to this R...
- 02:52 PM Bug #22253 (Need More Info): "rbd info" crashed: stack smashing detected
- @Sebastian: please retest on the latest available version. Your line numbers do not align with v12.2.0.
- 02:51 PM Bug #22253: "rbd info" crashed: stack smashing detected
- my correct version number is:...
- 02:37 PM Bug #22253 (Can't reproduce): "rbd info" crashed: stack smashing detected
- Environment: quite small vstart cluster.
This is the stack trace:...
11/25/2017
- 02:07 PM Bug #22055 (Duplicate): ghost rbd snapshot
- Duplicate of issue #19413
11/23/2017
- 11:46 PM Backport #22174 (In Progress): luminous: possible deadlock in various maintenance operations
- 11:45 PM Backport #22174: luminous: possible deadlock in various maintenance operations
- https://github.com/ceph/ceph/pull/19123
- 09:53 AM Backport #22173 (In Progress): jewel: [rbd-nbd] Fedora does not register resize events
- 09:53 AM Backport #22173: jewel: [rbd-nbd] Fedora does not register resize events
- https://github.com/ceph/ceph/pull/19115
- 02:00 AM Backport #22208 (In Progress): luminous: 'rbd du' on empty pool results in output of "specified i...
- 01:58 AM Backport #22208: luminous: 'rbd du' on empty pool results in output of "specified image"
- https://github.com/ceph/ceph/pull/19107
11/22/2017
- 11:22 PM Backport #22170 (In Progress): jewel: *** Caught signal (Segmentation fault) ** in thread thread_...
- Built gcc test package with https://github.com/gcc-mirror/gcc/commit/c7db9cf55ae4022f134624db81cc70d694079b6c patch a...
- 02:35 AM Backport #22170 (New): jewel: *** Caught signal (Segmentation fault) ** in thread thread_name:tp_...
- I'm working on it. Hitting jewel internal compiler issue on fc26:
In file included from osd/ECBackend.cc:24:0:
os...
11/21/2017
- 07:22 PM Feature #15322 (Resolved): Support asynchronous v2 image deletion
- 06:41 PM Backport #22209 (Resolved): jewel: 'rbd du' on empty pool results in output of "specified image"
- https://github.com/ceph/ceph/pull/19186
- 06:41 PM Backport #22208 (Resolved): luminous: 'rbd du' on empty pool results in output of "specified image"
- https://github.com/ceph/ceph/pull/19107
- 11:06 AM Backport #22174 (New): luminous: possible deadlock in various maintenance operations
- 01:48 AM Backport #22174 (In Progress): luminous: possible deadlock in various maintenance operations
- 10:49 AM Bug #22200 (Pending Backport): 'rbd du' on empty pool results in output of "specified image"
- 10:33 AM Bug #22059: It's impossible to add rbd image-meta key/value on opened image
- Yes, the same. Both are Luminous 12.2.1.
BUT! On some images it works, and on some images it doesn't. Please tell ...
- 05:37 AM Backport #22172: luminous: [rbd-nbd] Fedora does not register resize events
- https://github.com/ceph/ceph/pull/19066
- 01:47 AM Backport #22172 (In Progress): luminous: [rbd-nbd] Fedora does not register resize events