Activity

From 11/21/2017 to 12/20/2017

12/20/2017

09:56 PM Bug #18435 (Fix Under Review): [ FAILED ] TestLibRBD.RenameViaLockOwner
*PR*: https://github.com/ceph/ceph/pull/19618 Jason Dillaman
09:52 PM Bug #18435 (In Progress): [ FAILED ] TestLibRBD.RenameViaLockOwner
Jason Dillaman
09:05 PM Bug #22363 (Need More Info): Watchers are lost on active RBD image with running client
Need a gcore dump of the affected process or aggressive logging enabled (debug ms = 1, debug objecter = 20). Jason Dillaman
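A minimal sketch of the logging requested above, assuming the affected librbd client reads /etc/ceph/ceph.conf and can be restarted to pick up the change (the section name and log path are illustrative):
<pre>
[client]
    # aggressive client-side logging as requested in the ticket
    debug ms = 1
    debug objecter = 20
    # illustrative log destination; $pid keeps per-process logs separate
    log file = /var/log/ceph/client.$pid.log
</pre>
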
09:01 PM Bug #22411 (Resolved): [test] valgrind of python tests results in "definitely lost" failure
Jason Dillaman
01:46 PM Bug #22485 (Pending Backport): [test] rbd-mirror split brain test case can have a false-positive ...
Mykola Golub
11:53 AM Backport #22498 (Resolved): jewel: [rbd-mirror] new pools might not be detected
https://github.com/ceph/ceph/pull/19644 Nathan Cutler
11:53 AM Backport #22497 (Resolved): luminous: [rbd-mirror] new pools might not be detected
https://github.com/ceph/ceph/pull/19625 Nathan Cutler

12/19/2017

09:31 PM Bug #22485 (Fix Under Review): [test] rbd-mirror split brain test case can have a false-positive ...
*PR*: https://github.com/ceph/ceph/pull/19604 Jason Dillaman
08:50 PM Bug #22485 (Resolved): [test] rbd-mirror split brain test case can have a false-positive failure ...
The "split-brain" test under teuthology has two running rbd-mirror daemons (one for each cluster) which can result in... Jason Dillaman

12/18/2017

03:09 PM Bug #20054: librbd memory overhead when used with KVM
Christian Theune wrote:
> @Florian: So I guess extreme memory overhead (in the range of multiple GiBs) hasn't hit yo...
Florian Haas
02:44 PM Feature #18480 (In Progress): rbd-mirror: support cloning an image from a non-primary snapshot
Jason Dillaman

12/17/2017

09:03 AM Backport #22454 (In Progress): luminous: cluster resource agent ocf:ceph:rbd - wrong permissions
Nathan Cutler

12/16/2017

02:33 PM Bug #22461 (Pending Backport): [rbd-mirror] new pools might not be detected
Mykola Golub

12/15/2017

08:24 PM Backport #22454: luminous: cluster resource agent ocf:ceph:rbd - wrong permissions
https://github.com/ceph/ceph/pull/19554 Shinobu Kinjo
11:47 AM Backport #22454 (Resolved): luminous: cluster resource agent ocf:ceph:rbd - wrong permissions
https://github.com/ceph/ceph/pull/19554 Nathan Cutler
08:06 PM Bug #22461 (Fix Under Review): [rbd-mirror] new pools might not be detected
*PR*: https://github.com/ceph/ceph/pull/19550 Jason Dillaman
07:49 PM Bug #22461 (Resolved): [rbd-mirror] new pools might not be detected
The 'Rados::pool_list2' command will not necessarily ask for the latest OSD map, so the list it returns might be out-... Jason Dillaman
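An illustrative view of the same staleness from the librados python bindings (list_pools() is part of the bindings; whether wait_for_latest_osdmap() is exposed by your bindings version is an assumption here):
<pre>
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # illustrative conf path
cluster.connect()

# The pool list is served from the client's cached OSD map, so a pool
# created moments ago elsewhere may be missing -- the staleness this
# ticket describes for rbd-mirror's pool polling.
print(cluster.list_pools())

# Assumed helper: request the latest OSD map first so new pools show up.
cluster.wait_for_latest_osdmap()
print(cluster.list_pools())

cluster.shutdown()
</pre>
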
02:08 PM Feature #21305: Just discard changed data since snapshot in "rbd rollback" command
There is nothing RBD can do -- you can perhaps open a new ticket against RADOS to optimize snap rollback directly. Jason Dillaman
06:16 AM Feature #21305: Just discard changed data since snapshot in "rbd rollback" command
I don't understand why this can't be implemented. Maybe keep this issue open until someone is able to fix it? Марк Коренберг
02:43 AM Feature #21305 (Rejected): Just discard changed data since snapshot in "rbd rollback" command
I quickly hacked this up and it turns out the OSDs don't actually allow you to do this. The low-level, internal trans... Jason Dillaman
01:32 AM Feature #21305 (In Progress): Just discard changed data since snapshot in "rbd rollback" command
Jason Dillaman
01:32 AM Feature #21216 (Closed): Method to release all rbd locks
Jason Dillaman
01:27 AM Feature #4086 (Resolved): rbd: rate-limiting
Jason Dillaman
01:26 AM Bug #18938 (Won't Fix): Unable to build 11.2.0 under i686
Closing since 11.x.y is an EOLed release. Jason Dillaman
01:24 AM Bug #22119 (Can't reproduce): Possible deadlock in librbd
Jason Dillaman
01:23 AM Bug #22411 (Fix Under Review): [test] valgrind of python tests results in "definitely lost" failure
*PR*: https://github.com/ceph/teuthology/pull/1139 Jason Dillaman

12/14/2017

03:49 PM Bug #20054: librbd memory overhead when used with KVM
@Florian: So I guess extreme memory overhead (in the range of multiple GiBs) hasn't hit you on a noticeable scale? Christian Theune
02:33 AM Bug #22362 (Pending Backport): cluster resource agent ocf:ceph:rbd - wrong permissions
Sage Weil
12:17 AM Backport #22395 (In Progress): luminous: librbd: cannot clone all image-metas if we have more tha...
Jason Dillaman
12:16 AM Backport #22393 (In Progress): luminous: librbd: cannot copy all image-metas if we have more than...
Jason Dillaman

12/13/2017

11:07 PM Backport #22393: luminous: librbd: cannot copy all image-metas if we have more than 64 key/value ...
https://github.com/ceph/ceph/pull/19504 Shinobu Kinjo
11:04 PM Backport #22395: luminous: librbd: cannot clone all image-metas if we have more than 64 key/value...
https://github.com/ceph/ceph/pull/19503 Shinobu Kinjo
05:17 PM Bug #22362 (Fix Under Review): cluster resource agent ocf:ceph:rbd - wrong permissions
https://github.com/ceph/ceph/pull/19494 Nathan Cutler
02:01 PM Backport #21788 (In Progress): luminous: [journal] image-meta set event should refresh the image ...
Jason Dillaman
01:55 PM Backport #21644 (In Progress): luminous: [rbd-mirror] image-meta is not replicated as part of ini...
Jason Dillaman
01:52 PM Backport #22375 (In Progress): luminous: ceph 12.2.x Luminous: Build fails with --without-radosgw
Nathan Cutler
12:40 PM Backport #22376 (In Progress): luminous: Python RBD metadata_get does not work.
Nathan Cutler
08:16 AM Bug #20054: librbd memory overhead when used with KVM
@Christian, I'll leave it to Jason to continue the conversation about memory allocation, but I can answer this one:
...
Florian Haas
06:28 AM Bug #20054: librbd memory overhead when used with KVM
That's interesting. That does sound like an almost DOSable exploit. Is the size of those allocations bounded in _any_... Christian Theune

12/12/2017

07:36 PM Bug #22411 (Resolved): [test] valgrind of python tests results in "definitely lost" failure
... Jason Dillaman
03:48 PM Bug #20054: librbd memory overhead when used with KVM
@Christian: The cache is zero-copy in that multiple (object size) extents can share a reference to the same backing m... Jason Dillaman
03:29 PM Bug #20054: librbd memory overhead when used with KVM
That # is whenever I ran multiple tests without killing the Qemu process or rebooting the guest but running the test ... Christian Theune
03:25 PM Bug #20054: librbd memory overhead when used with KVM
@Christian: what does the "# of test in same VM" column represent? Also, FYI, when you configure QEMU in writeback mo... Jason Dillaman
11:30 AM Bug #20054: librbd memory overhead when used with KVM
Alright. I managed to reproduce this with a reasonably simple setup. I had to use this within Qemu as fio on the host... Christian Theune
11:09 AM Backport #21646 (In Progress): luminous: Image-meta should be dynamically refreshed
Nathan Cutler
08:44 AM Backport #22396 (Resolved): jewel: librbd: cannot clone all image-metas if we have more than 64 k...
https://github.com/ceph/ceph/pull/21228 Nathan Cutler
08:44 AM Backport #22395 (Resolved): luminous: librbd: cannot clone all image-metas if we have more than 6...
https://github.com/ceph/ceph/pull/19503 Nathan Cutler
08:44 AM Backport #22394 (Resolved): jewel: librbd: cannot copy all image-metas if we have more than 64 ke...
https://github.com/ceph/ceph/pull/21203 Nathan Cutler
08:44 AM Backport #22393 (Resolved): luminous: librbd: cannot copy all image-metas if we have more than 64...
https://github.com/ceph/ceph/pull/19504 Nathan Cutler
08:42 AM Backport #22376 (Resolved): luminous: Python RBD metadata_get does not work.
https://github.com/ceph/ceph/pull/19479 Nathan Cutler
08:42 AM Backport #22375 (Resolved): luminous: ceph 12.2.x Luminous: Build fails with --without-radosgw
https://github.com/ceph/ceph/pull/19483 Nathan Cutler

12/11/2017

09:39 PM Bug #22362: cluster resource agent ocf:ceph:rbd - wrong permissions
I think Florian built these; maybe ping him about them? Greg Farnum
12:21 PM Bug #22362 (Resolved): cluster resource agent ocf:ceph:rbd - wrong permissions
The ocf:ceph:rbd resource agent is not recognized by pacemaker on CentOS 7 because of wrong file permissions:
pack...
Wolfgang Lendl
05:39 PM Feature #22333 (Fix Under Review): rbd-nbd: support optionally setting the device timeout
PR: https://github.com/ceph/ceph/pull/19436 Mykola Golub
05:29 PM Feature #22333 (In Progress): rbd-nbd: support optionally setting the device timeout
Mykola Golub
01:14 PM Bug #22363 (Resolved): Watchers are lost on active RBD image with running client
See: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-November/022850.html
The issue is observed on a Jewe...
Wido den Hollander

12/08/2017

01:00 PM Bug #20054: librbd memory overhead when used with KVM
@Christian:
> Interestingly does that mean that larger Ceph clusters will have an increasing memory overhead for ...
Jason Dillaman
12:38 PM Bug #20054: librbd memory overhead when used with KVM
So I tried freeze/thaw/snapshot/delete snapshot cycles while under heavy load from small IOs (4k) and large IOs (512k... Christian Theune
12:34 PM Bug #20054: librbd memory overhead when used with KVM
Dang! :)
We did some calculations yesterday and came up with an average rate of 200kib per 10 minutes of "leak". T...
Christian Theune
10:37 AM Bug #18938 (New): Unable to build 11.2.0 under i686
Sorry, Sebastien! I missed your latest comment. It seems I fixed the issue reported by Romain, but not yours. I am reop... Kefu Chai

12/07/2017

09:42 PM Bug #20054: librbd memory overhead when used with KVM
@Christian: I really need a reproducer since w/o the heap profiling, I cannot determine where the allocations are occ... Jason Dillaman
06:20 PM Bug #20054: librbd memory overhead when used with KVM
@Jason: Alright.
I extracted all memory regions from a running VM by getting a coredump and looking at the smaps segm...
Christian Theune
02:48 PM Bug #20054: librbd memory overhead when used with KVM
I wasn't able to reproduce this at all. However. Here's something I just started doing with some help from a Qemu dev... Christian Theune
02:34 PM Bug #22306 (Pending Backport): Python RBD metadata_get does not work.
Jason Dillaman

12/06/2017

02:23 PM Feature #22333 (Resolved): rbd-nbd: support optionally setting the device timeout
The kernel will default to a 30 second request timeout. Allow the user to optionally specify an alternate timeout. Jason Dillaman
10:18 AM Bug #22321 (Pending Backport): ceph 12.2.x Luminous: Build fails with --without-radosgw
Mykola Golub

12/05/2017

02:59 PM Bug #22321 (Fix Under Review): ceph 12.2.x Luminous: Build fails with --without-radosgw
*PR*: https://github.com/ceph/ceph/pull/19343 Jason Dillaman
02:11 PM Bug #22321 (In Progress): ceph 12.2.x Luminous: Build fails with --without-radosgw
Jason Dillaman
12:32 PM Bug #22321 (Resolved): ceph 12.2.x Luminous: Build fails with --without-radosgw
[100%] Linking CXX executable ../bin/ceph-dencoder fails when building with --without-radosgw:
@CMakeFiles/ceph-de...
Deniss Slim
02:07 PM Feature #21305: Just discard changed data since snapshot in "rbd rollback" command
Yes, the ability to do a FAST rollback to the latest snapshot would be a huge improvement, IMHO. Марк Коренберг
02:00 PM Feature #21305: Just discard changed data since snapshot in "rbd rollback" command
Only possible if you are rolling back to the most recent snapshot. If you are rolling back to an older snapshot, you ... Jason Dillaman
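For context, the full rollback this feature hoped to speed up looks roughly like this through the python rbd bindings (pool, image and snapshot names are placeholders):
<pre>
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')          # placeholder pool name
image = rbd.Image(ioctx, 'myimage')        # placeholder image name
try:
    # Rolls every object back to the snapshot contents, even objects that
    # never changed since the snapshot -- the cost the request wanted to
    # avoid when rolling back to the most recent snapshot.
    image.rollback_to_snap('mysnap')       # placeholder snapshot name
finally:
    image.close()
    ioctx.close()
    cluster.shutdown()
</pre>
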
02:05 PM Feature #21216: Method to release all rbd locks
My apologies, just came across that note a few days ago, missed it when doing updates. That fixed the issue, many tha... Michael Sudnick
02:02 PM Feature #21216 (Need More Info): Method to release all rbd locks
@Michael: librbd will automatically delete old, stale locks when it attempts to acquire the lock. It sounds like your... Jason Dillaman
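For anyone needing to script the cleanup Jason describes, a sketch with the python rbd bindings, assuming you really do want to break every locker on an image (names are placeholders):
<pre>
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')          # placeholder pool name
image = rbd.Image(ioctx, 'myimage')        # placeholder image name
try:
    lockers = image.list_lockers()
    # list_lockers() returns a dict with a 'lockers' list of
    # (client, cookie, address) tuples when the image is locked.
    for client, cookie, addr in (lockers['lockers'] if lockers else []):
        image.break_lock(client, cookie)   # forcibly remove the stale lock
finally:
    image.close()
    ioctx.close()
    cluster.shutdown()
</pre>
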
01:55 PM Bug #21814 (Pending Backport): librbd: cannot clone all image-metas if we have more than 64 key/v...
Jason Dillaman
01:55 PM Bug #21815 (Pending Backport): librbd: cannot copy all image-metas if we have more than 64 key/va...
Jason Dillaman
11:33 AM Bug #22306: Python RBD metadata_get does not work.
Mark, I assume the 'backup-skip' metadata key did not exist before the metadata_get call? Otherwise I would not expect the er... Mykola Golub
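To make the question concrete, a minimal reproduction sketch with the python rbd bindings (pool/image/key names are placeholders; which exception a missing key raises is exactly what this ticket is about, so the sketch just catches broadly):
<pre>
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')          # placeholder pool name
image = rbd.Image(ioctx, 'myimage')        # placeholder image name
try:
    image.metadata_set('backup-skip', '1') # works once the key exists
    print(image.metadata_get('backup-skip'))

    try:
        image.metadata_get('no-such-key')  # key never set -- the reported case
    except Exception as e:
        # The exact exception class depends on the bindings version;
        # the traceback in the ticket shows the actual failure.
        print('missing key:', e)
finally:
    image.close()
    ioctx.close()
    cluster.shutdown()
</pre>
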
10:31 AM Bug #22306 (In Progress): Python RBD metadata_get does not work.
Mykola Golub

12/04/2017

04:38 PM Backport #21700: luminous: rbd-mirror: Allow a different data-pool to be used on the secondary cl...
Thanks! Looking forward to having this in 12.2.3 :-). Adam Wolfe Gordon
03:34 AM Backport #21700 (In Progress): luminous: rbd-mirror: Allow a different data-pool to be used on th...
Prashant D
03:34 AM Backport #21700: luminous: rbd-mirror: Allow a different data-pool to be used on the secondary cl...
https://github.com/ceph/ceph/pull/19305 Prashant D
02:10 AM Backport #21700: luminous: rbd-mirror: Allow a different data-pool to be used on the secondary cl...
I'm working on it. Prashant D

12/03/2017

03:11 PM Bug #22306 (Resolved): Python RBD metadata_get does not work.
Here is the part of traceback:... Марк Коренберг

12/02/2017

01:31 PM Bug #22271 (Duplicate): vdbench's IO drop to 0 when resize the image at the same time
Jason Dillaman
02:14 AM Bug #22271: vdbench's IO drop to 0 when resize the image at the same time
Ah, I think luminous has already backported this patch; it's still in the open state actually.
I rebuilt with thi...
wb song

12/01/2017

12:07 AM Backport #21700: luminous: rbd-mirror: Allow a different data-pool to be used on the secondary cl...
@Adam: 12.2.3 would definitely be the earliest. If you can open a backport PR, it would help to ensure it makes 12.2.3. Jason Dillaman

11/30/2017

11:54 PM Backport #21700: luminous: rbd-mirror: Allow a different data-pool to be used on the secondary cl...
Hi folks,
Any timeline on this? We were hoping it would make 12.2.2, but I think we've missed the boat there. I co...
Adam Wolfe Gordon
03:27 PM Bug #22271 (Need More Info): vdbench's IO drop to 0 when resize the image at the same time
Mykola Golub
03:27 PM Bug #22271: vdbench's IO drop to 0 when resize the image at the same time
Recently Jason fixed a deadlock triggered in rbd-nbd by resize event [1]. Do you have a chance to try librbd from the... Mykola Golub
01:27 PM Bug #22253 (Can't reproduce): "rbd info" crashed: stack smashing detected
Jason Dillaman
01:01 PM Bug #22253: "rbd info" crashed: stack smashing detected
So, I recompiled 12.2.1 and can no longer reproduce this one. Seems to be gone now. Sebastian Wagner
01:26 PM Bug #22059: It's impossible to add rbd image-meta key/value on opened image
Hmm -- you can see that the rbd CLI properly sent a lock request to the current owner:... Jason Dillaman
05:26 AM Bug #22059: It's impossible to add rbd image-meta key/value on opened image
I have attached the file. Марк Коренберг

11/29/2017

04:15 PM Bug #22059: It's impossible to add rbd image-meta key/value on opened image
@Марк: please run "rbd --debug-rbd=20 image-meta set <image> <key> <value>" and attach the generated log messages fro... Jason Dillaman
09:55 AM Bug #22271 (Duplicate): vdbench's IO drop to 0 when resize the image at the same time
... wb song

11/28/2017

12:51 AM Backport #22209 (In Progress): jewel: 'rbd du' on empty pool results in output of "specified image"
Prashant D

11/27/2017

11:26 PM Backport #22209: jewel: 'rbd du' on empty pool results in output of "specified image"
https://github.com/ceph/ceph/pull/19186 Prashant D
04:46 PM Bug #22253: "rbd info" crashed: stack smashing detected
@Sebastian: yup, that's why you would copy the "ceph.conf" so that the VM can connect to your vstart-created cluster. Jason Dillaman
04:42 PM Bug #22253: "rbd info" crashed: stack smashing detected
Jason Dillaman wrote:
> @Sebastian: vstart is for development. Just install the Ceph client packages on a VM, copy t...
Sebastian Wagner
04:33 PM Bug #22253: "rbd info" crashed: stack smashing detected
@Sebastian: vstart is for development. Just install the Ceph client packages on a VM, copy the vstart-generated ceph.... Jason Dillaman
03:54 PM Bug #22253: "rbd info" crashed: stack smashing detected
Jason Dillaman wrote:
> Can you reproduce on distro or Ceph-provided packages instead of your home-grown build?
I...
Sebastian Wagner
03:48 PM Bug #22253: "rbd info" crashed: stack smashing detected
@Sebastian: that Valgrind output doesn't help since it failed on an "unknown instruction" error. Can you reproduce on... Jason Dillaman
03:30 PM Bug #22253: "rbd info" crashed: stack smashing detected
Added valgrind output.
@Jason, should I recompile and retest on the latest luminous branch or on v12.2.1?
Sebastian Wagner
03:14 PM Bug #22253: "rbd info" crashed: stack smashing detected
@Sebastian: without line numbers that actually align to the code at the mentioned version, there really isn't much I ... Jason Dillaman
03:03 PM Bug #22253: "rbd info" crashed: stack smashing detected
I don't think it is easy to reproduce, because
* That RBD is untouched, thus no data was ever written to this R...
Sebastian Wagner
02:52 PM Bug #22253 (Need More Info): "rbd info" crashed: stack smashing detected
@Sebastian: please retest on the latest available version. Your line numbers do not align with v12.2.0. Jason Dillaman
02:51 PM Bug #22253: "rbd info" crashed: stack smashing detected
my correct version number is:... Sebastian Wagner
02:37 PM Bug #22253 (Can't reproduce): "rbd info" crashed: stack smashing detected
Environment: quite small vstart cluster.
This is the stack trace:...
Sebastian Wagner

11/25/2017

02:07 PM Bug #22055 (Duplicate): ghost rbd snapshot
Duplicate of issue #19413 Jason Dillaman

11/23/2017

11:46 PM Backport #22174 (In Progress): luminous: possible deadlock in various maintenance operations
Prashant D
11:45 PM Backport #22174: luminous: possible deadlock in various maintenance operations
https://github.com/ceph/ceph/pull/19123 Prashant D
09:53 AM Backport #22173 (In Progress): jewel: [rbd-nbd] Fedora does not register resize events
Prashant D
09:53 AM Backport #22173: jewel: [rbd-nbd] Fedora does not register resize events
https://github.com/ceph/ceph/pull/19115 Prashant D
02:00 AM Backport #22208 (In Progress): luminous: 'rbd du' on empty pool results in output of "specified i...
Prashant D
01:58 AM Backport #22208: luminous: 'rbd du' on empty pool results in output of "specified image"
https://github.com/ceph/ceph/pull/19107 Prashant D

11/22/2017

11:22 PM Backport #22170 (In Progress): jewel: *** Caught signal (Segmentation fault) ** in thread thread_...
Built gcc test package with https://github.com/gcc-mirror/gcc/commit/c7db9cf55ae4022f134624db81cc70d694079b6c patch a... Prashant D
02:35 AM Backport #22170 (New): jewel: *** Caught signal (Segmentation fault) ** in thread thread_name:tp_...
I'm working on it. Hitting a jewel internal compiler issue on fc26:
In file included from osd/ECBackend.cc:24:0:
os...
Prashant D

11/21/2017

07:22 PM Feature #15322 (Resolved): Support asynchronous v2 image deletion
Mykola Golub
06:41 PM Backport #22209 (Resolved): jewel: 'rbd du' on empty pool results in output of "specified image"
https://github.com/ceph/ceph/pull/19186 Nathan Cutler
06:41 PM Backport #22208 (Resolved): luminous: 'rbd du' on empty pool results in output of "specified image"
https://github.com/ceph/ceph/pull/19107 Nathan Cutler
11:06 AM Backport #22174 (New): luminous: possible deadlock in various maintenance operations
Prashant D
01:48 AM Backport #22174 (In Progress): luminous: possible deadlock in various maintenance operations
Prashant D
10:49 AM Bug #22200 (Pending Backport): 'rbd du' on empty pool results in output of "specified image"
Mykola Golub
10:33 AM Bug #22059: It's impossible to add rbd image-meta key/value on opened image
Yes, the same. Both are Luminous 12.2.1.
BUT! On some images it works, and on some images it doesn't. Please tell ...
Марк Коренберг
05:37 AM Backport #22172: luminous: [rbd-nbd] Fedora does not register resize events
https://github.com/ceph/ceph/pull/19066 Prashant D
01:47 AM Backport #22172 (In Progress): luminous: [rbd-nbd] Fedora does not register resize events
Prashant D
 
