Activity
From 11/06/2017 to 12/05/2017
12/05/2017
- 02:59 PM Bug #22321 (Fix Under Review): ceph 12.2.x Luminous: Build fails with --without-radosgw
- *PR*: https://github.com/ceph/ceph/pull/19343
- 02:11 PM Bug #22321 (In Progress): ceph 12.2.x Luminous: Build fails with --without-radosgw
- 12:32 PM Bug #22321 (Resolved): ceph 12.2.x Luminous: Build fails with --without-radosgw
- [100%] Linking CXX executable ../bin/ceph-dencoder fails when building with --without-radosgw:
@
CMakeFiles/ceph-de...
- 02:07 PM Feature #21305: Just discard changed data since snapshot in "rbd rollback" command
- Yes, the ability to do a FAST rollback to the latest snapshot would be a huge improvement, IMHO.
- 02:00 PM Feature #21305: Just discard changed data since snapshot in "rbd rollback" command
- Only possible if you are rolling back to the most recent snapshot. If you are rolling back to an older snapshot, you ...
- 02:05 PM Feature #21216: Method to release all rbd locks
- My apologies, just came across that note a few days ago, missed it when doing updates. That fixed the issue, many tha...
- 02:02 PM Feature #21216 (Need More Info): Method to release all rbd locks
- @Michael: librbd will automatically delete old, stale locks when it attempts to acquire the lock. It sounds like your...
- 01:55 PM Bug #21814 (Pending Backport): librbd: cannot clone all image-metas if we have more than 64 key/v...
- 01:55 PM Bug #21815 (Pending Backport): librbd: cannot copy all image-metas if we have more than 64 key/va...
- 11:33 AM Bug #22306: Python RBD metadata_get does not work.
- Mark, I assume the 'backup-skip' metadata key did not exist before the metadata_get call? Otherwise I would not expect the er...
- 10:31 AM Bug #22306 (In Progress): Python RBD metadata_get does not work.
12/04/2017
- 04:38 PM Backport #21700: luminous: rbd-mirror: Allow a different data-pool to be used on the secondary cl...
- Thanks! Looking forward to having this in 12.2.3 :-).
- 03:34 AM Backport #21700 (In Progress): luminous: rbd-mirror: Allow a different data-pool to be used on th...
- 03:34 AM Backport #21700: luminous: rbd-mirror: Allow a different data-pool to be used on the secondary cl...
- https://github.com/ceph/ceph/pull/19305
- 02:10 AM Backport #21700: luminous: rbd-mirror: Allow a different data-pool to be used on the secondary cl...
- I'm working on it.
12/03/2017
- 03:11 PM Bug #22306 (Resolved): Python RBD metadata_get does not work.
- Here is the part of traceback:...
12/02/2017
- 01:31 PM Bug #22271 (Duplicate): vdbench's IO drop to 0 when resize the image at the same time
- 02:14 AM Bug #22271: vdbench's IO drop to 0 when resize the image at the same time
- Ah, I think luminous has already backported this patch; it's still in the opening state actually.
I rebuilt with thi...
12/01/2017
- 12:07 AM Backport #21700: luminous: rbd-mirror: Allow a different data-pool to be used on the secondary cl...
- @Adam: 12.2.3 would definitely be the earliest. If you can open a backport PR, it would help to ensure it makes 12.2.3.
11/30/2017
- 11:54 PM Backport #21700: luminous: rbd-mirror: Allow a different data-pool to be used on the secondary cl...
- Hi folks,
Any timeline on this? We were hoping it would make 12.2.2, but I think we've missed the boat there. I co...
- 03:27 PM Bug #22271 (Need More Info): vdbench's IO drop to 0 when resize the image at the same time
- 03:27 PM Bug #22271: vdbench's IO drop to 0 when resize the image at the same time
- Recently Jason fixed a deadlock triggered in rbd-nbd by resize event [1]. Do you have a chance to try librbd from the...
- 01:27 PM Bug #22253 (Can't reproduce): "rbd info" crashed: stack smashing detected
- 01:01 PM Bug #22253: "rbd info" crashed: stack smashing detected
- So, I recompiled 12.2.1 and can no longer reproduce this one. Seems to be gone now.
- 01:26 PM Bug #22059: It's impossible to add rbd image-meta key/value on opened image
- Hmm -- you can see that the rbd CLI properly sent a lock request to the current owner:...
- 05:26 AM Bug #22059: It's impossible to add rbd image-meta key/value on opened image
- I have attached the file.
11/29/2017
- 04:15 PM Bug #22059: It's impossible to add rbd image-meta key/value on opened image
- @Марк: please run "rbd --debug-rbd=20 image-meta set <image> <key> <value>" and attach the generated log messages fro...
- 09:55 AM Bug #22271 (Duplicate): vdbench's IO drop to 0 when resize the image at the same time
- ...
11/28/2017
- 12:51 AM Backport #22209 (In Progress): jewel: 'rbd du' on empty pool results in output of "specified image"
11/27/2017
- 11:26 PM Backport #22209: jewel: 'rbd du' on empty pool results in output of "specified image"
- https://github.com/ceph/ceph/pull/19186
- 04:46 PM Bug #22253: "rbd info" crashed: stack smashing detected
- @Sebastian: yup, that's why you would copy the "ceph.conf" so that the VM can connect to your vstart-created cluster.
- 04:42 PM Bug #22253: "rbd info" crashed: stack smashing detected
- Jason Dillaman wrote:
> @Sebastian: vstart is for development. Just install the Ceph client packages on a VM, copy t...
- 04:33 PM Bug #22253: "rbd info" crashed: stack smashing detected
- @Sebastian: vstart is for development. Just install the Ceph client packages on a VM, copy the vstart-generated ceph....
- 03:54 PM Bug #22253: "rbd info" crashed: stack smashing detected
- Jason Dillaman wrote:
> Can you reproduce on distro or Ceph-provided packages instead of your home-grown build?
I...
- 03:48 PM Bug #22253: "rbd info" crashed: stack smashing detected
- @Sebastian: that Valgrind output doesn't help since it failed on an "unknown instruction" error. Can you reproduce on...
- 03:30 PM Bug #22253: "rbd info" crashed: stack smashing detected
- Added valgrind output.
@Jason, should I recompile and retest on the latest luminous branch or on v12.2.1?
- 03:14 PM Bug #22253: "rbd info" crashed: stack smashing detected
- @Sebastian: without line numbers that actually align to the code at the mentioned version, there really isn't much I ...
- 03:03 PM Bug #22253: "rbd info" crashed: stack smashing detected
- I don't think it is easy to reproduce it, because
* That RBD is untouched, thus no data was ever written to this R...
- 02:52 PM Bug #22253 (Need More Info): "rbd info" crashed: stack smashing detected
- @Sebastian: please retest on the latest available version. Your line numbers do not align with v12.2.0.
- 02:51 PM Bug #22253: "rbd info" crashed: stack smashing detected
- my correct version number is:...
- 02:37 PM Bug #22253 (Can't reproduce): "rbd info" crashed: stack smashing detected
- Environment: quite small vstart cluster.
This is the stack trace:...
11/25/2017
- 02:07 PM Bug #22055 (Duplicate): ghost rbd snapshot
- Duplicate of issue #19413
11/23/2017
- 11:46 PM Backport #22174 (In Progress): luminous: possible deadlock in various maintenance operations
- 11:45 PM Backport #22174: luminous: possible deadlock in various maintenance operations
- https://github.com/ceph/ceph/pull/19123
- 09:53 AM Backport #22173 (In Progress): jewel: [rbd-nbd] Fedora does not register resize events
- 09:53 AM Backport #22173: jewel: [rbd-nbd] Fedora does not register resize events
- https://github.com/ceph/ceph/pull/19115
- 02:00 AM Backport #22208 (In Progress): luminous: 'rbd du' on empty pool results in output of "specified i...
- 01:58 AM Backport #22208: luminous: 'rbd du' on empty pool results in output of "specified image"
- https://github.com/ceph/ceph/pull/19107
11/22/2017
- 11:22 PM Backport #22170 (In Progress): jewel: *** Caught signal (Segmentation fault) ** in thread thread_...
- Built gcc test package with https://github.com/gcc-mirror/gcc/commit/c7db9cf55ae4022f134624db81cc70d694079b6c patch a...
- 02:35 AM Backport #22170 (New): jewel: *** Caught signal (Segmentation fault) ** in thread thread_name:tp_...
- I'm working on it. Hitting jewel internal compiler issue on fc26:
In file included from osd/ECBackend.cc:24:0:
os...
11/21/2017
- 07:22 PM Feature #15322 (Resolved): Support asynchronous v2 image deletion
- 06:41 PM Backport #22209 (Resolved): jewel: 'rbd du' on empty pool results in output of "specified image"
- https://github.com/ceph/ceph/pull/19186
- 06:41 PM Backport #22208 (Resolved): luminous: 'rbd du' on empty pool results in output of "specified image"
- https://github.com/ceph/ceph/pull/19107
- 11:06 AM Backport #22174 (New): luminous: possible deadlock in various maintenance operations
- 01:48 AM Backport #22174 (In Progress): luminous: possible deadlock in various maintenance operations
- 10:49 AM Bug #22200 (Pending Backport): 'rbd du' on empty pool results in output of "specified image"
- 10:33 AM Bug #22059: It's impossible to add rbd image-meta key/value on opened image
- Yes, the same. Both are Luminous 12.2.1.
BUT! on some images it works, and on some images it doesn't. Please tell ...
- 05:37 AM Backport #22172: luminous: [rbd-nbd] Fedora does not register resize events
- https://github.com/ceph/ceph/pull/19066
- 01:47 AM Backport #22172 (In Progress): luminous: [rbd-nbd] Fedora does not register resize events
11/20/2017
- 11:09 PM Backport #22170 (In Progress): jewel: *** Caught signal (Segmentation fault) ** in thread thread_...
- 11:04 AM Backport #22170 (Resolved): jewel: *** Caught signal (Segmentation fault) ** in thread thread_nam...
- https://github.com/ceph/ceph/pull/19098
- 10:20 PM Backport #22190: luminous: class rbd.Image discard----OSError: [errno 2147483648] error discardin...
- https://github.com/ceph/ceph/pull/19058
- 11:05 AM Backport #22190 (Resolved): luminous: class rbd.Image discard----OSError: [errno 2147483648] erro...
- https://github.com/ceph/ceph/pull/19058
- 08:14 PM Backport #22186: jewel: abort in listing mapped nbd devices when running in a container
- Shinobu Kinjo wrote:
-> https://github.com/ceph/ceph/pull/19052-
- 08:09 PM Backport #22186: jewel: abort in listing mapped nbd devices when running in a container
- https://github.com/ceph/ceph/pull/19052
- 11:05 AM Backport #22186 (Resolved): jewel: abort in listing mapped nbd devices when running in a container
- https://github.com/ceph/ceph/pull/20286
- 08:07 PM Backport #22185: luminous: abort in listing mapped nbd devices when running in a container
- https://github.com/ceph/ceph/pull/19051
- 11:05 AM Backport #22185 (Resolved): luminous: abort in listing mapped nbd devices when running in a conta...
- 05:04 PM Bug #22059 (Need More Info): It's impossible to add rbd image-meta key/value on opened image
- @Марк: I am unable to repeat this issue under 12.2.1. Are both your QEMU and rbd CLI clients running the same version?
- 04:05 PM Bug #22200 (Fix Under Review): 'rbd du' on empty pool results in output of "specified image"
- *PR*: https://github.com/ceph/ceph/pull/19045
- 03:55 PM Bug #22200 (Resolved): 'rbd du' on empty pool results in output of "specified image"
- ...
- 11:07 AM Backport #22198 (Resolved): luminous: Compare and write against a clone can result in failure
- https://github.com/ceph/ceph/pull/20211
- 11:05 AM Backport #22191 (Resolved): jewel: class rbd.Image discard----OSError: [errno 2147483648] error d...
- https://github.com/ceph/ceph/pull/20287
- 11:05 AM Backport #22175 (Resolved): jewel: possible deadlock in various maintenance operations
- https://github.com/ceph/ceph/pull/20285
- 11:05 AM Backport #22174 (Resolved): luminous: possible deadlock in various maintenance operations
- https://github.com/ceph/ceph/pull/19123
- 11:05 AM Backport #22173 (Resolved): jewel: [rbd-nbd] Fedora does not register resize events
- https://github.com/ceph/ceph/pull/19115
- 11:05 AM Backport #22172 (Resolved): luminous: [rbd-nbd] Fedora does not register resize events
- https://github.com/ceph/ceph/pull/19066
- 11:04 AM Backport #22169 (Resolved): luminous: *** Caught signal (Segmentation fault) ** in thread thread_...
- https://github.com/ceph/ceph/pull/20210
11/18/2017
- 04:08 PM Bug #22158 (Pending Backport): *** Caught signal (Segmentation fault) ** in thread thread_name:tp...
- 01:31 PM Bug #22158 (Fix Under Review): *** Caught signal (Segmentation fault) ** in thread thread_name:tp...
- *PR*: https://github.com/ceph/ceph/pull/19003
- 01:26 PM Bug #22158 (In Progress): *** Caught signal (Segmentation fault) ** in thread thread_name:tp_librbd
- 03:48 AM Bug #22158 (Resolved): *** Caught signal (Segmentation fault) ** in thread thread_name:tp_librbd
- After removing a parent snap with the --force flag, while its child and the child's clone do not have deep-flatten enabled,
any oper...
- 12:58 PM Feature #15322 (Fix Under Review): Support asynchronous v2 image deletion
- *PR*: https://github.com/ceph/ceph/pull/19000
11/17/2017
- 04:47 PM Bug #22055: ghost rbd snapshot
- Please close this task, it's a DUP of the bug with FFFF (fixed by removing the snapshot using a Jewel client)
11/16/2017
- 05:33 PM Bug #20789 (Pending Backport): Compare and write against a clone can result in failure
- 11:33 AM Bug #22131 (Pending Backport): [rbd-nbd] Fedora does not register resize events
- 11:30 AM Bug #22120 (Pending Backport): possible deadlock in various maintenance operations
- 11:29 AM Bug #21966 (Pending Backport): class rbd.Image discard----OSError: [errno 2147483648] error disca...
- 01:25 AM Subtask #18786 (Resolved): rbd-mirror A/A: create simple image distribution policy
11/15/2017
- 03:38 PM Bug #22131 (Fix Under Review): [rbd-nbd] Fedora does not register resize events
- *PR*: https://github.com/ceph/ceph/pull/18947
- 03:31 PM Bug #22131 (Resolved): [rbd-nbd] Fedora does not register resize events
- When an image is resized, "/sys/block/nbdX/size" is updated correctly but the associated "/dev/nbdX" will still show ...
11/14/2017
- 09:14 PM Bug #21893 (Resolved): support librbd::RBD::list_children2 and rbd_list_children2
- 03:08 PM Bug #21966 (Fix Under Review): class rbd.Image discard----OSError: [errno 2147483648] error disca...
- *PR*: https://github.com/ceph/ceph/pull/18923
- 02:40 PM Bug #21966 (In Progress): class rbd.Image discard----OSError: [errno 2147483648] error discarding...
- See tracker issue #16465. We cannot change the API, but we can truncate the return value to ensure it doesn't return ...
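The "errno 2147483648" in this report is consistent with a signed 32-bit wraparound: 2147483648 is 2^31, so a discard of 2 GiB or more reported through a 32-bit return value comes back negative and is then treated as -errno. The snippet below is only an illustration of that arithmetic (not librbd code); the actual fix, per the comment above, truncates the API's return value.

```python
import ctypes

# Illustration of the wraparound behind "errno 2147483648":
# 2**31 bytes (2 GiB) does not fit in a signed 32-bit return value.
bytes_discarded = 2**31
as_int32 = ctypes.c_int32(bytes_discarded).value

print(as_int32)    # -2147483648: the positive byte count wrapped negative
print(-as_int32)   # 2147483648: what a caller interpreting it as -errno reports
```

Any return value of 2 GiB or more triggers the same wrap, which is why clamping/truncating the reported count keeps the unchanged API usable.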
11/13/2017
- 08:56 PM Feature #15322 (In Progress): Support asynchronous v2 image deletion
- 07:46 PM Bug #22012 (Resolved): rbd-nbd: unused nbd device search bug in container
- 07:46 PM Bug #22011 (Pending Backport): abort in listing mapped nbd devices when running in a container
- 06:34 PM Bug #22119: Possible deadlock in librbd
- @Ivan: yes, I think the issue no longer exists since the problematic code where it performed a synchronous flush oper...
- 06:21 PM Bug #22119: Possible deadlock in librbd
- @Jason: currently we have plans to upgrade Ceph to the latest stable release, but migration is not easy and will not ...
- 04:12 PM Bug #22119 (Need More Info): Possible deadlock in librbd
- @Ivan: can you repeat this issue on a non-EOL version of Ceph? This code no longer exists in Jewel and later releases.
- 03:46 PM Bug #22119 (Can't reproduce): Possible deadlock in librbd
- Hi,
we are using qemu-kvm with ceph/rbd as a storage backend for our VMs. And VMs sometimes hang without leaving a...
- 06:30 PM Bug #22120 (Fix Under Review): possible deadlock in various maintenance operations
- *PR*: https://github.com/ceph/ceph/pull/18909
- 05:54 PM Bug #22120 (Resolved): possible deadlock in various maintenance operations
- If an image needs to be refreshed after the start of an API-initiated maintenance operation (i.e. two or more clients...
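The shape of the problem is the classic self-deadlock on a non-reentrant lock: the in-flight maintenance operation holds a lock that the refresh it triggers also needs. The sketch below is a generic, hypothetical illustration of that pattern (not librbd code); a timeout is used only so the example terminates instead of hanging.

```python
import threading

# Generic sketch of the deadlock pattern: a maintenance operation holds a
# non-reentrant lock, then a refresh triggered mid-operation needs the same lock.
owner_lock = threading.Lock()

def maintenance_op_with_refresh():
    with owner_lock:                         # the operation acquires the lock...
        # ...the refresh on the same thread now needs the same lock; without
        # the timeout this acquire would block forever (the deadlock).
        got_it = owner_lock.acquire(timeout=0.1)
        if got_it:
            owner_lock.release()
        return got_it

print(maintenance_op_with_refresh())         # False: the refresh can never proceed
```

The usual remedies are to make the refresh asynchronous or to drop the lock before dispatching it, so the two steps never contend on the same thread.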
11/11/2017
- 02:48 AM Bug #20789 (Fix Under Review): Compare and write against a clone can result in failure
- *PR*: https://github.com/ceph/ceph/pull/18887
11/09/2017
- 07:03 AM Backport #21970: luminous: [journal] tags are not being expired if no other clients are registered
- https://github.com/ceph/ceph/pull/18840
- 06:55 AM Backport #21973: luminous: [test] UpdateFeatures RPC message should be included in test_notify.py
- https://github.com/ceph/ceph/pull/18838
- 05:30 AM Backport #22073: luminous: [api] compare-and-write methods not properly advertised
- https://github.com/ceph/ceph/pull/18834
11/08/2017
- 10:25 AM Backport #22073 (Resolved): luminous: [api] compare-and-write methods not properly advertised
- https://github.com/ceph/ceph/pull/18834
11/07/2017
- 06:27 PM Feature #22065 (New): Add RBD pool fsck command
- i.e. something like:
rbd --pool=xxx pool_repair
This command should check the whole set of RBD structures (snapshots,...
- 06:53 AM Bug #22059 (Closed): It's impossible to add rbd image-meta key/value on opened image
- If RBD image is opened (say, used in virtual machine), `rbd image-meta set` will hang.
If it is not a bug:
1. it ...
11/06/2017
- 10:55 PM Bug #22055 (Duplicate): ghost rbd snapshot
- I upgraded from Kraken to Luminous. Next, I decided to remove a snapshot (don't remember if before the upgrade or after, and...
- 02:41 PM Bug #22012: rbd-nbd: unused nbd device search bug in container
- *PR*: https://github.com/ceph/ceph/pull/18663
- 02:40 PM Bug #22011: abort in listing mapped nbd devices when running in a container
- *PR*: https://github.com/ceph/ceph/pull/18663