Activity

From 03/28/2018 to 04/26/2018

04/26/2018

09:37 PM Bug #23891 (Duplicate): unable to perform a "rbd-nbd map" without foreground flag
I would like to map an RBD image using rbd-nbd. Without adding the foreground flag it is not possible to map the device.
...
Marc Schöchlin
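A minimal illustration of the mapping command at issue (pool and image names are hypothetical); per the report, the map only succeeds when the process stays in the foreground:

    sudo rbd-nbd map mypool/myimage
    # on success the attached device, e.g. /dev/nbd0, is printed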
04:26 PM Bug #23888 (Fix Under Review): [rbd-mirror] asok hook for image replayer not re-registered after ...
*PR*: https://github.com/ceph/ceph/pull/21682 Jason Dillaman
02:48 PM Bug #23888 (Resolved): [rbd-mirror] asok hook for image replayer not re-registered after bootstrap
If the remote image is not primary, if the local image is primary, if the images have split-brained, or other error c... Jason Dillaman
04:22 PM Bug #23853: Inefficient implementation - very long query time for "rbd ls -l" queries
@Marc: can you double-check that you have the debuginfo packages installed? The perf graph (attached) shows the vast ... Jason Dillaman
04:09 PM Bug #23853: Inefficient implementation - very long query time for "rbd ls -l" queries
Sorry, the results exceed the maximum file upload limit.
I uploaded the file to my personal server:
https://www...
Marc Schöchlin
11:54 AM Bug #23853 (Need More Info): Inefficient implementation - very long query time for "rbd ls -l" que...
@Marc: can you re-run this using the following steps (making sure you have the debuginfo packages installed as we... Jason Dillaman
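A sketch of the kind of profiling run being requested (pool name is hypothetical; the debuginfo packages are needed for usable symbols):

    perf record -g -- rbd ls -l mypool
    perf report --stdio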
10:36 AM Bug #23853: Inefficient implementation - very long query time for "rbd ls -l" queries
Added perf data, created by:... Marc Schöchlin

04/25/2018

11:42 PM Bug #23876 (Fix Under Review): [rbd-mirror] local tag predecessor mirror uuid is incorrectly repl...
*PR*: https://github.com/ceph/ceph/pull/21657 Jason Dillaman
11:39 PM Bug #23876 (Resolved): [rbd-mirror] local tag predecessor mirror uuid is incorrectly replaced wit...
The tag predecessor mirror uuid that is retrieved from the remote peer is incorrectly converted to the remote mirror'... Jason Dillaman
03:24 PM Bug #23853: Inefficient implementation - very long query time for "rbd ls -l" queries
According to this, latency between client and OSD should not be the problem:
(according to the high amount of user t...
Marc Schöchlin
02:01 PM Bug #23853: Inefficient implementation - very long query time for "rbd ls -l" queries
If I invoke this without parallel thread execution I get the following result:... Marc Schöchlin
12:27 PM Bug #23853 (Resolved): Inefficient implementation - very long query time for "rbd ls -l" queries
We are trying to integrate a storage repository in XenServer.
*Summary:*
The slowness is a real pain for us, be...
Marc Schöchlin
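The slow operation in question can be timed directly (pool name is hypothetical):

    time rbd ls -l mypool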

04/24/2018

05:32 PM Bug #23809 (Resolved): [test] output formatting tests are heavily broken
Sage Weil

04/20/2018

03:33 PM Bug #23809 (Fix Under Review): [test] output formatting tests are heavily broken
*PR*: https://github.com/ceph/ceph/pull/21564 Jason Dillaman
02:51 PM Bug #23809 (Resolved): [test] output formatting tests are heavily broken
Unfortunately, PR 19117 broke numerous RBD tests that expected certain output formatting since the test was merged w/... Jason Dillaman
12:51 PM Subtask #18753 (Fix Under Review): rbd-mirror HA: create teuthology thrasher for rbd-mirror
Jason Dillaman
12:51 PM Subtask #18753: rbd-mirror HA: create teuthology thrasher for rbd-mirror
*PR*: https://github.com/ceph/ceph/pull/21541 Jason Dillaman

04/18/2018

08:35 PM Bug #23789 (Resolved): luminous: "cluster [WRN] Manager daemon x is unresponsive. No standby daem...
This is v12.2.5 QE validation.
Intermittent issue; if possible it would be good to fix/retry/lower the error level so it d...
Yuri Weinstein

04/17/2018

07:02 PM Bug #23526 (Resolved): "Message too long" error when appending journal
Nathan Cutler
07:02 PM Backport #23545 (Resolved): luminous: "Message too long" error when appending journal
Nathan Cutler
04:28 PM Backport #23545: luminous: "Message too long" error when appending journal
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21216
merged
Yuri Weinstein

04/15/2018

01:54 AM Support #23677: rbd mirror: is there a method to calculate data size that have not been mirror to...
@Jason Dillaman, OK, got it. Thank you! liuzhong chen

04/13/2018

03:12 PM Support #23677: rbd mirror: is there a method to calculate data size that have not been mirror to...
liuzhong chen wrote:
> IO, it would not be flushed. It also means that these entries will not be replayed for the non-primar...
Jason Dillaman
02:35 PM Support #23677: rbd mirror: is there a method to calculate data size that have not been mirror to...
liuzhong chen wrote:
> @Jason Dillaman, OK, understood. Thank you very much.
> Another question, this issue http://t...
liuzhong chen
02:34 PM Support #23677: rbd mirror: is there a method to calculate data size that have not been mirror to...
@Jason Dillaman, OK, understood. Thank you very much.
Another question, this issue [[http://tracker.ceph.com/issues/2...
liuzhong chen
01:21 PM Support #23677: rbd mirror: is there a method to calculate data size that have not been mirror to...
@liuzhong chen: There is no such support right now. The "entries_behind_master" is just a simple counter of the numbe... Jason Dillaman
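The counter is visible in the journal-based mirroring status description; a minimal example (pool and image names are hypothetical, output abridged):

    rbd mirror image status mypool/myimage
    #   description: replaying, master_position=..., mirror_position=...,
    #                entries_behind_master=0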
03:00 AM Support #23677: rbd mirror: is there a method to calculate data size that have not been mirror to...
@Jason Dillaman, I have tried some methods without success. Do you have any advice? Thank you! liuzhong chen
01:20 PM Subtask #18788 (Resolved): rbd-mirror A/A: integrate distribution policy with proxied InstanceRep...
*PR*: https://github.com/ceph/ceph/pull/21300 Jason Dillaman

04/12/2018

12:26 PM Support #23677 (Closed): rbd mirror: is there a method to calculate data size that have not been ...
When I use rbd mirror between two clusters, is there a method to calculate the data size that has not been mirrored to the non-... liuzhong chen
11:19 AM Backport #23640 (In Progress): luminous: rbd: import with option --export-format fails to protect...
Already included in backport PR https://github.com/ceph/ceph/pull/21316 (along with tracker http://tracker.ceph.com/... Prashant D

04/11/2018

03:09 PM Bug #23597 (Resolved): fsx writethrough test case failures
Mykola Golub
12:15 PM Bug #23629 (Closed): RBD corruption after power off
Great, no worries. Jason Dillaman
08:55 AM Bug #23629: RBD corruption after power off
@Jason: my bad, this helped, thanks a lot. We went from Jewel to Luminous, so we skipped Kraken altogether, that's wh... Josef Zelenka

04/10/2018

09:25 PM Bug #23499 (Resolved): test_admin_socket.sh may fail on wait_for_clean
Nathan Cutler
09:25 PM Backport #23507 (Resolved): luminous: test_admin_socket.sh may fail on wait_for_clean
Nathan Cutler
07:56 PM Backport #23507: luminous: test_admin_socket.sh may fail on wait_for_clean
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21124
merged
Yuri Weinstein
09:24 PM Bug #23502 (Resolved): is_qemu_running in qemu_rebuild_object_map.sh and qemu_dynamic_features.sh...
Nathan Cutler
09:24 PM Backport #23524 (Resolved): luminous: is_qemu_running in qemu_rebuild_object_map.sh and qemu_dyna...
Nathan Cutler
07:56 PM Backport #23524: luminous: is_qemu_running in qemu_rebuild_object_map.sh and qemu_dynamic_feature...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21192
merged
Yuri Weinstein
09:24 PM Bug #23528 (Resolved): rbd-nbd: EBUSY when do map
Nathan Cutler
09:23 PM Backport #23542 (Resolved): luminous: rbd-nbd: EBUSY when do map
Nathan Cutler
07:55 PM Backport #23542: luminous: rbd-nbd: EBUSY when do map
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21230
merged
Yuri Weinstein
07:49 PM Bug #23629 (Need More Info): RBD corruption after power off
@Josef: this sounds like your images have the exclusive-lock feature enabled but your OpenStack Ceph user does not ha... Jason Dillaman
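A hedged sketch of the usual fix, assuming the missing capability is the blacklist permission that the luminous 'profile rbd' cap profiles grant (client name and pool are hypothetical):

    ceph auth caps client.openstack \
        mon 'profile rbd' \
        osd 'profile rbd pool=volumes'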
03:12 PM Bug #23629 (Closed): RBD corruption after power off
Hello,
we have run into a nasty bug regarding RBD in Ceph Luminous - we have encountered this across multiple diffe...
Josef Zelenka
05:54 PM Backport #23640 (Resolved): luminous: rbd: import with option --export-format fails to protect sn...
https://github.com/ceph/ceph/pull/21316 Nathan Cutler
05:53 PM Backport #23631 (Resolved): luminous: python bindings fixes and improvements
https://github.com/ceph/ceph/pull/21725 Nathan Cutler
05:46 PM Bug #23609 (Pending Backport): python bindings fixes and improvements
Mykola Golub
12:43 PM Backport #23604: luminous: Discard ops should flush affected objects from in-memory cache
@Prashant: just assign this one to me and I'll handle the luminous backport. Jason Dillaman
01:55 AM Backport #23604 (Need More Info): luminous: Discard ops should flush affected objects from in-mem...
The src/librbd/cache/ObjectCacherObjectDispatch.cc file is missing in the luminous branch. We need to merge this file to ... Prashant D
12:33 PM Bug #23038 (Pending Backport): rbd: import with option --export-format fails to protect snapshot
Jason Dillaman
03:42 AM Backport #23607 (In Progress): luminous: import-diff failed: (33) Numerical argument out of domai...
https://github.com/ceph/ceph/pull/21316 Prashant D

04/09/2018

03:35 PM Bug #23597 (Fix Under Review): fsx writethrough test case failures
Jason Dillaman
03:35 PM Bug #23597: fsx writethrough test case failures
*PR*: https://github.com/ceph/ceph/pull/21308 Jason Dillaman
10:33 AM Bug #23597: fsx writethrough test case failures
Actually, retesting different option combinations, it looks like only `rbd cache = true` is important. Mykola Golub
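The reproducing configuration, per the comment above (a minimal ceph.conf fragment):

    [client]
    rbd cache = true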
02:53 PM Bug #23609 (Fix Under Review): python bindings fixes and improvements
Ricardo Dias
02:53 PM Bug #23609: python bindings fixes and improvements
PR: https://github.com/ceph/ceph/pull/21304 Ricardo Dias
02:53 PM Bug #23609 (Resolved): python bindings fixes and improvements
The current RBD python bindings have the following issues:
* dealing with data_pool string fails in python 3
* tim...
Ricardo Dias
02:31 PM Backport #23608 (Closed): jewel: import-diff failed: (33) Numerical argument out of domain - if i...
Nathan Cutler
02:31 PM Backport #23607 (Resolved): luminous: import-diff failed: (33) Numerical argument out of domain -...
https://github.com/ceph/ceph/pull/21316 Nathan Cutler
02:29 PM Backport #23605 (Closed): jewel: Discard ops should flush affected objects from in-memory cache
Nathan Cutler
02:29 PM Backport #23604 (Resolved): luminous: Discard ops should flush affected objects from in-memory cache
https://github.com/ceph/ceph/pull/23594 Nathan Cutler
01:22 PM Bug #18844 (Pending Backport): import-diff failed: (33) Numerical argument out of domain - if ima...
Mykola Golub

04/08/2018

08:31 PM Bug #23597 (In Progress): fsx writethrough test case failures
Per Mykola:... Jason Dillaman
08:05 PM Bug #23597 (Resolved): fsx writethrough test case failures
http://qa-proxy.ceph.com/teuthology/yuriw-2018-04-05_21:02:08-rbd-wip-yuriw-master-4.5.18-distro-basic-smithi/2358072... Jason Dillaman
08:24 PM Bug #23548 (Pending Backport): Discard ops should flush affected objects from in-memory cache
Mykola Golub
08:03 PM Bug #21815 (Resolved): librbd: cannot copy all image-metas if we have more than 64 key/value pairs
Nathan Cutler
08:03 PM Backport #22394 (Resolved): jewel: librbd: cannot copy all image-metas if we have more than 64 ke...
Nathan Cutler
04:37 PM Backport #23546 (Resolved): jewel: "Message too long" error when appending journal
Nathan Cutler
04:26 PM Backport #23543 (Resolved): jewel: rbd-nbd: EBUSY when do map
Nathan Cutler
04:25 PM Bug #21814 (Resolved): librbd: cannot clone all image-metas if we have more than 64 key/value pairs
Nathan Cutler
04:25 PM Backport #22396 (Resolved): jewel: librbd: cannot clone all image-metas if we have more than 64 k...
Nathan Cutler
04:24 PM Bug #21319 (Resolved): [cli] mirror "getter" commands will fail if mirroring has never been enabled
Nathan Cutler
04:24 PM Backport #21442 (Resolved): jewel: [cli] mirror "getter" commands will fail if mirroring has neve...
Nathan Cutler
04:23 PM Bug #20571 (Resolved): rbd-mirror: cluster watcher should ignore -EPERM errors against reading 'r...
Nathan Cutler
04:23 PM Backport #20637 (Resolved): jewel: rbd-mirror: cluster watcher should ignore -EPERM errors agains...
Nathan Cutler
04:23 PM Bug #21670 (Resolved): Possible deadlock in 'list_children' when refresh is required
Nathan Cutler
04:22 PM Backport #21689 (Resolved): jewel: Possible deadlock in 'list_children' when refresh is required
Nathan Cutler
04:22 PM Bug #21894 (Resolved): [rbd-mirror] peer cluster connections should filter out command line optio...
Nathan Cutler
04:22 PM Backport #21915 (Resolved): jewel: [rbd-mirror] peer cluster connections should filter out comman...
Nathan Cutler
04:21 PM Bug #22716 (Resolved): rbd snap create/rm takes 60s long
Nathan Cutler
04:20 PM Backport #22810 (Resolved): jewel: rbd snap create/rm takes 60s long
Nathan Cutler
04:20 PM Bug #21797 (Resolved): [object map] removing a large image (~100TB) with an object map may result...
Nathan Cutler
04:20 PM Backport #21867 (Resolved): jewel: [object map] removing a large image (~100TB) with an object ma...
Nathan Cutler
04:19 PM Bug #23285 (Resolved): parent blocks are still seen after a whole-object discard
Nathan Cutler
04:19 PM Backport #23305 (Resolved): jewel: parent blocks are still seen after a whole-object discard
Nathan Cutler
04:18 PM Backport #23525 (Resolved): jewel: is_qemu_running in qemu_rebuild_object_map.sh and qemu_dynamic...
Nathan Cutler
04:17 PM Bug #22945 (Resolved): [journal] allocating a new tag after acquiring the lock should use on-disk...
Nathan Cutler
04:16 PM Backport #23012 (Resolved): jewel: [journal] allocating a new tag after acquiring the lock should...
Nathan Cutler
04:15 PM Bug #22485 (Resolved): [test] rbd-mirror split brain test case can have a false-positive failure ...
Nathan Cutler
04:15 PM Backport #22578 (Resolved): jewel: [test] rbd-mirror split brain test case can have a false-posit...
Nathan Cutler
04:12 PM Backport #23508 (Resolved): jewel: test_admin_socket.sh may fail on wait_for_clean
Nathan Cutler
04:11 PM Bug #23068 (Resolved): TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
Nathan Cutler
03:29 PM Backport #23153 (Resolved): jewel: TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
Nathan Cutler

04/05/2018

10:49 PM Bug #22872: "rbd trash purge --threshold" should support data pool
@Mahati: I would think it should just be implicit. It could loop through all the trashed images whose deferment end t... Jason Dillaman
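The existing invocation that this request would extend (pool name and threshold value are hypothetical):

    rbd trash purge mypool --threshold 0.5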

04/04/2018

06:56 PM Bug #18844 (Fix Under Review): import-diff failed: (33) Numerical argument out of domain - if ima...
*PR*: https://github.com/ceph/ceph/pull/21249 Jason Dillaman
06:15 PM Bug #18844 (In Progress): import-diff failed: (33) Numerical argument out of domain - if image si...
If the 'rbd export-diff' is corrupt, it can result in that error:... Jason Dillaman
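The diff pipeline in which a corrupt stream would surface as that error (image, snapshot, and pool names are hypothetical):

    rbd export-diff --from-snap snap1 mypool/myimage@snap2 - | \
        rbd import-diff - backuppool/myimage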
05:20 PM Bug #23548 (Fix Under Review): Discard ops should flush affected objects from in-memory cache
*PR*: https://github.com/ceph/ceph/pull/21248 Jason Dillaman
01:05 AM Bug #23548 (Resolved): Discard ops should flush affected objects from in-memory cache
When using the in-memory cache in writeback mode, it's possible that an overlapping discard immediately after a write... Jason Dillaman
09:39 AM Bug #22872: "rbd trash purge --threshold" should support data pool
Does it mean supporting an option like:
rbd trash purge <pool-name> --threshold '<x>' --data-pool=<data-pool-name>
...
Mahati Chamarthy
06:47 AM Feature #23550 (Resolved): [group]add rbd group snap rollback CLI/API
Maybe we need a `group snap rollback` method. wb song
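A hypothetical shape for the requested command; no such CLI existed at the time of this request (all names are illustrative):

    rbd group snap rollback mypool/mygroup --snap mysnap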
06:08 AM Backport #23543 (In Progress): jewel: rbd-nbd: EBUSY when do map
https://github.com/ceph/ceph/pull/21232 Prashant D
02:41 AM Backport #23542 (In Progress): luminous: rbd-nbd: EBUSY when do map
https://github.com/ceph/ceph/pull/21230 Prashant D

04/03/2018

11:35 PM Backport #22396 (In Progress): jewel: librbd: cannot clone all image-metas if we have more than 6...
Jason Dillaman
09:08 AM Backport #22396 (Need More Info): jewel: librbd: cannot clone all image-metas if we have more tha...
non-trivial backport - not clear what is to be done, since the master PR touches files [1] that don't exist in jewel
...
Nathan Cutler
11:10 PM Backport #21442 (In Progress): jewel: [cli] mirror "getter" commands will fail if mirroring has n...
Jason Dillaman
10:01 PM Backport #20637 (In Progress): jewel: rbd-mirror: cluster watcher should ignore -EPERM errors aga...
Jason Dillaman
09:44 PM Backport #21689 (In Progress): jewel: Possible deadlock in 'list_children' when refresh is required
Jason Dillaman
09:41 PM Backport #21915 (In Progress): jewel: [rbd-mirror] peer cluster connections should filter out com...
Jason Dillaman
03:57 PM Backport #22810 (In Progress): jewel: rbd snap create/rm takes 60s long
Jason Dillaman
03:57 PM Backport #21867 (In Progress): jewel: [object map] removing a large image (~100TB) with an object...
Jason Dillaman
03:56 PM Backport #23305 (In Progress): jewel: parent blocks are still seen after a whole-object discard
Jason Dillaman
09:21 AM Backport #23305 (Need More Info): jewel: parent blocks are still seen after a whole-object discard
backport request unclear - master PR touches code that doesn't exist in jewel Nathan Cutler
02:15 PM Backport #23545 (In Progress): luminous: "Message too long" error when appending journal
Nathan Cutler
11:56 AM Backport #23545 (Resolved): luminous: "Message too long" error when appending journal
https://github.com/ceph/ceph/pull/21216 Nathan Cutler
02:13 PM Backport #23546 (In Progress): jewel: "Message too long" error when appending journal
Nathan Cutler
11:56 AM Backport #23546 (Resolved): jewel: "Message too long" error when appending journal
https://github.com/ceph/ceph/pull/21215 Nathan Cutler
01:21 PM Cleanup #17891 (Resolved): Creation of rbd image with format 1 should be disallowed
Jason Dillaman
11:56 AM Backport #23543 (Resolved): jewel: rbd-nbd: EBUSY when do map
https://github.com/ceph/ceph/pull/21232 Nathan Cutler
11:56 AM Backport #23542 (Resolved): luminous: rbd-nbd: EBUSY when do map
https://github.com/ceph/ceph/pull/21230 Nathan Cutler
11:39 AM Bug #23528: rbd-nbd: EBUSY when do map
*PR*: https://github.com/ceph/ceph/pull/21142 Jason Dillaman
11:39 AM Bug #23528 (Pending Backport): rbd-nbd: EBUSY when do map
Jason Dillaman
11:38 AM Bug #23526 (Pending Backport): "Message too long" error when appending journal
Jason Dillaman
09:23 AM Backport #23525 (In Progress): jewel: is_qemu_running in qemu_rebuild_object_map.sh and qemu_dyna...
Nathan Cutler
09:17 AM Backport #23012 (In Progress): jewel: [journal] allocating a new tag after acquiring the lock sho...
Nathan Cutler
09:13 AM Backport #22578 (In Progress): jewel: [test] rbd-mirror split brain test case can have a false-po...
Nathan Cutler
09:00 AM Backport #22394 (In Progress): jewel: librbd: cannot copy all image-metas if we have more than 64...
Nathan Cutler
03:26 AM Backport #23524 (In Progress): luminous: is_qemu_running in qemu_rebuild_object_map.sh and qemu_d...
https://github.com/ceph/ceph/pull/21192 Prashant D
03:18 AM Bug #23516: [rbd-mirror] entries_behind_master will not be zero after mirror over
Thank you! This method can flush it. But now I need to write a script to automatically judge whether the mirroring is finished, so I wonder ... liuzhong chen

04/02/2018

03:48 PM Bug #23516: [rbd-mirror] entries_behind_master will not be zero after mirror over
We'll get this status artifact fixed. You can use the admin socket of the rbd-mirror daemon to force a flush if desir... Jason Dillaman
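Listing the daemon's admin socket commands to find the flush operation (the socket path is an assumption based on common defaults):

    ceph daemon /var/run/ceph/ceph-client.rbd-mirror.a.asok help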
02:17 AM Bug #23516: [rbd-mirror] entries_behind_master will not be zero after mirror over
Another question: if I use fio to write for 60s, is there any other parameter to indicate that the mirroring is finished?
...
liuzhong chen
02:09 AM Bug #23516: [rbd-mirror] entries_behind_master will not be zero after mirror over
Thank you! @Jason Dillaman
I have another two questions. I find the non-primary image's used volume is equal to the primary ...
liuzhong chen

03/31/2018

07:14 PM Bug #23526 (Fix Under Review): "Message too long" error when appending journal
PR: https://github.com/ceph/ceph/pull/21157 Mykola Golub

03/30/2018

08:36 PM Bug #22932 (Resolved): [rbd-mirror] infinite loop is possible when formatting the status message
Nathan Cutler
08:35 PM Backport #22965 (Resolved): jewel: [rbd-mirror] infinite loop is possible when formatting the sta...
Nathan Cutler
08:35 PM Bug #11502 (Resolved): "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basic-mult run
Nathan Cutler
08:34 PM Backport #23065 (Resolved): jewel: "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basic-m...
Nathan Cutler
08:33 PM Bug #22120 (Resolved): possible deadlock in various maintenance operations
Nathan Cutler
08:33 PM Backport #22175 (Resolved): jewel: possible deadlock in various maintenance operations
Nathan Cutler
02:08 PM Bug #23528 (Resolved): rbd-nbd: EBUSY when do map
When doing rbd-nbd map, if the Ceph service is not available,
the code will wait on rados.connect(), unless killing...
Li Wang
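A hedged sketch of the reported scenario (pool and image names are hypothetical): with the cluster unreachable, the map blocks inside rados.connect(), and killing it can leave the nbd device claimed:

    sudo rbd-nbd map mypool/myimage   # hangs while the cluster is down; killed by the user
    sudo rbd-nbd map mypool/myimage   # a retry then fails with EBUSY, per this report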
02:00 PM Bug #23516: [rbd-mirror] entries_behind_master will not be zero after mirror over
This is actually just an artifact of IO batching within rbd-mirror. The commit position is only updated after every 3... Jason Dillaman
02:57 AM Bug #23516 (Resolved): [rbd-mirror] entries_behind_master will not be zero after mirror over
I have two ceph-12.2.4 clusters. The rbd-mirror daemon runs on cluster two. poolclz/rbdclz on cluster one is the primary image ... liuzhong chen
07:17 AM Bug #23526 (Resolved): "Message too long" error when appending journal
When appending to a journal object the number of appends sent in one rados operation is not limited and we may hit os... Mykola Golub
05:04 AM Backport #23525 (Resolved): jewel: is_qemu_running in qemu_rebuild_object_map.sh and qemu_dynamic...
https://github.com/ceph/ceph/pull/21207 Nathan Cutler
05:04 AM Backport #23524 (Resolved): luminous: is_qemu_running in qemu_rebuild_object_map.sh and qemu_dyna...
https://github.com/ceph/ceph/pull/21192 Nathan Cutler

03/29/2018

08:22 PM Feature #23515 (New): [api] lock_acquire should expose setting the optional lock description
This lock description can be used by iSCSI to describe the lock owner in a manner which can be programmatically interp... Jason Dillaman
08:20 PM Feature #23514 (New): [api] image-meta needs to support compare-and-write operation
iSCSI would like to re-use image-meta to store port state and persistent group reservations. In the case of PGRs, it ... Jason Dillaman
07:45 PM Bug #23502 (Pending Backport): is_qemu_running in qemu_rebuild_object_map.sh and qemu_dynamic_fea...
Jason Dillaman
07:01 PM Bug #23502 (Fix Under Review): is_qemu_running in qemu_rebuild_object_map.sh and qemu_dynamic_fea...
PR: https://github.com/ceph/ceph/pull/21131 Mykola Golub
10:22 AM Bug #23502 (In Progress): is_qemu_running in qemu_rebuild_object_map.sh and qemu_dynamic_features...
Mykola Golub
08:34 AM Bug #23502 (Resolved): is_qemu_running in qemu_rebuild_object_map.sh and qemu_dynamic_features.sh...
http://qa-proxy.ceph.com/teuthology/smithfarm-2018-03-28_12:50:32-rbd-wip-jewel-backports-distro-basic-smithi/2330979... Mykola Golub
03:44 PM Bug #23512 (Resolved): Allow removal of RBD images even if the journal is corrupt
Allow removal of RBD images even if the journal is corrupt
Red Hat bug - https://bugzilla.redhat.com/show_bug.cgi?id...
Vikhyat Umrao
02:55 PM Bug #18768 (Closed): rbd rm on empty volumes 2/3 sec per volume
Jason Dillaman
02:26 PM Bug #18768: rbd rm on empty volumes 2/3 sec per volume
This issue was submitted against kernel RBD in Ceph Jewel, but the kernel RBD implementation has changed. The object... Ben England
12:25 PM Backport #23508 (In Progress): jewel: test_admin_socket.sh may fail on wait_for_clean
Nathan Cutler
12:22 PM Backport #23508 (Resolved): jewel: test_admin_socket.sh may fail on wait_for_clean
https://github.com/ceph/ceph/pull/21125 Nathan Cutler
12:24 PM Backport #23507 (In Progress): luminous: test_admin_socket.sh may fail on wait_for_clean
Nathan Cutler
12:22 PM Backport #23507 (Resolved): luminous: test_admin_socket.sh may fail on wait_for_clean
https://github.com/ceph/ceph/pull/21124 Nathan Cutler
12:18 PM Bug #23499 (Pending Backport): test_admin_socket.sh may fail on wait_for_clean
Jason Dillaman
09:40 AM Bug #23499 (Fix Under Review): test_admin_socket.sh may fail on wait_for_clean
PR: https://github.com/ceph/ceph/pull/21116 Mykola Golub
08:11 AM Bug #23499 (Resolved): test_admin_socket.sh may fail on wait_for_clean
See e.g: http://qa-proxy.ceph.com/teuthology/smithfarm-2018-03-28_12:50:32-rbd-wip-jewel-backports-distro-basic-smith... Mykola Golub
12:14 PM Feature #23505: rbd zero copy
Hi Jason,
do you mean the following commands?
rbd snap create
rbd snap protect
rbd clone
rbd flatten
rbd s...
Stefan Kooman
11:54 AM Feature #23505: rbd zero copy
This is basically implemented via "rbd clone". Jason Dillaman
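The clone-based sequence behind that suggestion, spelled out (pool and image names are hypothetical):

    rbd snap create mypool/src@base
    rbd snap protect mypool/src@base
    rbd clone mypool/src@base mypool/dst
    # optional: flatten detaches the clone from its parent, but copies the data
    rbd flatten mypool/dst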
10:03 AM Feature #23505 (Rejected): rbd zero copy
rbd zero copy would just ask the Ceph cluster to "copy $rbd image" to a new image. This saves a lot of bandwidth back ... Stefan Kooman
 
