Activity

From 02/01/2017 to 03/02/2017

03/02/2017

07:52 PM Bug #19130 (Fix Under Review): Enabling mirroring for a pool with clones may fail
PR: https://github.com/ceph/ceph/pull/13752 Mykola Golub
01:53 PM Bug #19130 (Resolved): Enabling mirroring for a pool with clones may fail
When enabling RBD mirroring within a pool (rbd.mirror_mode_set(ioctx, RBD_MIRROR_MODE_POOL)), it tries to enable mirr... Mykola Golub
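The report above quotes the Python binding call rbd.mirror_mode_set(ioctx, RBD_MIRROR_MODE_POOL). A minimal sketch of that call, assuming the default config path and a pool named 'rbd' (both placeholders, not from the ticket):
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')   # pool assumed to already contain clones
    try:
        # #19130: this call may fail when the pool contains cloned images
        rbd.RBD().mirror_mode_set(ioctx, rbd.RBD_MIRROR_MODE_POOL)
    finally:
        ioctx.close()
        cluster.shutdown()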
07:47 PM Cleanup #19104 (Fix Under Review): [test] librados_test_stub should support multiple connections
*PR*: https://github.com/ceph/ceph/pull/13737 Jason Dillaman
07:47 PM Cleanup #19010 (Resolved): Simplify asynchronous image close behavior
Jason Dillaman
06:51 PM Subtask #18784 (Resolved): rbd-mirror A/A: leader should track up/down rbd-mirror instances
Mykola Golub
06:46 AM Bug #19128 (Resolved): rbd import needs to sanity check auto-generated image name
I see an issue in qa:
[root@lab8106 ~]# rbd import /bin/ls ls@snap
rbd: destination snapname specified for a command that d...
peng zhang

03/01/2017

07:31 PM Bug #18938: Unable to build 11.2.0 under i686
Hello,
I have almost the same problem, but on an ARM platform....
Romain Gobinet
07:15 PM Feature #19123 (New): rbd/rados drivers in PyPI repo
We need to install modules wholly contained within a Python virtualenv. As of now, we're extracting compiled dri... Brian Andrus

02/28/2017

08:19 PM Cleanup #19104 (In Progress): [test] librados_test_stub should support multiple connections
Jason Dillaman
03:02 AM Cleanup #19104 (Resolved): [test] librados_test_stub should support multiple connections
For tests where client ids need to be unique or where blacklisting is required, the librados_test_stub should be able... Jason Dillaman
05:00 PM Cleanup #19010 (Fix Under Review): Simplify asynchronous image close behavior
*PR*: https://github.com/ceph/ceph/pull/13701 Jason Dillaman
02:24 PM Feature #19034 (In Progress): [rbd CLI] import-diff should use concurrent writes
Venky Shankar
01:26 PM Bug #19108 (Fix Under Review): rbd-nbd: prompt message when input nbds_max, and nbd module alread...
PR: https://github.com/ceph/ceph/pull/13694 Mykola Golub
12:58 PM Bug #19108 (Resolved): rbd-nbd: prompt message when input nbds_max, and nbd module already loaded.
When the user specifies --nbds_max, rbd-nbd will try to load the nbd module and set the nbds_max parameter. But if the nbd module is al... Pan Liu
07:30 AM Subtask #18784 (Fix Under Review): rbd-mirror A/A: leader should track up/down rbd-mirror instances
PR: https://github.com/ceph/ceph/pull/13571 Mykola Golub
07:29 AM Subtask #18783 (Resolved): rbd-mirror A/A: InstanceWatcher watch/notify stub for leader/follower RPC
Mykola Golub
03:15 AM Bug #18888 (Pending Backport): rbd_clone_copy_on_read ineffective with exclusive-lock
Venky Shankar

02/27/2017

06:41 PM Bug #19081: rbd: refuse to use an ec pool that doesn't support overwrites
Yes, for luminous I think we'll have that flag still - mainly because it's a really bad idea to enable on filestore, ... Josh Durgin
02:08 PM Bug #19081: rbd: refuse to use an ec pool that doesn't support overwrites
@Josh: do you also envision that users will need to set that flag in Luminous -- or should EC overwrites just work ou... Jason Dillaman
01:59 PM Fix #19091 (Need More Info): rbd: rbd du cmd calc total volume is smaller than used
Looks like this was an unintended consequence of commit 1ccdcb5b6c1cfd176a86df4f115a88accc81b4d0. Jason Dillaman
08:37 AM Fix #19091 (Rejected): rbd: rbd du cmd calc total volume is smaller than used
The result of rbd du on a snapshot image is good, but rbd du on the original image seems unreasonable, because the PROVISI... Tang Jin
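For reference, a rough sketch of how USED vs. PROVISIONED can be derived through the Python librbd binding; the pool and image names are placeholders, and this only illustrates the kind of accounting "rbd du" performs, not its actual implementation:
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')
    try:
        with rbd.Image(ioctx, 'test-image') as image:   # placeholder image
            used = [0]

            def count(offset, length, exists):
                if exists:
                    used[0] += length

            # Walk the allocated extents of the image head (no starting snapshot)
            image.diff_iterate(0, image.size(), None, count)
            print('PROVISIONED: %d bytes, USED: %d bytes' % (image.size(), used[0]))
    finally:
        ioctx.close()
        cluster.shutdown()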

02/24/2017

10:12 PM Bug #19081: rbd: refuse to use an ec pool that doesn't support overwrites
The flag will stick around for luminous. In the future if all ec pools supported overwrites, the flag would just alwa... Josh Durgin
09:06 PM Bug #19081 (Need More Info): rbd: refuse to use an ec pool that doesn't support overwrites
@Josh: what's the API for determining if that flag is set? Is that flag only valid for Kraken? Jason Dillaman
09:00 PM Bug #19081 (Resolved): rbd: refuse to use an ec pool that doesn't support overwrites
When using an ec data pool that does not have the overwrites flag set, librbd ends up hitting an assert in the i/o pa... Josh Durgin
07:28 AM Feature #19073 (Duplicate): rbd: support namespace
Support namespaces in RBD. A design is described at the link below.
http://pad.ceph.com/p/rbd_namespace
Yang Dongsheng
03:28 AM Feature #19072: rbd-fuse support rbd image snap
@jason dillaman Anonymous
03:26 AM Feature #19072 (New): rbd-fuse support rbd image snap
Currently, rbd-fuse does not support mounting an image snapshot.
We can add this feature to rbd-fuse.
Anonymous

02/23/2017

01:36 PM Bug #19057 (Won't Fix): krbd suite does not run on hammer (rbd task fails with "No route to host")
Reproducer: ... Nathan Cutler
11:05 AM Feature #18865: rbd: wipe data in disk in rbd removing
Okay, makes sense. Will investigate it more on the OSD side. Thanks.
Jason Dillaman wrote:
> @Yang: As I mentioned, th...
Yang Dongsheng
09:28 AM Bug #18990 (Pending Backport): [rbd-mirror] deleting a snapshot during sync can result in read er...
Mykola Golub

02/22/2017

07:10 PM Backport #19038 (In Progress): jewel: [rbd-mirror] deleting a snapshot during sync can result in ...
Jason Dillaman
06:51 PM Backport #19038 (Resolved): jewel: [rbd-mirror] deleting a snapshot during sync can result in rea...
https://github.com/ceph/ceph/pull/13596 Jason Dillaman
07:00 PM Backport #18215 (Closed): jewel: TestImageSync.SnapshotStress fails on bluestore
I would like to avoid backporting sparse object reads to jewel unless required. Jason Dillaman
07:00 PM Bug #18146 (Resolved): TestImageSync.SnapshotStress fails on bluestore
Jason Dillaman
07:00 PM Feature #16780 (Resolved): rbd-mirror: use sparse read during image sync
Jason Dillaman
07:00 PM Backport #17879 (Closed): jewel: rbd-mirror: use sparse read during image sync
I would like to avoid backporting sparse object reads to jewel unless required. Jason Dillaman
06:50 PM Backport #19037 (Resolved): kraken: rbd-mirror: deleting a snapshot during sync can result in rea...
https://github.com/ceph/ceph/pull/14622 Jason Dillaman
05:54 PM Bug #18884 (Resolved): systemctl stop rbdmap unmaps all rbds and not just the ones in /etc/ceph/r...
Jason Dillaman
03:42 PM Bug #19035 (Resolved): [rbd CLI] map with cephx disabled results in error message
... Jason Dillaman
03:38 PM Feature #19034 (Resolved): [rbd CLI] import-diff should use concurrent writes
The export, export-diff, and import commands all issue concurrent operations to the librbd API. The import-diff comma... Jason Dillaman
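As an illustration of the requested pattern (not the actual CLI code, which lives in the C++ rbd tool), a sketch of keeping several writes in flight through the Python binding's async API; the pool/image names and chunk sizing are assumptions:
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')
    try:
        with rbd.Image(ioctx, 'import-target') as image:   # assumed to be >= 32 MiB
            chunk = b'\0' * (4 * 1024 * 1024)
            completions = []
            for i in range(8):   # 8 concurrent writes instead of one at a time
                comp = image.aio_write(chunk, i * len(chunk), lambda c: None)
                completions.append(comp)
            image.flush()        # block until the outstanding writes complete
    finally:
        ioctx.close()
        cluster.shutdown()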
02:15 PM Bug #17251 (Resolved): Potential seg fault when blacklisting a client
Nathan Cutler
01:44 PM Backport #17261 (Resolved): jewel: Potential seg fault when blacklisting a client
The patch has been backported and merged into Jewel as a part of https://github.com/ceph/ceph/pull/12890, see https:/... Alexey Sheplyakov
01:34 PM Bug #17210 (Resolved): ImageWatcher: double unwatch of failed watch handle
Nathan Cutler
01:27 PM Backport #17242 (Resolved): jewel: ImageWatcher: double unwatch of failed watch handle
This one has been backported and merged into Jewel as a part of https://github.com/ceph/ceph/pull/12890, see https://... Alexey Sheplyakov

02/21/2017

08:57 PM Bug #18990 (Fix Under Review): [rbd-mirror] deleting a snapshot during sync can result in read er...
*PR*: https://github.com/ceph/ceph/pull/13568 Jason Dillaman
02:17 PM Backport #18668 (Resolved): kraken: [ FAILED ] TestLibRBD.ImagePollIO in upgrade:client-upgrade...
Mykola Golub
02:17 PM Backport #18703 (Resolved): kraken: Prevent librbd from blacklisting the in-use librados client
Mykola Golub

02/20/2017

08:03 PM Feature #13025 (Resolved): Add scatter/gather support to librbd C/C++ APIs
Mykola Golub
01:55 PM Cleanup #19010 (Resolved): Simplify asynchronous image close behavior
Currently, an image cannot be closed when invoked from the image's op work queue nor can the image's memory be releas... Jason Dillaman
10:59 AM Backport #18285 (Resolved): jewel: partition func should be enabled When load nbd.ko for rbd-nbd
Loïc Dachary

02/19/2017

11:56 PM Bug #18990 (Resolved): [rbd-mirror] deleting a snapshot during sync can result in read errors
Given an image with zero snapshots and some data written to object X, if you create a snapshot, start a full rbd-mirr... Jason Dillaman
07:57 PM Bug #18982: How to get out of weird situation after rbd flatten?
The affected Ceph version as assigned to the ticket: 0.94.7. Kernel (on Ceph hosts) is 4.4.27 (soon to be updated to ... Christian Theune
06:55 PM Bug #18982: How to get out of weird situation after rbd flatten?
Please provide the Ceph and kernel versions your cluster is running. Shinobu Kinjo

02/18/2017

10:59 PM Bug #18987 (Won't Fix): "[ FAILED ] TestLibRBD.ExclusiveLock" in upgrade:client-upgrade-kraken-...
Run: http://pulpito.ceph.com/teuthology-2017-02-17_22:07:49-upgrade:client-upgrade-kraken-distro-basic-smithi/
Job: ...
Yuri Weinstein
11:23 AM Feature #18984 (New): RFE: let rbd export write directly to a block device
It would be great if `rbd export` could write directly to a block device.
Right now it won't let you:
# rbd expo...
Ruben Kerkhof
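Until such an option exists, one possible workaround is to stream the image through the Python binding to whatever target is needed; the image name 'test' and the /dev/sdX output path below are placeholders, and root privileges would be required to write to a real device:
    import rados
    import rbd

    CHUNK = 4 * 1024 * 1024

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')
    try:
        with rbd.Image(ioctx, 'test', read_only=True) as image:   # placeholder image
            with open('/dev/sdX', 'wb') as out:                   # placeholder device
                offset, size = 0, image.size()
                while offset < size:
                    data = image.read(offset, min(CHUNK, size - offset))
                    out.write(data)
                    offset += len(data)
    finally:
        ioctx.close()
        cluster.shutdown()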

02/17/2017

10:21 PM Bug #18982 (Duplicate): How to get out of weird situation after rbd flatten?
Hope this is good for the tracker instead of the mailing list...
We have an image that was cloned from a snapshot:...
Christian Theune
02:49 PM Backport #18971 (Resolved): jewel: AdminSocket::bind_and_listen failed after rbd-nbd mapping
https://github.com/ceph/ceph/pull/14701 Loïc Dachary
02:49 PM Backport #18970 (Resolved): kraken: rbd: AdminSocket::bind_and_listen failed after rbd-nbd mapping
https://github.com/ceph/ceph/pull/14540 Loïc Dachary
07:54 AM Bug #17951 (Pending Backport): AdminSocket::bind_and_listen failed after rbd-nbd mapping
PR: https://github.com/ceph/ceph/pull/12433 Mykola Golub

02/16/2017

10:38 PM Bug #18963: rbd-mirror: forced failover does not function when peer is unreachable
The individual ImageReplayers are stuck in the STOPPING state, trying to stop the replay of the remote journal. Due t... Jason Dillaman
10:12 PM Bug #18963 (Resolved): rbd-mirror: forced failover does not function when peer is unreachable
When a local image is force promoted to primary, the local rbd-mirror daemon should detect that the local images are ... Jason Dillaman
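The forced-promotion step itself (the CLI's "rbd mirror image promote --force") can also be expressed through the Python binding; a sketch, with the image name being a placeholder:
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')
    try:
        with rbd.Image(ioctx, 'mirrored-image') as image:   # placeholder image
            image.mirror_image_promote(True)   # force promotion when the peer is unreachable
    finally:
        ioctx.close()
        cluster.shutdown()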

02/15/2017

11:55 PM Feature #13025: Add scatter/gather support to librbd C/C++ APIs
*PR*: https://github.com/ceph/ceph/pull/13447 Jason Dillaman
10:54 PM Backport #18948 (Resolved): jewel: rbd-mirror: additional test stability improvements
https://github.com/ceph/ceph/pull/14154 Loïc Dachary
10:54 PM Backport #18947 (Resolved): kraken: rbd-mirror: additional test stability improvements
https://github.com/ceph/ceph/pull/14155 Loïc Dachary
10:47 PM Backport #18556 (Resolved): jewel: Potential race when removing two-way mirroring image
Loïc Dachary
10:47 PM Backport #18608 (Resolved): jewel: Removing a clone that fails to open its parent might leave dan...
Loïc Dachary
02:41 PM Bug #18935 (Pending Backport): rbd-mirror: additional test stability improvements
Mykola Golub
12:56 AM Bug #18938 (Won't Fix): Unable to build 11.2.0 under i686
Hello,
The Ceph 11.2.0 tarball fails to build under the i686 architecture, while it succeeds under x86_64.
Here is my ...
Sebastien Luttringer

02/14/2017

08:59 PM Bug #18935 (Fix Under Review): rbd-mirror: additional test stability improvements
*PR*: https://github.com/ceph/ceph/pull/13421 Jason Dillaman
08:57 PM Bug #18935 (Resolved): rbd-mirror: additional test stability improvements
Jason Dillaman
09:02 AM Bug #18844: import-diff failed: (33) Numerical argument out of domain - if image size of the chil...
Whoops - I forgot that one line. It is basically the same as in the validate case.
These are all the steps to repro...
Bernhard J. M. Grün
07:34 AM Bug #18844: import-diff failed: (33) Numerical argument out of domain - if image size of the chil...
how do you create vms/test-larger? Anonymous

02/13/2017

09:32 PM Documentation #17978 (Resolved): Wrong diskcache parameter name for OpenStack Havana and Icehouse
Jason Dillaman
08:20 PM Documentation #17978: Wrong diskcache parameter name for OpenStack Havana and Icehouse
I've opened a pull request: https://github.com/ceph/ceph/pull/13403
@Jason: The documentation fix doesn't apply to...
Michael Eischer
03:12 PM Feature #18865: rbd: wipe data in disk in rbd removing
@Yang: As I mentioned, there is no way for librbd to overwrite snapshot objects -- they are read-only from the point-... Jason Dillaman
06:49 AM Feature #18865: rbd: wipe data in disk in rbd removing
Jason Dillaman wrote:
> @Yang: can you provide more background on your intended request use-case? If you are trying ...
Yang Dongsheng
01:52 PM Subtask #18785 (In Progress): rbd-mirror A/A: separate ImageReplayer handling from Replayer
Mykola Golub
10:44 AM Feature #18917 (New): rbd: show the latest snapshot in rbd info
When we do a snapshot rollback, we want to know which snapshot the current head is based on. Yang Dongsheng
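Until "rbd info" exposes this, a sketch of finding the most recently created snapshot via the Python binding (the entry with the highest snapshot id); the image name is a placeholder:
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')
    try:
        with rbd.Image(ioctx, 'test-image') as image:   # placeholder image
            snaps = list(image.list_snaps())            # dicts with id/name/size
            if snaps:
                latest = max(snaps, key=lambda s: s['id'])
                print('latest snapshot: %s' % latest['name'])
    finally:
        ioctx.close()
        cluster.shutdown()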
07:24 AM Backport #18911 (Resolved): jewel: rbd-nbd: check /sys/block/nbdX/size to ensure kernel mapped co...
Loïc Dachary
07:24 AM Backport #18910 (Resolved): kraken: rbd-nbd: check /sys/block/nbdX/size to ensure kernel mapped c...
https://github.com/ceph/ceph/pull/14540 Loïc Dachary
07:22 AM Backport #18893 (Resolved): jewel: Incomplete declaration for ContextWQ in librbd/Journal.h
https://github.com/ceph/ceph/pull/14152 Loïc Dachary
07:22 AM Backport #18892 (Resolved): kraken: Incomplete declaration for ContextWQ in librbd/Journal.h
https://github.com/ceph/ceph/pull/14153 Loïc Dachary

02/12/2017

05:45 AM Bug #18888 (Fix Under Review): rbd_clone_copy_on_read ineffective with exclusive-lock
PR: https://github.com/ceph/ceph/pull/13196 Venky Shankar
05:10 AM Bug #18888 (In Progress): rbd_clone_copy_on_read ineffective with exclusive-lock
Venky Shankar
05:10 AM Bug #18888 (Resolved): rbd_clone_copy_on_read ineffective with exclusive-lock
With layering+exclusive-lock feature, rbd_clone_copy_on_read does not trigger object copyups from parent image. This ... Venky Shankar

02/11/2017

02:29 PM Feature #18335 (Pending Backport): rbd-nbd: check /sys/block/nbdX/size to ensure kernel mapped co...
Jason Dillaman

02/10/2017

06:12 PM Bug #18884: systemctl stop rbdmap unmaps all rbds and not just the ones in /etc/ceph/rbdmap
I have a fix which uses an RBDMAP_UNMAP_ALL parameter in /etc/sysconfig/ceph to control whether all RBD images (if "y... David Disseldorp
06:04 PM Bug #18884 (Resolved): systemctl stop rbdmap unmaps all rbds and not just the ones in /etc/ceph/r...
Copy of downstream bug report:
When stopping the rbdmap service, it unmaps ALL mapped RBDs instead of just unmapping t...
David Disseldorp
02:24 PM Bug #17913: librbd io deadlock after host lost network connectivity
@Dan van der Ster:
If you can install all necessary debug packages and get a complete gdb core backtrace via "thre...
Jason Dillaman
02:22 PM Bug #18839 (Resolved): fsx segfault on clone op
Jason Dillaman
02:19 PM Documentation #17978: Wrong diskcache parameter name for OpenStack Havana and Icehouse
@Michael: note that Icehouse and Havana are both EOLed by the upstream community. Does this issue apply to Grizzly+ r... Jason Dillaman
10:44 AM Documentation #17978: Wrong diskcache parameter name for OpenStack Havana and Icehouse
@Michael: Can you open a PR at https://github.com/ceph/ceph with your proposed fix? The documentation is under doc/ Nathan Cutler
09:58 AM Documentation #17978: Wrong diskcache parameter name for OpenStack Havana and Icehouse
*Ping* Any progress on this? Michael Eischer
02:17 PM Feature #18865 (Need More Info): rbd: wipe data in disk in rbd removing
@Yang: can you provide more background on your intended request use-case? If you are trying to implement a secure del... Jason Dillaman
01:59 PM Bug #18862 (Pending Backport): Incomplete declaration for ContextWQ in librbd/Journal.h
*PR*: https://github.com/ceph/ceph/pull/13322 Jason Dillaman

02/09/2017

10:22 AM Feature #18864: rbd export/import for consistent group
It should be a feature, not a bug. Yang Dongsheng
10:15 AM Feature #18864 (New): rbd export/import for consistent group
Yang Dongsheng
10:18 AM Feature #18865: rbd: wipe data in disk in rbd removing
It should be a feature instead of a bug. Yang Dongsheng
10:16 AM Feature #18865 (Rejected): rbd: wipe data in disk in rbd removing
Yang Dongsheng
10:14 AM Feature #18863 (New): rbd export/import improvement.
Add a snapshot timestamp to each diff, and add a CRC check for it. Yang Dongsheng
09:25 AM Bug #18862 (Fix Under Review): Incomplete declaration for ContextWQ in librbd/Journal.h
PR: https://github.com/ceph/ceph/pull/13322 Mykola Golub
08:43 AM Bug #18862 (Resolved): Incomplete declaration for ContextWQ in librbd/Journal.h
There is an incomplete declaration for ContextWQ and we call its method in Journal<I>::MetadataListener::handle_updat... Boris Ranto

02/08/2017

07:31 PM Subtask #18753 (In Progress): rbd-mirror HA: create teuthology thrasher for rbd-mirror
Mykola Golub
02:18 PM Subtask #18784 (In Progress): rbd-mirror A/A: leader should track up/down rbd-mirror instances
Mykola Golub
02:17 PM Subtask #18783 (Fix Under Review): rbd-mirror A/A: InstanceWatcher watch/notify stub for leader/f...
PR: https://github.com/ceph/ceph/pull/13312 Mykola Golub

02/07/2017

12:14 PM Bug #18844 (Resolved): import-diff failed: (33) Numerical argument out of domain - if image size ...
*Steps to set up the test case (create a basic image):*
rbd create vms/test -s 1G
rbd snap create vms/test@snap
rbd...
Bernhard J. M. Grün
03:40 AM Bug #18839: fsx segfault on clone op
fixed by:
https://github.com/ceph/ceph/pull/13287
Hecheng Gui
03:34 AM Bug #18839 (Resolved): fsx segfault on clone op
exec:
./ceph_test_librbd_fsx -N 1000 rbd fsx -d
segfault:
123 write 0x2398d thru 0x2b8d9 (0x7f4d bytes)...
Hecheng Gui

02/06/2017

02:12 PM Subtask #18783 (In Progress): rbd-mirror A/A: InstanceWatcher watch/notify stub for leader/follow...
Mykola Golub
01:46 PM Bug #18832 (Won't Fix): "SubsystemMap.h: 62: FAILED assert(sub < m_subsys.size())" in upgrade:cli...
Run: http://pulpito.ceph.com/teuthology-2017-02-04_11:45:02-upgrade:client-upgrade-kraken-distro-basic-smithi/
Job: ...
Yuri Weinstein
09:36 AM Bug #17913: librbd io deadlock after host lost network connectivity
Hi Jason -- our security officer is hesitating to let me post the machine memory dump. Could we meet on IRC and I can... Dan van der Ster

02/05/2017

11:26 PM Backport #18823 (Resolved): jewel: run-rbd-unit-tests.sh assert in lockdep_will_lock, TestLibRBD....
https://github.com/ceph/ceph/pull/14150 Nathan Cutler
11:26 PM Backport #18822 (Resolved): kraken: run-rbd-unit-tests.sh assert in lockdep_will_lock, TestLibRBD...
https://github.com/ceph/ceph/pull/14151 Nathan Cutler

02/04/2017

02:13 PM Bug #17447 (Pending Backport): run-rbd-unit-tests.sh assert in lockdep_will_lock, TestLibRBD.Obje...
Mykola Golub

02/03/2017

11:24 PM Bug #18733 (Rejected): test_rbd.TestImage.test_block_name_prefix and test_rbd.TestImage.test_id f...
Nathan Cutler
01:53 PM Bug #17913: librbd io deadlock after host lost network connectivity
@Dan van der Ster:
Please use the "ceph-post-file" utility to upload the core dump along with a listing of install...
Jason Dillaman
10:48 AM Bug #17913: librbd io deadlock after host lost network connectivity
It happened again:... Dan van der Ster
12:46 PM Backport #18456 (In Progress): kraken: Attempting to remove an image w/ incompatible features res...
Nathan Cutler
12:45 PM Backport #18454 (In Progress): hammer: Attempting to remove an image w/ incompatible features res...
Nathan Cutler
12:15 PM Backport #18776 (In Progress): kraken: Qemu crash triggered by network issues
Nathan Cutler
12:14 PM Backport #18775 (In Progress): jewel: Qemu crash triggered by network issues
Nathan Cutler
12:08 PM Backport #18774 (In Progress): hammer: Qemu crash triggered by network issues
Nathan Cutler
12:03 PM Backport #14824 (Need More Info): hammer: rbd and pool quota do not go well together
Nathan Cutler

02/02/2017

06:25 PM Bug #18768: rbd rm on empty volumes 2/3 sec per volume
But you were totally correct about the object-map feature - this cuts the removal time from approx 0.4 sec to approx 0.12 ... Ben England
06:06 PM Bug #18768: rbd rm on empty volumes 2/3 sec per volume
@Ben: I wasn't suggesting that "--no-progress" would improve speed; I was responding to your *strong* opinions.
Fo...
Jason Dillaman
06:01 PM Bug #18768: rbd rm on empty volumes 2/3 sec per volume
While --no-progress worked, it didn't help. And when I tried to follow your suggestion:
# rbd create --size 10G -...
Ben England
01:28 PM Backport #18556 (In Progress): jewel: Potential race when removing two-way mirroring image
Nathan Cutler
01:15 PM Bug #16179 (Resolved): rbd-mirror: image sync object map reload logs message
Nathan Cutler
01:15 PM Bug #18440 (Resolved): [teuthology] update "rbd/singleton/all/formatted-output.yaml" to support c...
Nathan Cutler
01:14 PM Bug #18261 (Resolved): rbd status: json format has duplicated/overwritten key
Nathan Cutler
01:14 PM Bug #18242 (Resolved): rbd-nbd: invalid error code for "failed to read nbd request" messages
Nathan Cutler
01:13 PM Bug #18068 (Resolved): diff calculate can hide parent extents when examining first snapshot in clone
Nathan Cutler
01:13 PM Bug #16176 (Resolved): objectmap does not show object existence correctly
Nathan Cutler
01:12 PM Bug #17973 (Resolved): "FAILED assert(m_processing == 0)" while running test_lock_fence.sh
Nathan Cutler
01:10 PM Bug #18200 (Resolved): RBD diff got SIGABRT with "--whole-object" for RBD whose parent also have ...
Nathan Cutler
01:10 PM Cleanup #16985 (Resolved): Improve error reporting from "rbd feature enable/disable"
Nathan Cutler
10:16 AM Feature #18335 (Fix Under Review): rbd-nbd: check /sys/block/nbdX/size to ensure kernel mapped co...
PR: https://github.com/ceph/ceph/pull/13229 Mykola Golub
06:28 AM Feature #18594 (Resolved): [teuthology] integrate OpenStack 'gate-tempest-dsvm-full-devstack-plug...
Mykola Golub
12:03 AM Subtask #18789 (Resolved): rbd-mirror A/A: coordinate image syncs with leader
The follower instances should send a "sync start" request to the leader before starting a full image sync. If there a... Jason Dillaman
12:01 AM Subtask #18788 (Resolved): rbd-mirror A/A: integrate distribution policy with proxied InstanceRep...
The leader should map each image via the distribution policy to an up remote instance. For each remote instance, the ... Jason Dillaman
12:00 AM Subtask #18787 (Resolved): rbd-mirror A/A: proxy InstanceReplayer APIs via InstanceWatcher RPC
The leader would instantiate a proxy of InstanceReplayer that invokes InstanceWatcher notification methods for the sp... Jason Dillaman
12:00 AM Subtask #18786 (Resolved): rbd-mirror A/A: create simple image distribution policy
The simple distribution policy should just attempt to assign <number of images> / <number of up instances> to each rb... Jason Dillaman
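An illustration of that simple policy (a made-up helper, not the rbd-mirror C++ implementation): assign roughly <number of images> / <number of up instances> images to each instance.
    def distribute_images(global_image_ids, up_instance_ids):
        # Round-robin images across the up instances.
        mapping = {instance: [] for instance in up_instance_ids}
        for i, image_id in enumerate(sorted(global_image_ids)):
            mapping[up_instance_ids[i % len(up_instance_ids)]].append(image_id)
        return mapping

    # Example: 5 images across 2 up instances -> a 3/2 split.
    print(distribute_images(['img%d' % i for i in range(5)], ['inst-a', 'inst-b']))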
12:00 AM Subtask #18785 (Resolved): rbd-mirror A/A: separate ImageReplayer handling from Replayer
Create a new interface (i.e. InstanceReplayerInterface) that has API methods for acquiring and releasing images by glo... Jason Dillaman

02/01/2017

11:59 PM Subtask #18784 (Resolved): rbd-mirror A/A: leader should track up/down rbd-mirror instances
After acquiring the lock, the leader should read the "rbd_mirror_instances" mapping into memory. When the leader send... Jason Dillaman
11:59 PM Subtask #18783 (Resolved): rbd-mirror A/A: InstanceWatcher watch/notify stub for leader/follower RPC
On initialization of the pool Replayer, initialize a new InstanceWatcher that adds a record to "rbd_mirror_instances"... Jason Dillaman
10:19 PM Backport #18778 (Resolved): jewel: rbd --pool=x rename y z does not work
https://github.com/ceph/ceph/pull/14148 Nathan Cutler
10:19 PM Backport #18777 (Resolved): kraken: rbd --pool=x rename y z does not work
https://github.com/ceph/ceph/pull/14149 Nathan Cutler
10:19 PM Backport #18776 (Resolved): kraken: Qemu crash triggered by network issues
https://github.com/ceph/ceph/pull/13245 Nathan Cutler
10:19 PM Backport #18775 (Resolved): jewel: Qemu crash triggered by network issues
https://github.com/ceph/ceph/pull/13244 Nathan Cutler
10:19 PM Backport #18774 (Rejected): hammer: Qemu crash triggered by network issues
https://github.com/ceph/ceph/pull/13243 Nathan Cutler
10:19 PM Backport #18771 (Resolved): kraken: rbd: Improve compatibility between librbd + krbd for the data...
https://github.com/ceph/ceph/pull/14539 Nathan Cutler
10:18 PM Backport #18770 (Closed): jewel: [ FAILED ] TestJournalTrimmer.RemoveObjectsWithOtherClient
Nathan Cutler
10:18 PM Backport #18769 (Resolved): kraken: [ FAILED ] TestJournalTrimmer.RemoveObjectsWithOtherClient
https://github.com/ceph/ceph/pull/14147 Nathan Cutler
09:59 PM Bug #18768 (Need More Info): rbd rm on empty volumes 2/3 sec per volume
@Ben:
(1) "These are EMPTY volumes": while they are technically empty, when you create 10G images w/o the object ...
Jason Dillaman
09:36 PM Bug #18768 (Closed): rbd rm on empty volumes 2/3 sec per volume
speed of "rbd rm" command slows to 2/3 second when 10000 RBD volumes are being deleted, but "rbd create" remains belo... Ben England
09:37 PM Cleanup #18186 (Resolved): add max_part and nbds_max options in rbd nbd map, in order to keep con...
Nathan Cutler
09:36 PM Backport #18214 (Resolved): jewel: add max_part and nbds_max options in rbd nbd map, in order to ...
Nathan Cutler
09:14 PM Bug #17227 (Resolved): exclusive_lock::AcquireRequest doesn't handle -ERESTART on image::RefreshR...
Jason Dillaman
09:13 PM Backport #17340 (Resolved): jewel: exclusive_lock::AcquireRequest doesn't handle -ERESTART on ima...
Jason Dillaman
09:11 PM Backport #18337 (Resolved): jewel: Expose librbd API methods to directly acquire and release the ...
Jason Dillaman
08:47 PM Subtask #18767 (Closed): rbd-mirror A/A: rename Replayer to PoolReplayer
Jason Dillaman
08:41 PM Subtask #18767 (Closed): rbd-mirror A/A: rename Replayer to PoolReplayer
This is a better naming convention to denote that this class is responsible for handling pool-level replication. Jason Dillaman
08:47 PM Subtask #18766 (Closed): rbd-mirror A/A: track alive pool peers
Jason Dillaman
08:28 PM Subtask #18766 (Closed): rbd-mirror A/A: track alive pool peers
When the pool leader sends out its periodic heartbeat, the clients ack the message. Use the global id received in the... Jason Dillaman
08:42 PM Subtask #18327 (Resolved): [iscsi]: need an API to break the exclusive lock
Nathan Cutler
08:42 PM Backport #18453 (Resolved): jewel: [iscsi]: need an API to break the exclusive lock
Nathan Cutler
08:39 PM Backport #17261 (New): jewel: Potential seg fault when blacklisting a client
Nathan Cutler
08:39 PM Backport #17243 (New): jewel: Deadlock in several librbd teuthology test cases
Nathan Cutler
08:38 PM Backport #17817 (New): jewel: teuthology: upgrade:client-upgrade import_export.sh test fails
Nathan Cutler
08:38 PM Bug #16773 (Resolved): FAILED assert(m_image_ctx.journal == nullptr)
Nathan Cutler
08:38 PM Backport #17134 (Resolved): jewel: FAILED assert(m_image_ctx.journal == nullptr)
Nathan Cutler
08:14 PM Feature #18765 (Resolved): rbd-mirror: add support for active/active daemon instances
Phase 2:
See http://pad.ceph.com/p/rbd_mirror_scale
Jason Dillaman
05:47 PM Bug #18326 (Pending Backport): rbd --pool=x rename y z does not work
Jason Dillaman
04:42 PM Subtask #17020 (Resolved): rbd-mirror HA: pool replayer should be started/stopped when lock acqui...
Mykola Golub
04:41 PM Subtask #17019 (Resolved): rbd-mirror HA: create pool locker / leader class
Mykola Golub
04:41 PM Subtask #17018 (Resolved): rbd-mirror HA: add new lock released/acquired and heartbeat messages
Mykola Golub
11:50 AM Feature #18123 (Resolved): Need CLI ability to add, edit and remove omap values with binary keys
Nathan Cutler
11:33 AM Backport #18284 (Resolved): jewel: Need CLI ability to add, edit and remove omap values with bina...
Nathan Cutler
10:03 AM Bug #17913: librbd io deadlock after host lost network connectivity
This happened again after a network outage yesterday (again 0.94.9 librbd):... Dan van der Ster
 
