Activity
From 01/29/2017 to 02/27/2017
02/27/2017
- 06:41 PM Bug #19081: rbd: refuse to use an ec pool that doesn't support overwrites
- Yes, for luminous I think we'll have that flag still - mainly because it's a really bad idea to enable on filestore, ...
- 02:08 PM Bug #19081: rbd: refuse to use an ec pool that doesn't support overwrites
- @Josh: do you also envision that users will need to set that flag in Luminous -- or should EC overwrites just work ou...
- 01:59 PM Fix #19091 (Need More Info): rbd: rbd du cmd calc total volume is smaller than used
- Looks like this was an unintended consequence of commit 1ccdcb5b6c1cfd176a86df4f115a88accc81b4d0.
- 08:37 AM Fix #19091 (Rejected): rbd: rbd du cmd calc total volume is smaller than used
- The result of running rbd du on a snapshot image is good, but rbd du on the original image seems unreasonable, because the PROVISI...
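For reference, a minimal sequence to observe the reported accounting, assuming a small test image (names are illustrative):
rbd create vms/test -s 1G
rbd snap create vms/test@snap
rbd du vms/test   # compare USED and PROVISIONED for the image vs. its snapshot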
02/24/2017
- 10:12 PM Bug #19081: rbd: refuse to use an ec pool that doesn't support overwrites
- The flag will stick around for luminous. In the future if all ec pools supported overwrites, the flag would just alwa...
- 09:06 PM Bug #19081 (Need More Info): rbd: refuse to use an ec pool that doesn't support overwrites
- @Josh: what's the API for determining if that flag is set? Is that flag only valid for Kraken?
- 09:00 PM Bug #19081 (Resolved): rbd: refuse to use an ec pool that doesn't support overwrites
- When using an ec data pool that does not have the overwrites flag set, librbd ends up hitting an assert in the i/o pa...
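For context, a sketch of how an EC data pool would be prepared for rbd once overwrites can be enabled; the allow_ec_overwrites flag name follows the Luminous-era CLI, and pool/image names are illustrative:
ceph osd pool create ec_data 64 64 erasure
ceph osd pool set ec_data allow_ec_overwrites true   # a bad idea on filestore, per the comment above
rbd create --size 1G --data-pool ec_data rbd/test-image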
- 07:28 AM Feature #19073 (Duplicate): rbd: support namespace
- Support namespaces in rbd. A design is at the link below.
http://pad.ceph.com/p/rbd_namespace
- 03:28 AM Feature #19072: rbd-fuse support rbd image snap
- @jason dillaman
- 03:26 AM Feature #19072 (New): rbd-fuse support rbd image snap
- Currently, rbd-fuse does not support mounting image snapshots.
We can add this feature to rbd-fuse.
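For reference, current rbd-fuse usage exposes images only, not their snapshots (pool and mountpoint are illustrative):
rbd-fuse -p rbd /mnt/rbd-images   # each image appears as a file; snaps are not mountable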
02/23/2017
- 01:36 PM Bug #19057 (Won't Fix): krbd suite does not run on hammer (rbd task fails with "No route to host")
- Reproducer: ...
- 11:05 AM Feature #18865: rbd: wipe data in disk in rbd removing
- Okay, makes sense. Will investigate more about it in the OSD. Thanks.
Jason Dillaman wrote:
> @Yang: As I mentioned, th...
- 09:28 AM Bug #18990 (Pending Backport): [rbd-mirror] deleting a snapshot during sync can result in read er...
02/22/2017
- 07:10 PM Backport #19038 (In Progress): jewel: [rbd-mirror] deleting a snapshot during sync can result in ...
- 06:51 PM Backport #19038 (Resolved): jewel: [rbd-mirror] deleting a snapshot during sync can result in rea...
- https://github.com/ceph/ceph/pull/13596
- 07:00 PM Backport #18215 (Closed): jewel: TestImageSync.SnapshotStress fails on bluestore
- I would like to avoid backporting sparse object reads to jewel unless required.
- 07:00 PM Bug #18146 (Resolved): TestImageSync.SnapshotStress fails on bluestore
- 07:00 PM Feature #16780 (Resolved): rbd-mirror: use sparse read during image sync
- 07:00 PM Backport #17879 (Closed): jewel: rbd-mirror: use sparse read during image sync
- I would like to avoid backporting sparse object reads to jewel unless required.
- 06:50 PM Backport #19037 (Resolved): kraken: rbd-mirror: deleting a snapshot during sync can result in rea...
- https://github.com/ceph/ceph/pull/14622
- 05:54 PM Bug #18884 (Resolved): systemctl stop rbdmap unmaps all rbds and not just the ones in /etc/ceph/r...
- 03:42 PM Bug #19035 (Resolved): [rbd CLI] map with cephx disabled results in error message
- ...
- 03:38 PM Feature #19034 (Resolved): [rbd CLI] import-diff should use concurrent writes
- The export, export-diff, and import commands all issue concurrent operations to the librbd API. The import-diff comma...
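For context, the affected path is the usual diff pipeline, e.g. (image and snapshot names are illustrative):
rbd export-diff --from-snap snap1 rbd/src@snap2 - | rbd import-diff - rbd/dst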
- 02:15 PM Bug #17251 (Resolved): Potential seg fault when blacklisting a client
- 01:44 PM Backport #17261 (Resolved): jewel: Potential seg fault when blacklisting a client
- The patch has been backported and merged into Jewel as a part of https://github.com/ceph/ceph/pull/12890, see https:/...
- 01:34 PM Bug #17210 (Resolved): ImageWatcher: double unwatch of failed watch handle
- 01:27 PM Backport #17242 (Resolved): jewel: ImageWatcher: double unwatch of failed watch handle
- This one has been backported and merged into Jewel as a part of https://github.com/ceph/ceph/pull/12890, see https://...
02/21/2017
- 08:57 PM Bug #18990 (Fix Under Review): [rbd-mirror] deleting a snapshot during sync can result in read er...
- *PR*: https://github.com/ceph/ceph/pull/13568
- 02:17 PM Backport #18668 (Resolved): kraken: [ FAILED ] TestLibRBD.ImagePollIO in upgrade:client-upgrade...
- 02:17 PM Backport #18703 (Resolved): kraken: Prevent librbd from blacklisting the in-use librados client
02/20/2017
- 08:03 PM Feature #13025 (Resolved): Add scatter/gather support to librbd C/C++ APIs
- 01:55 PM Cleanup #19010 (Resolved): Simplify asynchronous image close behavior
- Currently, an image cannot be closed when close is invoked from the image's op work queue, nor can the image's memory be releas...
- 10:59 AM Backport #18285 (Resolved): jewel: partition func should be enabled When load nbd.ko for rbd-nbd
02/19/2017
- 11:56 PM Bug #18990 (Resolved): [rbd-mirror] deleting a snapshot during sync can result in read errors
- Given an image with zero snapshots and some data written to object X, if you create a snapshot, start a full rbd-mirr...
- 07:57 PM Bug #18982: How to get out of weird situation after rbd flatten?
- The affected Ceph version as assigned to the ticket: 0.94.7. Kernel (on Ceph hosts) is 4.4.27 (soon to be updated to ...
- 06:55 PM Bug #18982: How to get out of weird situation after rbd flatten?
- Please state the Ceph and kernel versions your cluster is running.
02/18/2017
- 10:59 PM Bug #18987 (Won't Fix): "[ FAILED ] TestLibRBD.ExclusiveLock" in upgrade:client-upgrade-kraken-...
- Run: http://pulpito.ceph.com/teuthology-2017-02-17_22:07:49-upgrade:client-upgrade-kraken-distro-basic-smithi/
Job: ...
- 11:23 AM Feature #18984 (New): RFE: let rbd export write directly to a block device
- It would be great if `rbd export` could write directly to a block device.
Right now it won't let you:
# rbd expo...
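A common workaround sketch until this lands: stream the export to stdout and let dd write the device (device path is illustrative, and this overwrites the target):
rbd export rbd/test-image - | dd of=/dev/sdX bs=4M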
02/17/2017
- 10:21 PM Bug #18982 (Duplicate): How to get out of weird situation after rbd flatten?
- Hope this is good for the tracker instead of the mailing list...
We have an image that was cloned from a snapshot:...
- 02:49 PM Backport #18971 (Resolved): jewel: AdminSocket::bind_and_listen failed after rbd-nbd mapping
- https://github.com/ceph/ceph/pull/14701
- 02:49 PM Backport #18970 (Resolved): kraken: rbd: AdminSocket::bind_and_listen failed after rbd-nbd mapping
- https://github.com/ceph/ceph/pull/14540
- 07:54 AM Bug #17951 (Pending Backport): AdminSocket::bind_and_listen failed after rbd-nbd mapping
- PR: https://github.com/ceph/ceph/pull/12433
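A workaround sketch while the fix is pending: give each client process a unique admin socket path via ceph.conf metavariables (the exact path is illustrative):
[client]
    admin socket = /var/run/ceph/$cluster-$name.$pid.asok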
02/16/2017
- 10:38 PM Bug #18963: rbd-mirror: forced failover does not function when peer is unreachable
- The individual ImageReplayers are stuck in the STOPPING state, trying to stop the replay of the remote journal. Due t...
- 10:12 PM Bug #18963 (Resolved): rbd-mirror: forced failover does not function when peer is unreachable
- When a local image is force promoted to primary, the local rbd-mirror daemon should detect that the local images are ...
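For reference, the forced promotion that exercises this path (image name is illustrative):
rbd mirror image promote --force rbd/test-image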
02/15/2017
- 11:55 PM Feature #13025: Add scatter/gather support to librbd C/C++ APIs
- *PR*: https://github.com/ceph/ceph/pull/13447
- 10:54 PM Backport #18948 (Resolved): jewel: rbd-mirror: additional test stability improvements
- https://github.com/ceph/ceph/pull/14154
- 10:54 PM Backport #18947 (Resolved): kraken: rbd-mirror: additional test stability improvements
- https://github.com/ceph/ceph/pull/14155
- 10:47 PM Backport #18556 (Resolved): jewel: Potential race when removing two-way mirroring image
- 10:47 PM Backport #18608 (Resolved): jewel: Removing a clone that fails to open its parent might leave dan...
- 02:41 PM Bug #18935 (Pending Backport): rbd-mirror: additional test stability improvements
- 12:56 AM Bug #18938 (Won't Fix): Unable to build 11.2.0 under i686
- Hello,
The ceph 11.2.0 tarball fails to build under the i686 architecture, while it succeeds under x86_64.
Here is my ...
02/14/2017
- 08:59 PM Bug #18935 (Fix Under Review): rbd-mirror: additional test stability improvements
- *PR*: https://github.com/ceph/ceph/pull/13421
- 08:57 PM Bug #18935 (Resolved): rbd-mirror: additional test stability improvements
- 09:02 AM Bug #18844: import-diff failed: (33) Numerical argument out of domain - if image size of the chil...
- Whoops - I forgot that one line. It is basically the same as in the validate case.
These are all the steps to repro...
- 07:34 AM Bug #18844: import-diff failed: (33) Numerical argument out of domain - if image size of the chil...
- How do you create vms/test-larger?
02/13/2017
- 09:32 PM Documentation #17978 (Resolved): Wrong diskcache parameter name for OpenStack Havana and Icehouse
- 08:20 PM Documentation #17978: Wrong diskcache parameter name for OpenStack Havana and Icehouse
- I've opened a pull request: https://github.com/ceph/ceph/pull/13403
@Jason: The documentation fix doesn't apply to...
- 03:12 PM Feature #18865: rbd: wipe data in disk in rbd removing
- @Yang: As I mentioned, there is no way for librbd to overwrite snapshot objects -- they are read-only from the point-...
- 06:49 AM Feature #18865: rbd: wipe data in disk in rbd removing
- Jason Dillaman wrote:
> @Yang: can you provide more background on your intended request use-case? If you are trying ...
- 01:52 PM Subtask #18785 (In Progress): rbd-mirror A/A: separate ImageReplayer handling from Replayer
- 10:44 AM Feature #18917 (New): rbd: show the latest snapshot in rbd info
- When we do a snapshot rollback, we want to know which snapshot the current head is based on.
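Today this has to be inferred by hand, e.g. by listing snapshots next to the image info (names are illustrative):
rbd info rbd/test-image
rbd snap ls rbd/test-image   # gives no indication of which snapshot the head was rolled back to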
- 07:24 AM Backport #18911 (Resolved): jewel: rbd-nbd: check /sys/block/nbdX/size to ensure kernel mapped co...
- 07:24 AM Backport #18910 (Resolved): kraken: rbd-nbd: check /sys/block/nbdX/size to ensure kernel mapped c...
- https://github.com/ceph/ceph/pull/14540
- 07:22 AM Backport #18893 (Resolved): jewel: Incomplete declaration for ContextWQ in librbd/Journal.h
- https://github.com/ceph/ceph/pull/14152
- 07:22 AM Backport #18892 (Resolved): kraken: Incomplete declaration for ContextWQ in librbd/Journal.h
- https://github.com/ceph/ceph/pull/14153
02/12/2017
- 05:45 AM Bug #18888 (Fix Under Review): rbd_clone_copy_on_read ineffective with exclusive-lock
- PR: https://github.com/ceph/ceph/pull/13196
- 05:10 AM Bug #18888 (In Progress): rbd_clone_copy_on_read ineffective with exclusive-lock
- 05:10 AM Bug #18888 (Resolved): rbd_clone_copy_on_read ineffective with exclusive-lock
- With the layering+exclusive-lock features, rbd_clone_copy_on_read does not trigger object copyups from the parent image. This ...
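For context, a sketch of the setup that should trigger copyups but does not when exclusive-lock is enabled (rbd_clone_copy_on_read is the real option; names are illustrative):
# ceph.conf: rbd clone copy on read = true
rbd snap create rbd/parent@snap
rbd snap protect rbd/parent@snap
rbd clone rbd/parent@snap rbd/child   # clone inherits layering + exclusive-lock
# reads from the child should copy objects up from the parent, but do not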
02/11/2017
- 02:29 PM Feature #18335 (Pending Backport): rbd-nbd: check /sys/block/nbdX/size to ensure kernel mapped co...
02/10/2017
- 06:12 PM Bug #18884: systemctl stop rbdmap unmaps all rbds and not just the ones in /etc/ceph/rbdmap
- I have a fix which uses an RBDMAP_UNMAP_ALL parameter in /etc/sysconfig/ceph to control whether all RBD images (if "y...
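A sketch of the proposed knob, per the comment above (the exact file and default value are assumptions):
# /etc/sysconfig/ceph
RBDMAP_UNMAP_ALL="no"   # "yes" would restore the old behavior of unmapping every RBD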
- 06:04 PM Bug #18884 (Resolved): systemctl stop rbdmap unmaps all rbds and not just the ones in /etc/ceph/r...
- Copy of downstream bug report:
When stopping the service rbdmap it unmaps ALL mapped RBDs instead of just unmapping t...
- 02:24 PM Bug #17913: librbd io deadlock after host lost network connectivity
- @Dan van der Ster:
If you can install all necessary debug packages and get a complete gdb core backtrace via "thre...
- 02:22 PM Bug #18839 (Resolved): fsx segfault on clone op
- 02:19 PM Documentation #17978: Wrong diskcache parameter name for OpenStack Havana and Icehouse
- @Michael: note that Icehouse and Havana are both EOLed by the upstream community. Does this issue apply to Grizzly+ r...
- 10:44 AM Documentation #17978: Wrong diskcache parameter name for OpenStack Havana and Icehouse
- @Michael: Can you open a PR at https://github.com/ceph/ceph with your proposed fix? The documentation is under doc/
- 09:58 AM Documentation #17978: Wrong diskcache parameter name for OpenStack Havana and Icehouse
- *Ping* Any progress on this?
- 02:17 PM Feature #18865 (Need More Info): rbd: wipe data in disk in rbd removing
- @Yang: can you provide more background on your intended request use-case? If you are trying to implement a secure del...
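A manual secure-delete sketch for the head image, assuming a krbd mapping is available (device path is illustrative; snapshot objects cannot be wiped this way, as noted above):
rbd map rbd/test-image
dd if=/dev/zero of=/dev/rbd0 bs=4M oflag=direct   # overwrite the allocated extents
rbd unmap /dev/rbd0
rbd rm rbd/test-image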
- 01:59 PM Bug #18862 (Pending Backport): Incomplete declaration for ContextWQ in librbd/Journal.h
- *PR*: https://github.com/ceph/ceph/pull/13322
02/09/2017
- 10:22 AM Feature #18864: rbd export/import for consistent group
- This should be a feature, not a bug.
- 10:15 AM Feature #18864 (New): rbd export/import for consistent group
- 10:18 AM Feature #18865: rbd: wipe data in disk in rbd removing
- This should be a feature instead of a bug.
- 10:16 AM Feature #18865 (Rejected): rbd: wipe data in disk in rbd removing
- 10:14 AM Feature #18863 (New): rbd export/import improvement.
- Add a snap timestamp for each diff, and add a CRC check for it.
- 09:25 AM Bug #18862 (Fix Under Review): Incomplete declaration for ContextWQ in librbd/Journal.h
- PR: https://github.com/ceph/ceph/pull/13322
- 08:43 AM Bug #18862 (Resolved): Incomplete declaration for ContextWQ in librbd/Journal.h
- There is an incomplete declaration for ContextWQ and we call its method in Journal<I>::MetadataListener::handle_updat...
02/08/2017
- 07:31 PM Subtask #18753 (In Progress): rbd-mirror HA: create teuthology thrasher for rbd-mirror
- 02:18 PM Subtask #18784 (In Progress): rbd-mirror A/A: leader should track up/down rbd-mirror instances
- 02:17 PM Subtask #18783 (Fix Under Review): rbd-mirror A/A: InstanceWatcher watch/notify stub for leader/f...
- PR: https://github.com/ceph/ceph/pull/13312
02/07/2017
- 12:14 PM Bug #18844 (Resolved): import-diff failed: (33) Numerical argument out of domain - if image size ...
- *Steps to setup the test case (create a basic image):*
rbd create vms/test -s 1G
rbd snap create vms/test@snap
rbd...
- 03:40 AM Bug #18839: fsx segfault on clone op
- fixed by:
https://github.com/ceph/ceph/pull/13287
- 03:34 AM Bug #18839 (Resolved): fsx segfault on clone op
- exec:
./ceph_test_librbd_fsx -N 1000 rbd fsx -d
segfault:
123 write 0x2398d thru 0x2b8d9 (0x7f4d bytes)...
02/06/2017
- 02:12 PM Subtask #18783 (In Progress): rbd-mirror A/A: InstanceWatcher watch/notify stub for leader/follow...
- 01:46 PM Bug #18832 (Won't Fix): "SubsystemMap.h: 62: FAILED assert(sub < m_subsys.size())" in upgrade:cli...
- Run: http://pulpito.ceph.com/teuthology-2017-02-04_11:45:02-upgrade:client-upgrade-kraken-distro-basic-smithi/
Job: ...
- 09:36 AM Bug #17913: librbd io deadlock after host lost network connectivity
- Hi Jason -- our security officer is hesitating to let me post the machine memory dump. Could we meet on IRC and I can...
02/05/2017
- 11:26 PM Backport #18823 (Resolved): jewel: run-rbd-unit-tests.sh assert in lockdep_will_lock, TestLibRBD....
- https://github.com/ceph/ceph/pull/14150
- 11:26 PM Backport #18822 (Resolved): kraken: run-rbd-unit-tests.sh assert in lockdep_will_lock, TestLibRBD...
- https://github.com/ceph/ceph/pull/14151
02/04/2017
- 02:13 PM Bug #17447 (Pending Backport): run-rbd-unit-tests.sh assert in lockdep_will_lock, TestLibRBD.Obje...
02/03/2017
- 11:24 PM Bug #18733 (Rejected): test_rbd.TestImage.test_block_name_prefix and test_rbd.TestImage.test_id f...
- 01:53 PM Bug #17913: librbd io deadlock after host lost network connectivity
- @Dan van der Ster:
Please use the "ceph-post-file" utility to upload the core dump along with a listing of install... - 10:48 AM Bug #17913: librbd io deadlock after host lost network connectivity
- It happened again:...
- 12:46 PM Backport #18456 (In Progress): kraken: Attempting to remove an image w/ incompatible features res...
- 12:45 PM Backport #18454 (In Progress): hammer: Attempting to remove an image w/ incompatible features res...
- 12:15 PM Backport #18776 (In Progress): kraken: Qemu crash triggered by network issues
- 12:14 PM Backport #18775 (In Progress): jewel: Qemu crash triggered by network issues
- 12:08 PM Backport #18774 (In Progress): hammer: Qemu crash triggered by network issues
- 12:03 PM Backport #14824 (Need More Info): hammer: rbd and pool quota do not go well together
02/02/2017
- 06:25 PM Bug #18768: rbd rm on empty volumes 2/3 sec per volume
- But you were totally correct about the object-map feature - this cuts the time of removal from approx 0.4 sec to approx 0.12 ...
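For reference, a sketch of the comparison being discussed (sizes and names are illustrative):
rbd create --size 10G --image-feature layering rbd/no-omap
rbd create --size 10G --image-feature layering,exclusive-lock,object-map rbd/omap
time rbd rm --no-progress rbd/no-omap   # has to probe every backing object
time rbd rm --no-progress rbd/omap      # the object map already knows which objects exist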
- 06:06 PM Bug #18768: rbd rm on empty volumes 2/3 sec per volume
- @Ben: I wasn't suggesting that "--no-progress" would improve speed, I was responding to your *strong* opinions.
Fo...
- 06:01 PM Bug #18768: rbd rm on empty volumes 2/3 sec per volume
- while --no-progress worked, it didn't help. And when I tried to follow your suggestion:
# rbd create --size 10G -...
- 01:28 PM Backport #18556 (In Progress): jewel: Potential race when removing two-way mirroring image
- 01:15 PM Bug #16179 (Resolved): rbd-mirror: image sync object map reload logs message
- 01:15 PM Bug #18440 (Resolved): [teuthology] update "rbd/singleton/all/formatted-output.yaml" to support c...
- 01:14 PM Bug #18261 (Resolved): rbd status: json format has duplicated/overwritten key
- 01:14 PM Bug #18242 (Resolved): rbd-nbd: invalid error code for "failed to read nbd request" messages
- 01:13 PM Bug #18068 (Resolved): diff calculate can hide parent extents when examining first snapshot in clone
- 01:13 PM Bug #16176 (Resolved): objectmap does not show object existence correctly
- 01:12 PM Bug #17973 (Resolved): "FAILED assert(m_processing == 0)" while running test_lock_fence.sh
- 01:10 PM Bug #18200 (Resolved): RBD diff got SIGABRT with "--whole-object" for RBD whose parent also have ...
- 01:10 PM Cleanup #16985 (Resolved): Improve error reporting from "rbd feature enable/disable"
- 10:16 AM Feature #18335 (Fix Under Review): rbd-nbd: check /sys/block/nbdX/size to ensure kernel mapped co...
- PR: https://github.com/ceph/ceph/pull/13229
- 06:28 AM Feature #18594 (Resolved): [teuthology] integrate OpenStack 'gate-tempest-dsvm-full-devstack-plug...
- 12:03 AM Subtask #18789 (Resolved): rbd-mirror A/A: coordinate image syncs with leader
- The follower instances should send a "sync start" request to the leader before starting a full image sync. If there a...
- 12:01 AM Subtask #18788 (Resolved): rbd-mirror A/A: integrate distribution policy with proxied InstanceRep...
- The leader should map each image via the distribution policy to an up remote instance. For each remote instance, the ...
- 12:00 AM Subtask #18787 (Resolved): rbd-mirror A/A: proxy InstanceReplayer APIs via InstanceWatcher RPC
- The leader would instantiate a proxy of InstanceReplayer that invokes InstanceWatcher notification methods for the sp...
- 12:00 AM Subtask #18786 (Resolved): rbd-mirror A/A: create simple image distribution policy
- The simple distribution policy should just attempt to assign <number of images> / <number of up instances> to each rb...
- 12:00 AM Subtask #18785 (Resolved): rbd-mirror A/A: separate ImageReplayer handling from Replayer
- Create a new interface (i.e. InstanceReplayerInterface) that has API methods for acquiring and releasing images by glo...
02/01/2017
- 11:59 PM Subtask #18784 (Resolved): rbd-mirror A/A: leader should track up/down rbd-mirror instances
- After acquiring the lock, the leader should read the "rbd_mirror_instances" mapping into memory. When the leader send...
- 11:59 PM Subtask #18783 (Resolved): rbd-mirror A/A: InstanceWatcher watch/notify stub for leader/follower RPC
- On initialization of the pool Replayer, initialize a new InstanceWatcher that adds a record to "rbd_mirror_instances"...
- 10:19 PM Backport #18778 (Resolved): jewel: rbd --pool=x rename y z does not work
- https://github.com/ceph/ceph/pull/14148
- 10:19 PM Backport #18777 (Resolved): kraken: rbd --pool=x rename y z does not work
- https://github.com/ceph/ceph/pull/14149
- 10:19 PM Backport #18776 (Resolved): kraken: Qemu crash triggered by network issues
- https://github.com/ceph/ceph/pull/13245
- 10:19 PM Backport #18775 (Resolved): jewel: Qemu crash triggered by network issues
- https://github.com/ceph/ceph/pull/13244
- 10:19 PM Backport #18774 (Rejected): hammer: Qemu crash triggered by network issues
- https://github.com/ceph/ceph/pull/13243
- 10:19 PM Backport #18771 (Resolved): kraken: rbd: Improve compatibility between librbd + krbd for the data...
- https://github.com/ceph/ceph/pull/14539
- 10:18 PM Backport #18770 (Closed): jewel: [ FAILED ] TestJournalTrimmer.RemoveObjectsWithOtherClient
- 10:18 PM Backport #18769 (Resolved): kraken: [ FAILED ] TestJournalTrimmer.RemoveObjectsWithOtherClient
- https://github.com/ceph/ceph/pull/14147
- 09:59 PM Bug #18768 (Need More Info): rbd rm on empty volumes 2/3 sec per volume
- @Ben:
(1) "These are EMPTY volumes": while they are technically empty, when you create 10G images w/o the object ... - 09:36 PM Bug #18768 (Closed): rbd rm on empty volumes 2/3 sec per volume
- speed of "rbd rm" command slows to 2/3 second when 10000 RBD volumes are being deleted, but "rbd create" remains belo...
- 09:37 PM Cleanup #18186 (Resolved): add max_part and nbds_max options in rbd nbd map, in order to keep con...
- 09:36 PM Backport #18214 (Resolved): jewel: add max_part and nbds_max options in rbd nbd map, in order to ...
- 09:14 PM Bug #17227 (Resolved): exclusive_lock::AcquireRequest doesn't handle -ERESTART on image::RefreshR...
- 09:13 PM Backport #17340 (Resolved): jewel: exclusive_lock::AcquireRequest doesn't handle -ERESTART on ima...
- 09:11 PM Backport #18337 (Resolved): jewel: Expose librbd API methods to directly acquire and release the ...
- 08:47 PM Subtask #18767 (Closed): rbd-mirror A/A: rename Replayer to PoolReplayer
- 08:41 PM Subtask #18767 (Closed): rbd-mirror A/A: rename Replayer to PoolReplayer
- This is a better naming convention to denote that this class is responsible for handling pool-level replication.
- 08:47 PM Subtask #18766 (Closed): rbd-mirror A/A: track alive pool peers
- 08:28 PM Subtask #18766 (Closed): rbd-mirror A/A: track alive pool peers
- When the pool leader sends out its periodic heartbeat, the clients ack the message. Use the global id received in the...
- 08:42 PM Subtask #18327 (Resolved): [iscsi]: need an API to break the exclusive lock
- 08:42 PM Backport #18453 (Resolved): jewel: [iscsi]: need an API to break the exclusive lock
- 08:39 PM Backport #17261 (New): jewel: Potential seg fault when blacklisting a client
- 08:39 PM Backport #17243 (New): jewel: Deadlock in several librbd teuthology test cases
- 08:38 PM Backport #17817 (New): jewel: teuthology: upgrade:client-upgrade import_export.sh test fails
- 08:38 PM Bug #16773 (Resolved): FAILED assert(m_image_ctx.journal == nullptr)
- 08:38 PM Backport #17134 (Resolved): jewel: FAILED assert(m_image_ctx.journal == nullptr)
- 08:14 PM Feature #18765 (Resolved): rbd-mirror: add support for active/active daemon instances
- Phase 2:
See http://pad.ceph.com/p/rbd_mirror_scale
- 05:47 PM Bug #18326 (Pending Backport): rbd --pool=x rename y z does not work
- 04:42 PM Subtask #17020 (Resolved): rbd-mirror HA: pool replayer should be started/stopped when lock acqui...
- 04:41 PM Subtask #17019 (Resolved): rbd-mirror HA: create pool locker / leader class
- 04:41 PM Subtask #17018 (Resolved): rbd-mirror HA: add new lock released/acquired and heartbeat messages
- 11:50 AM Feature #18123 (Resolved): Need CLI ability to add, edit and remove omap values with binary keys
- 11:33 AM Backport #18284 (Resolved): jewel: Need CLI ability to add, edit and remove omap values with bina...
- 10:03 AM Bug #17913: librbd io deadlock after host lost network connectivity
- This happened again after a network outage yesterday (again 0.94.9 librbd):...
01/31/2017
- 10:36 PM Subtask #18753 (Resolved): rbd-mirror HA: create teuthology thrasher for rbd-mirror
- Create a set of tests (functional and stress) that can be executed while rbd-mirror processes are randomly thrashed.
- 04:16 PM Feature #18748 (Resolved): [cli] add ability to demote/promote all mirrored images in a pool
- Add async versions of promote / demote and have the rbd CLI batch promote / demote mirrored images within a pool. ...
- 03:45 PM Cleanup #16991 (Resolved): rbd-mirror split-brain issues should be clearly visible in mirror status
- 03:45 PM Backport #18194 (Resolved): jewel: rbd-mirror split-brain issues should be clearly visible in mir...
- 03:45 PM Bug #18051 (Resolved): "rbd mirror image resync" does not force resync after split-brain
- 03:45 PM Backport #18191 (Resolved): jewel: "rbd mirror image resync" does not force resync after split-brain
- 03:44 PM Bug #18156 (Resolved): rbd-mirror: gmock warnings in bootstrap request unit tests
- 03:44 PM Backport #18190 (Resolved): jewel: rbd-mirror: gmock warnings in bootstrap request unit tests
- 03:44 PM Bug #18048 (Resolved): qa: rbd-mirror workunit false negative when waiting for image deleted afte...
- 03:44 PM Backport #18136 (Resolved): jewel: qa: rbd-mirror workunit false negative when waiting for image ...
- 03:42 PM Backport #18012 (Resolved): jewel: qa/workunits/rbd: improvements for rbd-mirror tests
- 03:12 PM Feature #16557 (Resolved): Update on-disk exclusive lock tag when image watcher is lost
- Fix included under tracker ticket #16773
- 02:16 PM Backport #18558 (Resolved): jewel: rbd bench-write will crash if "--io-size" is 4G
- 02:15 PM Backport #18494 (Resolved): jewel: [rbd-mirror] sporadic image replayer shut down failure
- 02:14 PM Backport #18633 (Resolved): jewel: [qa] crash in journal-enabled fsx run
- 02:10 PM Backport #18455 (Resolved): jewel: Attempting to remove an image w/ incompatible features results...
- 02:09 PM Backport #18704: jewel: Prevent librbd from blacklisting the in-use librados client
- Waiting on merge of https://github.com/ceph/ceph/pull/12890
- 02:00 PM Backport #18434 (Resolved): jewel: Improve error reporting from "rbd feature enable/disable"
- 01:59 PM Backport #18550 (Resolved): jewel: 'metadata_set' API operation should not change global config s...
- 01:58 PM Cleanup #18243 (Resolved): JournalMetadata flooding with errors when being blacklisted
- 01:58 PM Backport #18323 (Resolved): jewel: JournalMetadata flooding with errors when being blacklisted
- 01:55 PM Bug #18738 (Pending Backport): [ FAILED ] TestJournalTrimmer.RemoveObjectsWithOtherClient
- 01:45 PM Backport #18703 (In Progress): kraken: Prevent librbd from blacklisting the in-use librados client
- 11:07 AM Bug #18673 (In Progress): rbd-mirror: silence -ENOENT error messages from logs
- 02:24 AM Bug #18436 (Pending Backport): Qemu crash triggered by network issues
01/30/2017
- 10:44 PM Bug #18738 (Fix Under Review): [ FAILED ] TestJournalTrimmer.RemoveObjectsWithOtherClient
- *PR*: https://github.com/ceph/ceph/pull/13193
- 08:48 PM Bug #18738 (Resolved): [ FAILED ] TestJournalTrimmer.RemoveObjectsWithOtherClient
- https://jenkins.ceph.com/job/ceph-pull-requests/17740/console...
- 09:48 PM Bug #18653 (Pending Backport): Improve compatibility between librbd + krbd for the data pool
- 08:10 PM Bug #18733: test_rbd.TestImage.test_block_name_prefix and test_rbd.TestImage.test_id fail in upgr...
- > Were you merging PRs into the jewel branch while this test was scheduled/queued?
Yes, quite likely. Re-running.
- 06:30 PM Bug #18733 (Need More Info): test_rbd.TestImage.test_block_name_prefix and test_rbd.TestImage.tes...
- The test installed jewel at commit 20a480d as the base installation:...
- 03:21 PM Bug #18733 (Rejected): test_rbd.TestImage.test_block_name_prefix and test_rbd.TestImage.test_id f...
- Run: http://pulpito.ceph.com/smithfarm-2017-01-30_12:04:19-upgrade:jewel-x-wip-jewel-backports-distro-basic-vps/
F...
- 05:09 PM Bug #18731: [teuthology] rbd-mirror tests sporadically fail due to pid file error
- The rbd-mirror daemon log includes an error such as the following when this occurs:...
- 02:55 PM Bug #18731 (Resolved): [teuthology] rbd-mirror tests sporadically fail due to pid file error
- Failed test description: rbd/mirror/{base/install.yaml cluster/{2-node.yaml openstack.yaml} fs/xfs.yaml msgr-failures...
- 03:14 PM Subtask #17020 (Fix Under Review): rbd-mirror HA: pool replayer should be started/stopped when lo...
- PR: https://github.com/ceph/ceph/pull/12948
- 03:13 PM Subtask #17019 (Fix Under Review): rbd-mirror HA: create pool locker / leader class
- PR: https://github.com/ceph/ceph/pull/12948
- 01:05 PM Bug #18326 (Fix Under Review): rbd --pool=x rename y z does not work
- PR: https://github.com/ceph/ceph/pull/13189
- 12:02 PM Bug #18326 (In Progress): rbd --pool=x rename y z does not work
- 12:26 PM Backport #18024 (Resolved): jewel: "FAILED assert(m_processing == 0)" while running test_lock_fen...
- 12:26 PM Backport #18278 (Resolved): jewel: RBD diff got SIGABRT with "--whole-object" for RBD whose paren...
- 12:25 PM Backport #18320 (Resolved): jewel: rbd status: json format has duplicated/overwritten key
- 12:23 PM Backport #18288 (Resolved): jewel: rbd-mirror: image sync object map reload logs message
- 12:23 PM Backport #18276 (Resolved): jewel: rbd-nbd: invalid error code for "failed to read nbd request" m...
- 12:22 PM Backport #18450 (Resolved): jewel: [teuthology] update "rbd/singleton/all/formatted-output.yaml" ...
- 12:22 PM Backport #18290 (Resolved): jewel: objectmap does not show object existence correctly
- 12:20 PM Backport #18270 (Resolved): jewel: add image id block name prefix APIs
- 12:18 PM Backport #18110 (Resolved): jewel: diff calculate can hide parent extents when examining first sn...
01/29/2017
- 10:32 PM Backport #18136: jewel: qa: rbd-mirror workunit false negative when waiting for image deleted aft...
- The original PR https://github.com/ceph/ceph/pull/12321 was merged into https://github.com/ceph/ceph/pull/12425.
- 10:31 PM Backport #18012: jewel: qa/workunits/rbd: improvements for rbd-mirror tests
- The original PR https://github.com/ceph/ceph/pull/12159 was merged into https://github.com/ceph/ceph/pull/12425.
- 09:34 AM Backport #17242 (Need More Info): jewel: ImageWatcher: double unwatch of failed watch handle
- non-trivial backport
- 09:34 AM Backport #17243 (Need More Info): jewel: Deadlock in several librbd teuthology test cases
- 09:33 AM Backport #18704 (Need More Info): jewel: Prevent librbd from blacklisting the in-use librados client
- non-trivial backport
- 09:26 AM Backport #17261 (Need More Info): jewel: Potential seg fault when blacklisting a client