Activity
From 04/02/2017 to 05/01/2017
05/01/2017
- 10:46 PM Bug #19413 (In Progress): Cannot delete some snapshots after upgrade from jewel to kraken
- 07:47 AM Bug #18938 (Fix Under Review): Unable to build 11.2.0 under i686
- https://github.com/ceph/ceph/pull/14891
- 06:35 AM Bug #18938: Unable to build 11.2.0 under i686
- I think the reason for the link failure is that some compilation units of unittest_librbd are including the ".cc" file...
04/29/2017
- 06:13 PM Bug #18938: Unable to build 11.2.0 under i686
- https://github.com/ceph/ceph/pull/14881 : the side-product of my adventure.
- 06:06 PM Bug #18938: Unable to build 11.2.0 under i686
- I am able to reproduce this issue with gcc-6.3...
04/28/2017
- 10:01 PM Bug #19811 (Resolved): rbd-mirror replay fails on attempting to reclaim data to local site (LS) f...
- In the event that the primary pool (on the local-site) becomes disconnected and empty without being recreated (say, `...
- 04:01 PM Backport #19808 (Resolved): jewel: [test] remove hard-coded image name from TestLibRBD.Mirror
- https://github.com/ceph/ceph/pull/14663
- 04:01 PM Backport #19807 (Resolved): kraken: [test] remove hard-coded image name from TestLibRBD.Mirror
- https://github.com/ceph/ceph/pull/16113
- 04:01 PM Backport #19805 (In Progress): jewel: RBD default features should be negotiated with the OSD
- 04:00 PM Backport #19805 (Resolved): jewel: RBD default features should be negotiated with the OSD
- https://github.com/ceph/ceph/pull/14874
- 04:00 PM Feature #17010 (Pending Backport): RBD default features should be negotiated with the OSD
- 02:27 PM Bug #19798 (Pending Backport): [test] remove hard-coded image name from TestLibRBD.Mirror
04/27/2017
- 08:24 PM Bug #19798 (Fix Under Review): [test] remove hard-coded image name from TestLibRBD.Mirror
- *PR*: https://github.com/ceph/ceph/pull/14848
- 08:22 PM Bug #19798 (Resolved): [test] remove hard-coded image name from TestLibRBD.Mirror
- This results in a test failure in the jewel branch
- 07:47 PM Bug #19287 (Resolved): [api] is_exclusive_lock_owner doesn't detect that it has been blacklisted
- 07:47 PM Backport #19468 (Resolved): jewel: [api] is_exclusive_lock_owner doesn't detect that it has been ...
- 07:05 PM Backport #19612 (Resolved): jewel: Issues with C API image metadata retrieval functions
- 07:04 PM Bug #19256 (Resolved): [api] temporarily restrict (rbd_)mirror_peer_add from adding multiple peers
- 07:04 PM Backport #19325 (Resolved): jewel: [api] temporarily restrict (rbd_)mirror_peer_add from adding m...
- 06:41 PM Bug #19128 (Resolved): rbd import needs to sanity check auto-generated image name
- 12:33 PM Backport #19794 (In Progress): kraken: [test] test_notify.py: assert(not image.is_exclusive_lock_...
- 12:31 PM Backport #19794 (Resolved): kraken: [test] test_notify.py: assert(not image.is_exclusive_lock_own...
- https://github.com/ceph/ceph/pull/14833
- 12:31 PM Backport #19795 (Resolved): jewel: [test] test_notify.py: assert(not image.is_exclusive_lock_owne...
- https://github.com/ceph/ceph/pull/15461
- 12:25 PM Bug #19716 (Pending Backport): [test] test_notify.py: assert(not image.is_exclusive_lock_owner())...
04/26/2017
- 09:30 PM Bug #18070 (Resolved): rbd-nbd: immediate seg fault starting the daemon
- 09:30 PM Backport #19727 (Resolved): jewel: rbd-nbd: immediate seg fault starting the daemon
- 09:29 PM Backport #18971 (Resolved): jewel: AdminSocket::bind_and_listen failed after rbd-nbd mapping
- 11:56 AM Bug #19413: Cannot delete some snapshots after upgrade from jewel to kraken
- Maybe this would be more helpful.
rbd snap rm oneimages-pool/one-73@snapmirror.1 --debug-rbd 50
2017-04-26 13:5...
- 11:42 AM Bug #19413: Cannot delete some snapshots after upgrade from jewel to kraken
- Hi there. I've also stumbled upon this issue. My ceph cluster was migrated from 10.2.5 to 11.2.0 (ceph --version
cep...
04/25/2017
- 01:43 PM Backport #19610 (Resolved): jewel: [librados_test_stub] cls_cxx_map_get_XYZ methods don't return ...
04/24/2017
- 06:35 PM Bug #19128 (Fix Under Review): rbd import needs to sanity check auto-generated image name
- PR: https://github.com/ceph/ceph/pull/14754
- 05:49 PM Feature #19457 (Need More Info): [api] explicit refresh image command
- Need to confirm w/ Mike whether or not the current implementation of RBD metadata will encounter the issue he is conc...
- 04:25 PM Feature #19457 (New): [api] explicit refresh image command
- 03:05 PM Feature #19457 (In Progress): [api] explicit refresh image command
- 03:01 PM Bug #19716 (Fix Under Review): [test] test_notify.py: assert(not image.is_exclusive_lock_owner())...
- PR: https://github.com/ceph/ceph/pull/14751
- 02:16 PM Bug #19716 (In Progress): [test] test_notify.py: assert(not image.is_exclusive_lock_owner()) on l...
- 01:35 PM Bug #19405 (Resolved): test_mock_LeaderWatcher.cc:368: Failure Mock function called more times th...
- 06:30 AM Subtask #18789 (Fix Under Review): rbd-mirror A/A: coordinate image syncs with leader
- PR: https://github.com/ceph/ceph/pull/14745
04/23/2017
- 07:46 PM Bug #19405 (Fix Under Review): test_mock_LeaderWatcher.cc:368: Failure Mock function called more ...
- PR: https://github.com/ceph/ceph/pull/14741
- 05:02 PM Bug #19405 (In Progress): test_mock_LeaderWatcher.cc:368: Failure Mock function called more times...
04/21/2017
- 09:12 AM Bug #19650 (Closed): rbd-nbd: client reboot if ceph cluster down
- 06:34 AM Subtask #18787 (Resolved): rbd-mirror A/A: proxy InstanceReplayer APIs via InstanceWatcher RPC
- 02:42 AM Feature #19731: Add compression in librbd/librados
- There is also the same requirement for encryption.
- 02:41 AM Feature #19731 (New): Add compression in librbd/librados
- Currently compression is enabled in osd and radosgw.
This feature is about adding a compression function in librbd/lib...
04/20/2017
- 10:35 PM Backport #19727 (In Progress): jewel: rbd-nbd: immediate seg fault starting the daemon
- 10:34 PM Backport #19727 (Resolved): jewel: rbd-nbd: immediate seg fault starting the daemon
- https://github.com/ceph/ceph/pull/14701
- 10:31 PM Bug #18070 (Pending Backport): rbd-nbd: immediate seg fault starting the daemon
- 09:24 PM Backport #19421 (Duplicate): jewel: librbd/ExclusiveLock.cc: 457: FAILED assert(m_state == STATE_...
- 09:24 PM Backport #19423 (Resolved): jewel: librbd/ExclusiveLock.cc: 457: FAILED assert(m_state == STATE_A...
- 08:12 PM Bug #19716 (Resolved): [test] test_notify.py: assert(not image.is_exclusive_lock_owner()) on line...
- The disable feature request will have stolen the exclusive lock
http://qa-proxy.ceph.com/teuthology/smithfarm-2017...
- 07:13 PM Bug #19636 (Resolved): upgrade:client-upgrade/{hammer,jewel}-client-x/rbd failing in kraken 11.2....
- 07:13 PM Backport #19659 (Resolved): kraken: upgrade:client-upgrade/{hammer,jewel}-client-x/rbd failing in...
- 12:53 PM Bug #19692 (Resolved): [test] test_notify.py: rbd.InvalidArgument: error updating features for im...
- 12:12 PM Bug #19692 (Pending Backport): [test] test_notify.py: rbd.InvalidArgument: error updating feature...
- 12:53 PM Backport #19711 (Resolved): jewel: [test] test_notify.py: rbd.InvalidArgument: error updating fea...
- 12:17 PM Backport #19711 (In Progress): jewel: [test] test_notify.py: rbd.InvalidArgument: error updating ...
- 12:13 PM Backport #19711 (Resolved): jewel: [test] test_notify.py: rbd.InvalidArgument: error updating fea...
- https://github.com/ceph/ceph/pull/14680
- 12:53 PM Backport #19693 (Resolved): kraken: [test] test_notify.py: rbd.InvalidArgument: error updating fe...
- 12:51 PM Backport #18501 (Resolved): kraken: rbd-mirror: potential race mirroring cloned image
- 12:51 PM Bug #18465 (Resolved): 'metadata_set' API operation should not change global config setting
- 12:51 PM Backport #18549 (Resolved): kraken: rbd: 'metadata_set' API operation should not change global co...
- 12:50 PM Bug #18422 (Resolved): rbd bench-write will crash if "--io-size" is 4G
- 12:50 PM Backport #18557 (Resolved): kraken: rbd: 'rbd bench-write' will crash if --io-size is 4G
- 12:49 PM Cleanup #18577 (Resolved): Add missing parameter feedback to 'rbd snap limit'
- 12:49 PM Backport #18601 (Resolved): kraken: rbd: Add missing parameter feedback to 'rbd snap limit'
- 12:23 PM Bug #18618 (Resolved): [qa] crash in journal-enabled fsx run
- 12:23 PM Backport #18632 (Resolved): kraken: rbd: [qa] crash in journal-enabled fsx run
- 12:20 PM Bug #18990 (Resolved): [rbd-mirror] deleting a snapshot during sync can result in read errors
- 12:20 PM Backport #19037 (Resolved): kraken: rbd-mirror: deleting a snapshot during sync can result in rea...
- 12:19 PM Backport #19324 (Resolved): kraken: rbd: [api] temporarily restrict (rbd_)mirror_peer_add from ad...
- 10:38 AM Backport #19612 (In Progress): jewel: Issues with C API image metadata retrieval functions
- 10:36 AM Backport #19610 (In Progress): jewel: [librados_test_stub] cls_cxx_map_get_XYZ methods don't retu...
- 10:36 AM Bug #19597: [librados_test_stub] cls_cxx_map_get_XYZ methods don't return correct value
- *master PR*: https://github.com/ceph/ceph/pull/14484
- 10:34 AM Backport #19325 (In Progress): jewel: [api] temporarily restrict (rbd_)mirror_peer_add from addin...
- 10:32 AM Backport #19228 (In Progress): jewel: Enabling mirroring for a pool with clones may fail
- 10:30 AM Backport #19174 (Need More Info): jewel: rbd_clone_copy_on_read ineffective with exclusive-lock
- Same problem as in the kraken backport.
- 10:21 AM Backport #18971 (In Progress): jewel: AdminSocket::bind_and_listen failed after rbd-nbd mapping
- 09:29 AM Bug #15404: KVM: terminate called after throwing an instance of 'ceph::buffer::end_of_buffer'
- Unfortunately no. I don't have any ceph clusters running currently.
04/19/2017
- 06:34 PM Bug #15404 (Need More Info): KVM: terminate called after throwing an instance of 'ceph::buffer::e...
- 06:34 PM Bug #15404: KVM: terminate called after throwing an instance of 'ceph::buffer::end_of_buffer'
- @Konrad: are you still able to reproduce this issue?
- 06:30 PM Bug #15290 (Need More Info): rbd journal
- @Tianqing Li: are you still seeing this issue? 10.0.5 was a developer release of Jewel so most likely it wasn't compl...
- 02:37 PM Bug #19650: rbd-nbd: client reboot if ceph cluster down
- Hi,
the issue was due to our kernel config:
kernel.hung_task_panic = 1
kernel.hung_task_timeout_secs = 300
kernel....
- 02:26 PM Backport #19693 (Need More Info): kraken: [test] test_notify.py: rbd.InvalidArgument: error updat...
- Waiting for master PR to merge
- 02:22 PM Backport #19693 (In Progress): kraken: [test] test_notify.py: rbd.InvalidArgument: error updating...
- 02:21 PM Backport #19693: kraken: [test] test_notify.py: rbd.InvalidArgument: error updating features for ...
- NOTE: do not merge before master PR
- 02:20 PM Backport #19693 (Resolved): kraken: [test] test_notify.py: rbd.InvalidArgument: error updating fe...
- https://github.com/ceph/ceph/pull/14641
- 01:28 PM Bug #19692 (Fix Under Review): [test] test_notify.py: rbd.InvalidArgument: error updating feature...
- *PR*: https://github.com/ceph/ceph/pull/14638
- 01:06 PM Bug #19692 (Resolved): [test] test_notify.py: rbd.InvalidArgument: error updating features for im...
- When the object map and deep flatten features are not enabled by default and the test attempts to disable them, it re...
- 12:40 PM Feature #13025: Add scatter/gather support to librbd C/C++ APIs
- @Stefan: since this is a new feature, we are not planning to backport it to older versions of Ceph.
- 06:58 AM Feature #13025: Add scatter/gather support to librbd C/C++ APIs
- Is there any chance to get this into jewel?
04/18/2017
- 08:09 PM Bug #18441 (Resolved): [rbd-mirror] sporadic image replayer shut down failure
- 08:09 PM Backport #18493 (Resolved): kraken: rbd-mirror: sporadic image replayer shut down failure
- 08:09 PM Bug #18419 (Resolved): Possible deadlock performing a synchronous API action while refresh in-pro...
- 08:08 PM Backport #18495 (Resolved): kraken: rbd: Possible deadlock performing a synchronous API action wh...
- 07:38 PM Backport #19659 (In Progress): kraken: upgrade:client-upgrade/{hammer,jewel}-client-x/rbd failing...
- 07:37 PM Backport #19659 (Resolved): kraken: upgrade:client-upgrade/{hammer,jewel}-client-x/rbd failing in...
- https://github.com/ceph/ceph/pull/14620
- 05:28 PM Bug #19636 (Pending Backport): upgrade:client-upgrade/{hammer,jewel}-client-x/rbd failing in krak...
- 02:09 PM Bug #19636 (Fix Under Review): upgrade:client-upgrade/{hammer,jewel}-client-x/rbd failing in krak...
- *PR*: https://github.com/ceph/ceph/pull/14615
- 02:02 PM Bug #19636 (In Progress): upgrade:client-upgrade/{hammer,jewel}-client-x/rbd failing in kraken 11...
- 02:01 PM Bug #19636: upgrade:client-upgrade/{hammer,jewel}-client-x/rbd failing in kraken 11.2.1 integrati...
- Resize payload structure no longer ABI compatible after commit d1f2c557 (already in master and kraken branches).
- 03:43 PM Bug #19650 (Need More Info): rbd-nbd: client reboot if ceph cluster down
- @François: sounds like you encountered a kernel panic -- which we don't have any control over (it isn't our code rebo...
- 12:13 PM Bug #19650 (Closed): rbd-nbd: client reboot if ceph cluster down
- Hi,
doing
rbd-nbd map rbd/block1
mount /dev/nbd0 /mnt
dd if=/data/test.tar.gz of=/mnt/test.tar.gz statu...
- 01:10 PM Feature #19652 (New): Modify rbd_mirror_journal_max_fetch_bytes parameter to reflect maximum byte...
- Currently, the *rbd_mirror_journal_max_fetch_bytes* parameter indicates the maximum bytes to be read from each journal data object pe...
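For context, an option like this would normally be tuned in the rbd-mirror daemon's section of ceph.conf. The section name and value below are illustrative assumptions, not recommendations:

```ini
; hypothetical rbd-mirror daemon section in ceph.conf
[client.rbd-mirror.a]
    rbd mirror journal max fetch bytes = 1048576   ; illustrative value, in bytes
```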
04/17/2017
- 07:42 AM Bug #19636 (Resolved): upgrade:client-upgrade/{hammer,jewel}-client-x/rbd failing in kraken 11.2....
- test descriptions:
* upgrade:client-upgrade/hammer-client-x/rbd/{0-cluster/start.yaml 1-install/hammer-client-x.ya...
04/15/2017
- 07:59 PM Backport #19227 (In Progress): kraken: rbd: Enabling mirroring for a pool with clones may fail
- 05:02 PM Backport #19227 (Fix Under Review): kraken: rbd: Enabling mirroring for a pool with clones may fail
- 05:01 PM Backport #19227: kraken: rbd: Enabling mirroring for a pool with clones may fail
- PR: https://github.com/ceph/ceph/pull/14577
- 10:41 AM Backport #19227 (New): kraken: rbd: Enabling mirroring for a pool with clones may fail
- 07:59 PM Backport #18555 (In Progress): kraken: rbd: Potential race when removing two-way mirroring image
- 05:01 PM Backport #18555 (Fix Under Review): kraken: rbd: Potential race when removing two-way mirroring i...
- 04:59 PM Backport #18555: kraken: rbd: Potential race when removing two-way mirroring image
- PR: https://github.com/ceph/ceph/pull/14576
- 08:16 AM Backport #19173 (Need More Info): kraken: rbd: rbd_clone_copy_on_read ineffective with exclusive-...
- ...
04/14/2017
- 10:30 PM Backport #19467 (Resolved): kraken: [api] is_exclusive_lock_owner doesn't detect that it has been...
04/13/2017
- 09:41 PM Backport #19324 (In Progress): kraken: rbd: [api] temporarily restrict (rbd_)mirror_peer_add from...
- 09:39 PM Backport #19227 (In Progress): kraken: rbd: Enabling mirroring for a pool with clones may fail
- 09:29 PM Backport #19173 (In Progress): kraken: rbd: rbd_clone_copy_on_read ineffective with exclusive-lock
- 09:23 PM Backport #19037 (In Progress): kraken: rbd-mirror: deleting a snapshot during sync can result in ...
- 09:20 PM Backport #18970 (In Progress): kraken: rbd: AdminSocket::bind_and_listen failed after rbd-nbd map...
- 09:16 PM Backport #18910 (In Progress): kraken: rbd-nbd: check /sys/block/nbdX/size to ensure kernel mappe...
- 09:14 PM Backport #18771 (In Progress): kraken: rbd: Improve compatibility between librbd + krbd for the d...
- 09:06 PM Backport #18632 (In Progress): kraken: rbd: [qa] crash in journal-enabled fsx run
- 09:05 PM Backport #18601 (In Progress): kraken: rbd: Add missing parameter feedback to 'rbd snap limit'
- 09:04 PM Backport #18557 (In Progress): kraken: rbd: 'rbd bench-write' will crash if --io-size is 4G
- 09:02 PM Backport #18555 (In Progress): kraken: rbd: Potential race when removing two-way mirroring image
- 09:00 PM Backport #18549 (In Progress): kraken: rbd: 'metadata_set' API operation should not change global...
- 08:59 PM Backport #18501 (In Progress): kraken: rbd-mirror: potential race mirroring cloned image
- 08:57 PM Backport #18495 (In Progress): kraken: rbd: Possible deadlock performing a synchronous API action...
- 08:56 PM Backport #18493 (In Progress): kraken: rbd-mirror: sporadic image replayer shut down failure
- 02:32 PM Backport #19621 (Resolved): kraken: rbd-nbd: add signal handler
- https://github.com/ceph/ceph/pull/16098
- 02:31 PM Backport #19612 (Resolved): jewel: Issues with C API image metadata retrieval functions
- https://github.com/ceph/ceph/pull/14666
- 02:31 PM Backport #19611 (Resolved): kraken: Issues with C API image metadata retrieval functions
- https://github.com/ceph/ceph/pull/15612
- 02:31 PM Backport #19610 (Resolved): jewel: [librados_test_stub] cls_cxx_map_get_XYZ methods don't return ...
- https://github.com/ceph/ceph/pull/14665
- 02:31 PM Backport #19609 (Resolved): kraken: [librados_test_stub] cls_cxx_map_get_XYZ methods don't return...
- https://github.com/ceph/ceph/pull/16097
04/12/2017
- 07:50 PM Feature #19451 (Resolved): [python] image metadata APIs are not available
- 07:45 PM Bug #19588 (Pending Backport): Issues with C API image metadata retrieval functions
- 08:46 AM Bug #19588 (Fix Under Review): Issues with C API image metadata retrieval functions
- PR: https://github.com/ceph/ceph/pull/14471
- 08:34 AM Bug #19588: Issues with C API image metadata retrieval functions
- Also, there is an issue with the rbd_metadata_get function: on success it does not update the val_len param (which might ...
- 06:46 AM Bug #19588 (Resolved): Issues with C API image metadata retrieval functions
- A couple of issues detected when testing the rbd_metadata_list API function:
- the provided parameter `vals_len` is not ...
- 06:05 PM Bug #19597 (Pending Backport): [librados_test_stub] cls_cxx_map_get_XYZ methods don't return corr...
- 02:45 PM Bug #19597 (Resolved): [librados_test_stub] cls_cxx_map_get_XYZ methods don't return correct value
- The cls_cxx_map_get_keys and cls_cxx_map_get_vals methods should return the number of entries read upon success. Inste...
- 05:02 PM Bug #11502: "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basic-mult run
- please reopen if this happens again
- 05:00 PM Bug #11502 (Can't reproduce): "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basic-mult run
- 02:11 PM Bug #18935 (Resolved): rbd-mirror: additional test stability improvements
- 02:11 PM Backport #18947 (Resolved): kraken: rbd-mirror: additional test stability improvements
- 02:11 PM Bug #18862 (Resolved): Incomplete declaration for ContextWQ in librbd/Journal.h
- 02:11 PM Backport #18892 (Resolved): kraken: Incomplete declaration for ContextWQ in librbd/Journal.h
- 02:10 PM Bug #17447 (Resolved): run-rbd-unit-tests.sh assert in lockdep_will_lock, TestLibRBD.ObjectMapCon...
- 02:10 PM Backport #18822 (Resolved): kraken: run-rbd-unit-tests.sh assert in lockdep_will_lock, TestLibRBD...
- 02:07 PM Bug #18326 (Resolved): rbd --pool=x rename y z does not work
- 02:07 PM Backport #18777 (Resolved): kraken: rbd --pool=x rename y z does not work
- 01:48 PM Bug #18738 (Resolved): [ FAILED ] TestJournalTrimmer.RemoveObjectsWithOtherClient
- 01:48 PM Backport #18769 (Resolved): kraken: [ FAILED ] TestJournalTrimmer.RemoveObjectsWithOtherClient
- 01:33 PM Backport #18776 (Resolved): kraken: Qemu crash triggered by network issues
- 01:32 PM Backport #18456 (Resolved): kraken: Attempting to remove an image w/ incompatible features result...
- 01:31 PM Bug #18325 (Resolved): Removing a clone that fails to open its parent might leave dangling rbd_ch...
- 01:31 PM Backport #18609 (Resolved): kraken: Removing a clone that fails to open its parent might leave da...
- 12:51 PM Feature #19349 (Pending Backport): rbd-nbd: add signal handler
- Backport should also include these PRs:
https://github.com/ceph/ceph/pull/14223
https://github.com/ceph/ceph/pul...
- 12:14 PM Backport #19468 (In Progress): jewel: [api] is_exclusive_lock_owner doesn't detect that it has be...
- 12:14 PM Backport #19467 (In Progress): kraken: [api] is_exclusive_lock_owner doesn't detect that it has b...
- 10:37 AM Backport #18948 (Resolved): jewel: rbd-mirror: additional test stability improvements
- 10:37 AM Backport #18893 (Resolved): jewel: Incomplete declaration for ContextWQ in librbd/Journal.h
- 10:36 AM Backport #18823 (Resolved): jewel: run-rbd-unit-tests.sh assert in lockdep_will_lock, TestLibRBD....
- 10:36 AM Backport #18778 (Resolved): jewel: rbd --pool=x rename y z does not work
- 10:33 AM Bug #18884 (Resolved): systemctl stop rbdmap unmaps all rbds and not just the ones in /etc/ceph/r...
- 10:33 AM Backport #19357 (Resolved): jewel: systemctl stop rbdmap unmaps all rbds and not just the ones in...
- 10:28 AM Backport #18911 (Resolved): jewel: rbd-nbd: check /sys/block/nbdX/size to ensure kernel mapped co...
- 10:23 AM Bug #18244 (Resolved): librbd::ResizeRequest: failed to update image header: (16) Device or resou...
- 10:22 AM Backport #18321 (Resolved): jewel: librbd::ResizeRequest: failed to update image header: (16) Dev...
- 09:59 AM Bug #19405: test_mock_LeaderWatcher.cc:368: Failure Mock function called more times than expected
- https://jenkins.ceph.com/job/ceph-pull-requests/21914/consoleFull#-7413115763f609609-74b6-4c11-b0c1-dab0624390b5
- 09:00 AM Backport #18775 (Resolved): jewel: Qemu crash triggered by network issues
- 08:59 AM Backport #18496 (Resolved): jewel: Possible deadlock performing a synchronous API action while re...
- 08:54 AM Bug #18617 (Resolved): [ FAILED ] TestLibRBD.ImagePollIO in upgrade:client-upgrade-kraken-distr...
- 08:53 AM Backport #18669 (Resolved): jewel: [ FAILED ] TestLibRBD.ImagePollIO in upgrade:client-upgrade-...
- 01:00 AM Subtask #19298 (In Progress): rbd-mirror scrub: new CLI action to request image verification
- 12:46 AM Feature #18481 (Resolved): Delayed image deletion
- 12:46 AM Feature #18865 (Rejected): rbd: wipe data in disk in rbd removing
04/11/2017
- 08:48 PM Feature #19451 (Fix Under Review): [python] image metadata APIs are not available
- PR: https://github.com/ceph/ceph/pull/14463
- 09:25 AM Feature #19451 (In Progress): [python] image metadata APIs are not available
- 12:18 PM Bug #19567 (Duplicate): TestLibRBD.UpdateFeatures fails in jewel 10.2.8 integration testing
- Duplicate of #19080 -- expected failure when upgrading from Infernalis (since we are not releasing new versions of In...
- 10:31 AM Bug #17913: librbd io deadlock after host lost network connectivity
- I think I found the culprit and I also think this may have been completely independent from Ceph. In our case, I foun...
- 01:08 AM Fix #18511 (Resolved): rbd_discard return should be ssize_t instead of int
- The PR is merged.
>>> dillaman merged commit d366311 into ceph:master
04/10/2017
- 08:41 PM Bug #19567: TestLibRBD.UpdateFeatures fails in jewel 10.2.8 integration testing
- @Jason - could you take a look?
- 08:41 PM Bug #19567: TestLibRBD.UpdateFeatures fails in jewel 10.2.8 integration testing
- http://pulpito.ceph.com/smithfarm-2017-04-10_19:45:29-upgrade:client-upgrade-wip-jewel-backports-distro-basic-vps/100...
- 03:11 PM Bug #19567 (Duplicate): TestLibRBD.UpdateFeatures fails in jewel 10.2.8 integration testing
- Test: upgrade:client-upgrade/infernalis-client-x/basic/{0-cluster/start.yaml 1-install/infernalis-client-x.yaml 2-wor...
- 08:29 PM Bug #19570 (Won't Fix): hammer: incorrect diffs for truncated objects in a cloned image
- Create a 4MB, non-sparse parent image and clone it to a new 4MB child image. Take a snapshot, shrink the image down t...
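The reproduction steps above can be sketched with the rbd CLI. This is a transcript sketch, not a runnable script: it assumes a running cluster, and the pool/image/snapshot names are invented.

```shell
rbd create --size 4 rbd/parent                 # 4 MB image (--size is in MB)
# ...write 4 MB of data so the parent is fully non-sparse...
rbd snap create rbd/parent@base
rbd snap protect rbd/parent@base
rbd clone rbd/parent@base rbd/child            # 4 MB child image
rbd snap create rbd/child@snap1
rbd resize --allow-shrink --size 2 rbd/child   # shrink, truncating objects
rbd diff --from-snap snap1 rbd/child           # diffs reported here were incorrect on hammer
```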
- 02:18 PM Feature #18748 (Resolved): [cli] add ability to demote/promote all mirrored images in a pool
04/09/2017
04/08/2017
- 10:40 AM Bug #18982: How to get out of weird situation after rbd flatten?
- We managed to work around this issue by manually editing the rbd metadata objects. You can close this if you like.
04/06/2017
04/04/2017
- 06:03 PM Bug #17913: librbd io deadlock after host lost network connectivity
- Ah, that was maybe a confusion. My instance of this error only had the first message:...
- 04:59 PM Bug #17913: librbd io deadlock after host lost network connectivity
- @Christian: I just think you need to recreate and get a fresh set of backtraces. Since you had the log message of "he...
- 11:58 AM Bug #17913: librbd io deadlock after host lost network connectivity
- Hey, @jdillaman, on IRC you mentioned no threads were stuck, but we lost touch. Is your indication that this isn't a ...
- 01:11 PM Bug #18738: [ FAILED ] TestJournalTrimmer.RemoveObjectsWithOtherClient
- Removed the "copied to" link so the automated scripting doesn't complain about it.
- 12:42 PM Backport #19468 (Resolved): jewel: [api] is_exclusive_lock_owner doesn't detect that it has been ...
- https://github.com/ceph/ceph/pull/14481
- 12:42 PM Backport #19467 (Resolved): kraken: [api] is_exclusive_lock_owner doesn't detect that it has been...
- https://github.com/ceph/ceph/pull/14480
04/03/2017
- 08:50 PM Feature #19457 (Closed): [api] explicit refresh image command
- For several iSCSI commands (report, pgrs, etc), there is the possibility that the receiver of the request could be ou...
- 02:08 PM Feature #19451 (Resolved): [python] image metadata APIs are not available
- Add the metadata_get/set/remove/list methods to the Python RBD API.