Activity
From 01/25/2018 to 02/23/2018
02/23/2018
- 03:25 AM Backport #23064 (In Progress): luminous: "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-b...
- https://github.com/ceph/ceph/pull/20550
- 01:41 AM Feature #23086 (Duplicate): Implement a new rbd command "actual-size" to find the actual size of ...
- 01:36 AM Feature #23086: Implement a new rbd command "actual-size" to find the actual size of RBD images
- Jason Dillaman wrote:
> How is this different from "rbd disk-usage <image-spec>"?
Sorry, we didn't notice that rb...
02/22/2018
- 09:15 PM Feature #23086 (Need More Info): Implement a new rbd command "actual-size" to find the actual siz...
- How is this different from "rbd disk-usage <image-spec>"?
- 12:40 PM Feature #23086 (Duplicate): Implement a new rbd command "actual-size" to find the actual size of ...
- Recently, we find it really meaningful to find the actual size of RBD images, which would provide us the basis for th...
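For context on the question above: "rbd disk-usage" (alias "rbd du") already reports provisioned versus actually allocated bytes. A minimal sketch of computing the same number through the python-rbd bindings; the pool name 'rbd' and image name 'myimage' are placeholders.

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd')          # pool name assumed
        image = rbd.Image(ioctx, 'myimage')        # image name assumed
        try:
            used = [0]

            def count_extent(offset, length, exists):
                # diff_iterate against a None base snapshot walks every
                # allocated extent; sum only the ones that exist on disk
                if exists:
                    used[0] += length

            image.diff_iterate(0, image.size(), None, count_extent)
            print('provisioned: %d bytes, used: %d bytes'
                  % (image.size(), used[0]))
        finally:
            image.close()
            ioctx.close()
    finally:
        cluster.shutdown()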
- 02:54 AM Backport #23065 (In Progress): jewel: "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basi...
- https://github.com/ceph/ceph/pull/20524
02/21/2018
- 01:24 PM Bug #23068: TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
- PR: https://github.com/ceph/ceph/pull/20507
- 01:23 PM Bug #23068 (Resolved): TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
- http://qa-proxy.ceph.com/teuthology/trociny-2018-02-20_16:16:39-rbd-wip-mgolub-testing-distro-basic-smithi/2207758/te...
- 11:11 AM Backport #23065 (Resolved): jewel: "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basic-m...
- https://github.com/ceph/ceph/pull/20524
- 11:11 AM Backport #23064 (Resolved): luminous: "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basi...
- https://github.com/ceph/ceph/pull/20550
02/20/2018
- 12:39 PM Bug #23043 (Resolved): [test] permissions.sh should be updated to use 'profile rbd'-style permiss...
- 12:36 PM Bug #11502 (Pending Backport): "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basic-mult run
02/19/2018
- 10:57 PM Bug #21956 (Resolved): [journal] possible infinite loop within journal:expire_tags class method
- 10:57 PM Bug #21628 (Resolved): compare-and-write -EILSEQ failures should be filtered when committing jour...
- 08:19 PM Bug #23043 (Fix Under Review): [test] permissions.sh should be updated to use 'profile rbd'-style...
- *PR*: https://github.com/ceph/ceph/pull/20491
- 08:15 PM Bug #23043 (In Progress): [test] permissions.sh should be updated to use 'profile rbd'-style perm...
- 07:45 PM Bug #23043 (Resolved): [test] permissions.sh should be updated to use 'profile rbd'-style permiss...
- 03:31 PM Bug #23038 (Resolved): rbd: import with option --export-format fails to protect snapshot
- Following up on this mailing list thread http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-February/024707.htm...
- 03:26 PM Bug #11502 (Fix Under Review): "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basic-mult run
- *PR*: https://github.com/ceph/ceph/pull/20486
- 02:52 PM Bug #11502 (In Progress): "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basic-mult run
- This appears to be a race condition between the tear down of the previous test case and this test case. The previous ...
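One generic way to avoid this kind of teardown race in a test, sketched with the python-rbd bindings rather than the actual C++ gtest code: poll until the lock left over from the previous test case is gone before making assertions. The helper and timeout are invented for illustration.

    import time

    def wait_for_lock_release(image, timeout=30.0):
        # 'image' is an open rbd.Image handle; list_lockers() returns an
        # empty result once the previous client's advisory lock is released
        deadline = time.time() + timeout
        while time.time() < deadline:
            lockers = image.list_lockers()
            if not lockers or not lockers.get('lockers'):
                return True
            time.sleep(0.5)
        return False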
02/16/2018
- 04:46 AM Backport #23011 (In Progress): luminous: [journal] allocating a new tag after acquiring the lock ...
- https://github.com/ceph/ceph/pull/20454
02/15/2018
- 05:33 PM Bug #22740: "[ FAILED ] TestClsRbd.snapshots_namespaces" in upgrade:kraken-x-luminous-distro-ba...
- See http://pulpito.ceph.com/yuriw-2018-02-13_21:14:53-upgrade:kraken-x-luminous-distro-basic-smithi/
- 03:15 PM Backport #23012 (Resolved): jewel: [journal] allocating a new tag after acquiring the lock should...
- https://github.com/ceph/ceph/pull/21206
- 03:15 PM Backport #23011 (Resolved): luminous: [journal] allocating a new tag after acquiring the lock sho...
- https://github.com/ceph/ceph/pull/20454
- 01:09 PM Bug #22945 (Pending Backport): [journal] allocating a new tag after acquiring the lock should use...
02/14/2018
02/13/2018
- 04:24 PM Bug #22945 (Fix Under Review): [journal] allocating a new tag after acquiring the lock should use...
- *PR*: https://github.com/ceph/ceph/pull/20423
- 10:36 AM Backport #22965 (In Progress): jewel: [rbd-mirror] infinite loop is possible when formatting the ...
- https://github.com/ceph/ceph/pull/20418
- 09:01 AM Backport #22964 (In Progress): luminous: [rbd-mirror] infinite loop is possible when formatting t...
- https://github.com/ceph/ceph/pull/20416
02/12/2018
- 10:53 PM Bug #22362 (Fix Under Review): cluster resource agent ocf:ceph:rbd - wrong permissions
- 11:34 AM Bug #22362 (Pending Backport): cluster resource agent ocf:ceph:rbd - wrong permissions
- Follow-on master fix: https://github.com/ceph/ceph/pull/20397
- 09:39 PM Bug #22945 (In Progress): [journal] allocating a new tag after acquiring the lock should use on-d...
- 08:04 PM Bug #22979 (Fix Under Review): test_librbd_python.sh fails in upgrade test
- *PR*: https://github.com/ceph/ceph/pull/20406
- 07:07 PM Bug #22979 (In Progress): test_librbd_python.sh fails in upgrade test
- 11:34 AM Backport #22454 (In Progress): luminous: cluster resource agent ocf:ceph:rbd - wrong permissions
- Reopening to backport follow-on fix https://github.com/ceph/ceph/pull/20397
02/11/2018
- 10:46 PM Feature #22981 (Resolved): [group] add 'rbd group rename" action to the CLI
- 01:15 AM Bug #22979 (Resolved): test_librbd_python.sh fails in upgrade test
- http://pulpito.ceph.com/kchai-2018-02-10_17:19:49-rados-master-distro-basic-smithi/
02/09/2018
- 02:48 PM Cleanup #22975 (New): [librbd] remove copies of configuration settings from ImageCtx
- With the new md_config_t thread-safe configuration model, metadata config overrides should just directly update the r...
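Background for this cleanup: per-image configuration overrides are regular image-meta entries whose keys carry a "conf_" prefix, layered on top of the global configuration. A small illustration with the python-rbd bindings; the helper is invented and the setting shown is just an example.

    def show_config_overrides(image):
        # 'image' is an open rbd.Image handle
        image.metadata_set('conf_rbd_cache', 'false')   # per-image override
        print(image.metadata_get('conf_rbd_cache'))     # -> 'false'
        for key, value in image.metadata_list():        # all metadata pairs
            print(key, value)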
- 02:02 PM Cleanup #22960 (In Progress): [librbd] provide plug-in object-based cache interface
- 12:43 PM Bug #22950 (Resolved): [test] cli_generic fails on deep-copy tests if v1 image format or deep-fla...
- 07:54 AM Backport #22965 (Resolved): jewel: [rbd-mirror] infinite loop is possible when formatting the sta...
- https://github.com/ceph/ceph/pull/20418
- 07:54 AM Backport #22964 (Resolved): luminous: [rbd-mirror] infinite loop is possible when formatting the ...
- https://github.com/ceph/ceph/pull/20416
- 12:19 AM Bug #22932 (Pending Backport): [rbd-mirror] infinite loop is possible when formatting the status ...
- 12:12 AM Bug #22961: [test] OpenStack tempest test is failing across all branches (again)
- http://qa-proxy.ceph.com/teuthology/jdillaman-2018-02-08_15:10:54-rbd-wip-jd-testing-distro-basic-smithi/2170821/remo...
- 12:12 AM Bug #22961 (Resolved): [test] OpenStack tempest test is failing across all branches (again)
- Tempest was refactored and it broke the cinder.tests.tempest.api.volume.test_volume_unicode.CinderUnicodeTest test case.
- 12:02 AM Feature #22873 (Fix Under Review): [clone v2] removing an image should automatically delete snaps...
- *PR*: https://github.com/ceph/ceph/pull/20376
02/08/2018
- 11:22 PM Cleanup #22960 (Resolved): [librbd] provide plug-in object-based cache interface
- Remove the direct hooks to ObjectCacher and move it under an abstract librbd::cache::ObjectCache/librbd::cache::Objec...
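The real interface is C++ inside librbd; the following is only a language-neutral sketch (rendered here in Python) of the plug-in shape being described: callers depend on an abstract cache type, and ObjectCacher becomes one interchangeable implementation behind it.

    from abc import ABC, abstractmethod

    class ObjectCache(ABC):
        @abstractmethod
        def read(self, object_no, offset, length):
            """Serve from cache or fall through to the backing store."""

        @abstractmethod
        def write(self, object_no, offset, data):
            """Buffer (write-back) or forward (write-through) a write."""

        @abstractmethod
        def invalidate(self):
            """Drop cached state, e.g. on image refresh."""

    class PassThroughCache(ObjectCache):
        # trivial plug-in: no caching, everything hits the store directly
        def __init__(self, store):
            self.store = store

        def read(self, object_no, offset, length):
            return self.store.read(object_no, offset, length)

        def write(self, object_no, offset, data):
            self.store.write(object_no, offset, data)

        def invalidate(self):
            pass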
- 04:16 PM Feature #22873 (In Progress): [clone v2] removing an image should automatically delete snapshots ...
- 11:55 AM Bug #22950: [test] cli_generic fails on deep-copy tests if v1 image format or deep-flatten disabled
- PR: https://github.com/ceph/ceph/pull/20364
02/07/2018
- 05:22 PM Bug #22950 (Resolved): [test] cli_generic fails on deep-copy tests if v1 image format or deep-fla...
- http://qa-proxy.ceph.com/teuthology/trociny-2018-02-07_16:22:13-rbd-wip-xxg-testing-distro-basic-smithi/2165475/teuth...
- 12:57 PM Bug #22945 (Resolved): [journal] allocating a new tag after acquiring the lock should use on-disk...
- Related to issue #22932
If a client crashes before persisting its commit position and recovers, it will replay the...
- 11:31 AM Bug #22932 (Fix Under Review): [rbd-mirror] infinite loop is possible when formatting the status ...
- PR: https://github.com/ceph/ceph/pull/20349
- 11:30 AM Bug #22932: [rbd-mirror] infinite loop is possible when formatting the status message
- Below is an extract from the log file for this particular case, with the comments that show how it ended up with that...
- 07:52 AM Bug #22932 (In Progress): [rbd-mirror] infinite loop is possible when formatting the status message
02/06/2018
- 07:51 PM Bug #22932: [rbd-mirror] infinite loop is possible when formatting the status message
- The tag_tid values should always be increasing, so the "while (master.tag_tid != mirror_tag_tid)" loop could really j...
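A minimal sketch of that observation (the real code is C++ in rbd-mirror): since tag_tid values are monotonically increasing, the distance between the two positions can be computed with one guarded subtraction instead of a loop that spins forever if the values never become equal.

    def entries_behind(master_tag_tid, mirror_tag_tid):
        # one comparison replaces the unbounded "while (master.tag_tid !=
        # mirror_tag_tid)" style loop; unexpected ordering yields 0 instead
        # of hanging
        return max(0, master_tag_tid - mirror_tag_tid)

    assert entries_behind(7, 4) == 3
    assert entries_behind(4, 7) == 0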
- 07:38 PM Bug #22932 (Resolved): [rbd-mirror] infinite loop is possible when formatting the status message
- Per Mykola Golub:...
- 01:39 PM Cleanup #16465 (Resolved): rbd discard ret value truncated
- 01:39 PM Backport #22913 (Resolved): jewel: rbd discard ret value truncated
- 01:38 PM Bug #21966 (Resolved): class rbd.Image discard----OSError: [errno 2147483648] error discarding re...
- 01:38 PM Backport #22191 (Resolved): jewel: class rbd.Image discard----OSError: [errno 2147483648] error d...
- 01:37 PM Bug #22011 (Resolved): abort in listing mapped nbd devices when running in a container
- 01:37 PM Backport #22186 (Resolved): jewel: abort in listing mapped nbd devices when running in a container
- 01:36 PM Bug #21558 (Resolved): rbd ls -l crashes with SIGABRT
- 01:36 PM Backport #21642 (Resolved): jewel: rbd ls -l crashes with SIGABRT
- 01:33 PM Bug #21960 (Resolved): [journal] tags are not being expired if no other clients are registered
- 01:33 PM Backport #21971 (Resolved): jewel: [journal] tags are not being expired if no other clients are r...
- 01:32 PM Bug #21179 (Resolved): [rbd] image-meta list does not return all entries
- 01:32 PM Backport #21290 (Resolved): jewel: [rbd] image-meta list does not return all entries
- 01:31 PM Bug #21248 (Resolved): [cli] rename of non-existent image results in seg fault
- 01:31 PM Backport #21266 (Resolved): jewel: [cli] rename of non-existent image results in seg fault
- 01:29 PM Bug #18435 (Resolved): [ FAILED ] TestLibRBD.RenameViaLockOwner
- 01:28 PM Backport #22594 (Resolved): jewel: [ FAILED ] TestLibRBD.RenameViaLockOwner
- 01:26 PM Bug #22461 (Resolved): [rbd-mirror] new pools might not be detected
- 01:26 PM Backport #22498 (Resolved): jewel: [rbd-mirror] new pools might not be detected
- 01:25 PM Bug #22200 (Resolved): 'rbd du' on empty pool results in output of "specified image"
- 01:25 PM Backport #22209 (Resolved): jewel: 'rbd du' on empty pool results in output of "specified image"
- 01:24 PM Bug #22131 (Resolved): [rbd-nbd] Fedora does not register resize events
- 01:24 PM Backport #22173 (Resolved): jewel: [rbd-nbd] Fedora does not register resize events
- 01:23 PM Bug #22158 (Resolved): *** Caught signal (Segmentation fault) ** in thread thread_name:tp_librbd
- 01:23 PM Backport #22170 (Resolved): jewel: *** Caught signal (Segmentation fault) ** in thread thread_nam...
02/04/2018
- 03:33 AM Backport #22913 (In Progress): jewel: rbd discard ret value truncated
- 03:30 AM Backport #22913 (Resolved): jewel: rbd discard ret value truncated
- https://github.com/ceph/ceph/pull/20287
- 03:27 AM Cleanup #16465 (Pending Backport): rbd discard ret value truncated
- 02:31 AM Bug #21663 (Resolved): [qa] rbd_mirror_helpers.sh request_resync_image function saves image id to...
- 02:30 AM Backport #21691 (Resolved): jewel: [qa] rbd_mirror_helpers.sh request_resync_image function saves...
02/03/2018
- 10:13 PM Cleanup #16465 (Resolved): rbd discard ret value truncated
- https://github.com/ceph/ceph/pull/9856
- 09:17 PM Backport #22191 (In Progress): jewel: class rbd.Image discard----OSError: [errno 2147483648] erro...
- 08:48 PM Backport #22186 (In Progress): jewel: abort in listing mapped nbd devices when running in a conta...
- 08:34 PM Backport #22175 (In Progress): jewel: possible deadlock in various maintenance operations
- 07:51 PM Backport #21971 (In Progress): jewel: [journal] tags are not being expired if no other clients ar...
- 07:47 PM Backport #21915 (Need More Info): jewel: [rbd-mirror] peer cluster connections should filter out ...
- @Jason - this one does not look trivial, either
- 07:46 PM Backport #21867 (Need More Info): jewel: [object map] removing a large image (~100TB) with an obj...
- another non-trivial one
- 07:45 PM Backport #21689 (Need More Info): jewel: Possible deadlock in 'list_children' when refresh is req...
- @Jason - assigning this non-trivial backport to you. Thanks!
- 07:30 PM Backport #21442 (Need More Info): jewel: [cli] mirror "getter" commands will fail if mirroring ha...
- non-trivial backport; needs rbd developer
- 07:14 PM Backport #21290 (In Progress): jewel: [rbd] image-meta list does not return all entries
- 07:13 PM Backport #21266 (In Progress): jewel: [cli] rename of non-existent image results in seg fault
02/02/2018
- 06:16 AM Backport #22857 (In Progress): luminous: librbd::object_map::InvalidateRequest: 0x7fbd100beed0 sh...
- https://github.com/ceph/ceph/pull/20253
02/01/2018
- 05:31 PM Bug #22321 (Resolved): ceph 12.2.x Luminous: Build fails with --without-radosgw
- 05:30 PM Backport #22375 (Resolved): luminous: ceph 12.2.x Luminous: Build fails with --without-radosgw
- 05:11 PM Bug #20789 (Resolved): Compare and write against a clone can result in failure
- 05:10 PM Backport #22198 (Resolved): luminous: Compare and write against a clone can result in failure
- 04:41 PM Backport #21914 (Resolved): luminous: [rbd-mirror] peer cluster connections should filter out com...
- 04:15 PM Bug #21391 (Resolved): [tcmu-runner] export librbd IO perf counters to mgr
- 04:15 PM Backport #22033 (Resolved): luminous: [tcmu-runner] export librbd IO perf counters to mgr
- 04:15 PM Backport #22169 (Resolved): luminous: *** Caught signal (Segmentation fault) ** in thread thread_...
- 04:14 PM Feature #21849 (Resolved): sparse-reads should not be used for small IO requests
- 04:13 PM Backport #21920 (Resolved): luminous: sparse-reads should not be used for small IO requests
- 04:13 PM Bug #21961 (Resolved): [rbd-mirror] spurious "bufferlist::end_of_buffer" exception
- 04:13 PM Backport #21969 (Resolved): luminous: [rbd-mirror] spurious "bufferlist::end_of_buffer" exception
- 04:13 PM Bug #21561 (Resolved): [rbd-mirror] primary image should register in remote, non-primary image's ...
- 04:12 PM Backport #21793 (Resolved): luminous: [rbd-mirror] primary image should register in remote, non-p...
- 04:12 PM Backport #21694 (Resolved): luminous: compare-and-write -EILSEQ failures should be filtered when ...
- 04:11 PM Backport #22577 (Resolved): luminous: [test] rbd-mirror split brain test case can have a false-po...
- 04:11 PM Backport #22809 (Resolved): luminous: rbd snap create/rm takes 60s long
- 04:09 PM Bug #22791 (Resolved): [librbd] force removing snapshots cannot remove children
- 04:09 PM Backport #22806 (Resolved): luminous: [librbd] force removing snapshots cannot remove children
- 04:00 PM Backport #22174 (Resolved): luminous: possible deadlock in various maintenance operations
- 03:08 PM Cleanup #22738 (In Progress): [test] separate v1 format tests from v2 format tests under teuthology
- 02:40 PM Feature #22874 (Duplicate): [clone v2] configurable setting to move images to trash upon remove r...
- The option could support the enum values "never" (default), "always", and "in-use" (image is the parent of a clone).
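A hypothetical sketch of how such a policy could behave (the option name and helper are invented here, not an existing Ceph setting), using the python-rbd bindings: "in-use" defers deletion to the trash only when the image still parents a clone.

    import rbd

    def remove_or_trash(ioctx, image_name, policy='never'):
        with rbd.Image(ioctx, image_name) as image:
            has_clones = len(list(image.list_children())) > 0
        if policy == 'always' or (policy == 'in-use' and has_clones):
            # defer deletion: move the image to the trash with no delay
            rbd.RBD().trash_move(ioctx, image_name, 0)
        else:
            rbd.RBD().remove(ioctx, image_name)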
- 02:38 PM Feature #22873 (Resolved): [clone v2] removing an image should automatically delete snapshots in ...
- 02:32 PM Bug #22872 (Resolved): "rbd trash purge --threshold" should support data pool
- Currently only the base pool is used for calculating usage.
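Illustration of the gap (invented helper, not the actual CLI code): with a separate data pool, a usage-threshold check has to add up both pools rather than inspecting the base pool alone.

    def combined_used_bytes(base_ioctx, data_ioctx=None):
        # rados.Ioctx.get_stats() reports per-pool usage, including num_bytes
        used = base_ioctx.get_stats()['num_bytes']
        if data_ioctx is not None:
            used += data_ioctx.get_stats()['num_bytes']  # count the data pool too
        return used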
- 10:49 AM Backport #22857 (Resolved): luminous: librbd::object_map::InvalidateRequest: 0x7fbd100beed0 shoul...
- https://github.com/ceph/ceph/pull/20253
- 10:01 AM Bug #22819 (Pending Backport): librbd::object_map::InvalidateRequest: 0x7fbd100beed0 should_compl...
01/31/2018
- 09:54 PM Bug #22819 (Fix Under Review): librbd::object_map::InvalidateRequest: 0x7fbd100beed0 should_compl...
- *PR*: https://github.com/ceph/ceph/pull/20214
- 09:27 PM Bug #22819 (In Progress): librbd::object_map::InvalidateRequest: 0x7fbd100beed0 should_complete: r=0
- 08:07 PM Documentation #14539 (Resolved): rbd CLI man page is missing several commands
- *PR*: https://github.com/ceph/ceph/pull/19659/commits/b00047ac253d0aa3ff22f1a93300040e424d25d0
- 08:06 PM Documentation #16999 (Resolved): RBD quick start guide will fail due to default image features
- *PR*: https://github.com/ceph/ceph/pull/19659/commits/5ae3122b039c635bf06570ef4beee246b9c6fbe7
- 12:45 PM Documentation #16999: RBD quick start guide will fail due to default image features
- Changed the spacing in the given quick-rbd.rst. Kindly verify the file.
- 06:28 PM Documentation #21763 (Resolved): [iscsi] documentation tweaks
- 06:27 PM Backport #21868 (Resolved): luminous: [iscsi] documentation tweaks
- 06:07 PM Backport #21868 (Fix Under Review): luminous: [iscsi] documentation tweaks
- 05:13 PM Backport #21868 (In Progress): luminous: [iscsi] documentation tweaks
- 05:12 PM Backport #22198 (Fix Under Review): luminous: Compare and write against a clone can result in fai...
- 04:48 PM Backport #22198 (In Progress): luminous: Compare and write against a clone can result in failure
- 04:46 PM Backport #22169 (Fix Under Review): luminous: *** Caught signal (Segmentation fault) ** in thread...
- 04:45 PM Backport #22169 (In Progress): luminous: *** Caught signal (Segmentation fault) ** in thread thre...
- 04:45 PM Backport #22033 (Fix Under Review): luminous: [tcmu-runner] export librbd IO perf counters to mgr
- 04:40 PM Backport #22033 (In Progress): luminous: [tcmu-runner] export librbd IO perf counters to mgr
- 04:39 PM Backport #21969 (Fix Under Review): luminous: [rbd-mirror] spurious "bufferlist::end_of_buffer" e...
- 04:36 PM Backport #21969 (In Progress): luminous: [rbd-mirror] spurious "bufferlist::end_of_buffer" exception
- 04:33 PM Backport #21920 (Fix Under Review): luminous: sparse-reads should not be used for small IO requests
- 04:30 PM Backport #21920 (In Progress): luminous: sparse-reads should not be used for small IO requests
- 04:28 PM Backport #21793 (Fix Under Review): luminous: [rbd-mirror] primary image should register in remot...
- 04:26 PM Backport #21793 (In Progress): luminous: [rbd-mirror] primary image should register in remote, no...
- 04:25 PM Backport #21694 (Fix Under Review): luminous: compare-and-write -EILSEQ failures should be filter...
- 04:23 PM Backport #21694 (In Progress): luminous: compare-and-write -EILSEQ failures should be filtered wh...
- 04:22 PM Backport #22577 (Fix Under Review): luminous: [test] rbd-mirror split brain test case can have a ...
- 04:18 PM Backport #22577 (In Progress): luminous: [test] rbd-mirror split brain test case can have a false...
- 03:04 PM Bug #21771 (Resolved): [journal] possible infinite loop within journal:tag_list class method
- 03:03 PM Backport #21782 (Resolved): luminous: [journal] possible infinite loop within journal:tag_list cl...
- 03:03 PM Backport #21855 (Resolved): luminous: [object map] removing a large image (~100TB) with an object...
- 03:03 PM Backport #21968 (Resolved): luminous: [journal] possible infinite loop within journal:expire_tags...
- 03:02 PM Backport #21970 (Resolved): luminous: [journal] tags are not being expired if no other clients ar...
- 12:27 AM Backport #21970: luminous: [journal] tags are not being expired if no other clients are registered
- Shinobu Kinjo wrote:
> https://github.com/ceph/ceph/pull/18840
merged
- 03:01 PM Backport #22172 (Resolved): luminous: [rbd-nbd] Fedora does not register resize events
- 03:00 PM Backport #22185 (Resolved): luminous: abort in listing mapped nbd devices when running in a conta...
- 12:25 AM Backport #22185: luminous: abort in listing mapped nbd devices when running in a container
- Shinobu Kinjo wrote:
> https://github.com/ceph/ceph/pull/19051
merged
- 03:00 PM Feature #21936 (Resolved): [test] UpdateFeatures RPC message should be included in test_notify.py
- 02:59 PM Backport #21973 (Resolved): luminous: [test] UpdateFeatures RPC message should be included in tes...
- 12:28 AM Backport #21973: luminous: [test] UpdateFeatures RPC message should be included in test_notify.py
- Shinobu Kinjo wrote:
> https://github.com/ceph/ceph/pull/18838
merged
- 01:32 PM Bug #22660: Inconsistency raised while performing multiple "image rename" in parallel.
- As this is an open source project, you are more than welcome to post a proposed fix for the issue yourself.
- 06:56 AM Bug #22660: Inconsistency raised while performing multiple "image rename" in parallel.
- I also checked this behavior for other Ceph operations and found everything working as expected on t...
- 03:36 AM Bug #22660: Inconsistency raised while performing multiple "image rename" in parallel.
- But in my opinion it must be fixed. Since multiple clients have access rights to the image, a user can rename the ...
- 08:32 AM Bug #17494: memory leak in MirroringWatcher::notify_image_updated
- I looked at the code and found that the issue does not affect Jewel. Thank you!
01/30/2018
- 01:32 PM Bug #22660: Inconsistency raised while performing multiple "image rename" in parallel.
- Sure, not disagreeing that this is undesirable -- but since this is an arbitrary use case that doesn't affect data co...
- 05:40 AM Bug #22660: Inconsistency raised while performing multiple "image rename" in parallel.
- Actually, we have checked this scenario based on the features Ceph provides.
As Ceph provides parallel access to ...
- 05:30 AM Backport #22810 (Need More Info): jewel: rbd snap create/rm takes 60s long
- class ceph::BitVector does not have member begin and end functions in jewel. We need to backport commit-id daa29f7d2b...
01/29/2018
- 02:57 PM Bug #22803 (Fix Under Review): [test] cli_generic sporadically fails on "rbd trash purge --thresh...
- PR: https://github.com/ceph/ceph/pull/20170
- 02:55 AM Backport #22809 (In Progress): luminous: rbd snap create/rm takes 60s long
- https://github.com/ceph/ceph/pull/20153
01/28/2018
- 10:58 AM Bug #17993: rbd-mirror: potential race mirroring cloned image
- Nathan Cutler wrote:
> I attempted the jewel backport at #18500 but it is beyond my abilities.
@Nathan OK, thanks.
01/27/2018
- 12:58 PM Documentation #18197: Remove any pre-OpenStack Liberty references from the RBD documentation
- ... and now it can be cleaned up through Newton.
- 06:45 AM Documentation #18197: Remove any pre-OpenStack Liberty references from the RBD documentation
- @Kallepalli: consider it assigned. If you care to work on it, please go ahead.
- 08:45 AM Bug #22819 (Resolved): librbd::object_map::InvalidateRequest: 0x7fbd100beed0 should_complete: r=0
- What I did:
1. The RBD image was created with a jewel client on a Kraken cluster.
2. The cluster and clients are now 12.12.2.
3. I was e...
01/26/2018
- 06:13 PM Backport #22395 (Resolved): luminous: librbd: cannot clone all image-metas if we have more than 6...
- 06:12 PM Backport #22393 (Resolved): luminous: librbd: cannot copy all image-metas if we have more than 64...
- 06:11 PM Bug #22362 (Resolved): cluster resource agent ocf:ceph:rbd - wrong permissions
- 06:11 PM Backport #22454 (Resolved): luminous: cluster resource agent ocf:ceph:rbd - wrong permissions
- 06:10 PM Backport #22497 (Resolved): luminous: [rbd-mirror] new pools might not be detected
- 06:02 PM Bug #17993: rbd-mirror: potential race mirroring cloned image
- I attempted the jewel backport at #18500 but it is beyond my abilities.
- 01:15 PM Bug #22786 (Resolved): [test] OpenStack tempest test is failing across all branches
- 01:12 PM Bug #22786 (Pending Backport): [test] OpenStack tempest test is failing across all branches
- 12:03 AM Bug #22786 (Fix Under Review): [test] OpenStack tempest test is failing across all branches
- *PR*: https://github.com/ceph/ceph/pull/20124
- 01:15 PM Backport #22815 (Resolved): luminous: [test] OpenStack tempest test is failing across all branches
- 01:13 PM Backport #22815 (In Progress): luminous: [test] OpenStack tempest test is failing across all bran...
- 01:13 PM Backport #22815 (Resolved): luminous: [test] OpenStack tempest test is failing across all branches
- https://github.com/ceph/ceph/pull/20136
- 01:10 PM Backport #22806 (In Progress): luminous: [librbd] force removing snapshots cannot remove children
- 08:00 AM Backport #22806 (Resolved): luminous: [librbd] force removing snapshots cannot remove children
- https://github.com/ceph/ceph/pull/20135
- 08:00 AM Backport #22810 (Resolved): jewel: rbd snap create/rm takes 60s long
- https://github.com/ceph/ceph/pull/21220
- 08:00 AM Backport #22809 (Resolved): luminous: rbd snap create/rm takes 60s long
- https://github.com/ceph/ceph/pull/20153
- 05:45 AM Documentation #18197: Remove any pre-OpenStack Liberty references from the RBD documentation
- Can you assign this to me?
- 05:33 AM Bug #22791 (Pending Backport): [librbd] force removing snapshots cannot remove children
- 03:06 AM Bug #17494: memory leak in MirroringWatcher::notify_image_updated
- Jason Dillaman wrote:
> @liuzhong chen: Again, this is a closed (resolved) issue -- and it never affected the Jewel ...
- 03:00 AM Bug #22716 (Pending Backport): rbd snap create/rm takes 60s long
- 12:58 AM Bug #22803 (Resolved): [test] cli_generic sporadically fails on "rbd trash purge --threshold 0"
- http://qa-proxy.ceph.com/teuthology/jdillaman-2018-01-25_18:52:53-rbd-wip-jd-testing-distro-basic-smithi/2110101/teut...
01/25/2018
- 05:54 PM Backport #22208 (Resolved): luminous: 'rbd du' on empty pool results in output of "specified image"
- 04:17 PM Backport #22208: luminous: 'rbd du' on empty pool results in output of "specified image"
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19107
merged
- 05:04 PM Bug #21711 (Resolved): [journal] image-meta set event should refresh the image after its applied ...
- 05:04 PM Backport #21788 (Resolved): luminous: [journal] image-meta set event should refresh the image aft...
- 05:02 PM Feature #21088 (Resolved): rbd-mirror: Allow a different data-pool to be used on the secondary cl...
- 05:02 PM Backport #21700 (Resolved): luminous: rbd-mirror: Allow a different data-pool to be used on the s...
- 04:17 PM Backport #21700: luminous: rbd-mirror: Allow a different data-pool to be used on the secondary cl...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19305
merged
- 05:01 PM Bug #22306 (Resolved): Python RBD metadata_get does not work.
- 05:01 PM Backport #22376 (Resolved): luminous: Python RBD metadata_get does not work.
- 04:16 PM Backport #22376: luminous: Python RBD metadata_get does not work.
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19479
merged
- 05:01 PM Bug #21535 (Resolved): [rbd-mirror] image-meta is not replicated as part of initial sync
- 05:01 PM Backport #21644 (Resolved): luminous: [rbd-mirror] image-meta is not replicated as part of initia...
- 04:15 PM Backport #21644: luminous: [rbd-mirror] image-meta is not replicated as part of initial sync
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19484
merged
- 05:00 PM Backport #21641 (Resolved): luminous: rbd ls -l crashes with SIGABRT
- 04:10 PM Backport #21641: luminous: rbd ls -l crashes with SIGABRT
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19800
merged
- 05:00 PM Backport #21690 (Resolved): luminous: [qa] rbd_mirror_helpers.sh request_resync_image function sa...
- 04:10 PM Backport #21690: luminous: [qa] rbd_mirror_helpers.sh request_resync_image function saves image i...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19802
merged
- 04:59 PM Backport #22593 (Resolved): luminous: [ FAILED ] TestLibRBD.RenameViaLockOwner
- 04:57 PM Bug #21529 (Resolved): Image-meta should be dynamically refreshed
- 04:57 PM Backport #21646 (Resolved): luminous: Image-meta should be dynamically refreshed
- 04:16 PM Backport #21646: luminous: Image-meta should be dynamically refreshed
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19447
merged
- 04:46 PM Backport #22190 (Resolved): luminous: class rbd.Image discard----OSError: [errno 2147483648] erro...
- 04:45 PM Cleanup #22036 (Resolved): [api] compare-and-write methods not properly advertised
- 04:45 PM Backport #22073 (Resolved): luminous: [api] compare-and-write methods not properly advertised
- 04:18 PM Backport #22172: luminous: [rbd-nbd] Fedora does not register resize events
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19066
merged - 04:13 PM Backport #22395: luminous: librbd: cannot clone all image-metas if we have more than 64 key/value...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19503
merged - 04:12 PM Backport #22393: luminous: librbd: cannot copy all image-metas if we have more than 64 key/value ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19504
merged - 04:12 PM Backport #22454: luminous: cluster resource agent ocf:ceph:rbd - wrong permissions
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19554
merged - 04:11 PM Backport #22497: luminous: [rbd-mirror] new pools might not be detected
- Shinobu Kinjo wrote:
> https://github.com/ceph/ceph/pull/19625
merged - 04:09 PM Backport #22594: jewel: [ FAILED ] TestLibRBD.RenameViaLockOwner
- Prashant D wrote:
> https://github.com/ceph/ceph/pull/19855
merged
EDIT: this was not merged. Probably the lum...
- 02:01 PM Documentation #22533 (Resolved): [iscsi-gw]Incorrect package version is specified
- The v1.3.0 release of tcmu-runner is available and dev-signed builds are also available [1]
[1] https://3.chacra.c...
- 01:48 PM Documentation #22533: [iscsi-gw]Incorrect package version is specified
- Hi,
So we need to change the "tcmu-runner-1.3.0 or newer" package reference to tcmu-runner-1.3.0.
- 01:24 PM Bug #22660: Inconsistency raised while performing multiple "image rename" in parallel.
- But why do you have multiple clients attempting to rename the same image concurrently?
- 01:18 PM Bug #22660: Inconsistency raised while performing multiple "image rename" in parallel.
- When we try to rename the image simultaneously from multiple clients in parallel, we observe this unex...
- 06:06 AM Bug #15764: rbd-mirror bootstrap fails with -EEXIST when creating local image
- @Jason Dillaman: because I have to use Jewel's rbd-mirror, I am looking for all rbd-mirror bug fixes made after Jewel. I find s...
- 05:23 AM Bug #17993: rbd-mirror: potential race mirroring cloned image
- I wonder why this patch was marked for backport to Jewel but the backport was never done. Is there some problem backporting it to jewel, or somethin...