Activity
From 01/29/2018 to 02/27/2018
02/27/2018
- 07:12 PM Bug #22961 (Pending Backport): [test] OpenStack tempest test is failing across all branches (again)
- 06:43 PM Bug #22961 (Resolved): [test] OpenStack tempest test is failing across all branches (again)
- 06:40 PM Bug #23137: [upstream] rbd-nbd does not resize on Ubuntu
- Thanks, got it - seems that resolution stalled on the kernel side. I will follow up there.
- 02:05 PM Bug #22362: cluster resource agent ocf:ceph:rbd - wrong permissions
- Luminous backport of follow-up fix: https://github.com/ceph/ceph/pull/20617
- 02:01 PM Bug #22362 (Pending Backport): cluster resource agent ocf:ceph:rbd - wrong permissions
- 02:01 PM Backport #22454 (Resolved): luminous: cluster resource agent ocf:ceph:rbd - wrong permissions
- 12:54 PM Backport #23153 (Resolved): jewel: TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
- https://github.com/ceph/ceph/pull/20627
- 12:54 PM Backport #23152 (Resolved): luminous: TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
- https://github.com/ceph/ceph/pull/20628
- 06:28 AM Feature #23126 (Fix Under Review): Ceph is not allowing deletion of any snapshots where one of th...
- 05:59 AM Feature #23126: Ceph is not allowing deletion of any snapshots where one of the snapshot of the s...
- *PR*: https://github.com/ceph/ceph/pull/20608
- 04:20 AM Bug #23068 (Pending Backport): TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
02/26/2018
- 10:10 PM Bug #23143 (Fix Under Review): rbd-nbd can deadlock in logging thread
- https://github.com/ceph/ceph/pull/20681
- 10:09 PM Bug #23143 (Resolved): rbd-nbd can deadlock in logging thread
- 08:58 PM Bug #22961 (Fix Under Review): [test] OpenStack tempest test is failing across all branches (again)
- *PR*: https://github.com/ceph/ceph/pull/20599
- 04:35 PM Bug #22961 (In Progress): [test] OpenStack tempest test is failing across all branches (again)
- 08:13 PM Bug #23134: "-c" option of rbd-fuse does not work with relative path
- It's all the same thing -- rbd-fuse needs to accept standard Ceph startup options. Of course, nobody uses rbd-fuse no...
- 07:53 PM Bug #23134: "-c" option of rbd-fuse does not work with relative path
- Jason Dillaman wrote:
> See #12219
It does not describe anything about "-c".
- 01:12 PM Bug #23134 (Duplicate): "-c" option of rbd-fuse does not work with relative path
- See #12219
- 10:58 AM Bug #23134 (Duplicate): "-c" option of rbd-fuse does not work with relative path
- $ sudo ./bin/rbd-fuse /rbd_images/ -c /home/rishabh/repos/ceph/build/ceph.conf
$ sudo umount /rbd_images
$ sudo ls ...
- 07:55 PM Bug #23133: rbd-fuse fails to find a ceph.conf when /etc/ceph/ceph.conf is missing
- Jason Dillaman wrote:
> See #12219
#12219 only talks about checking $CEPH_CONF. The rest of the issue (not looking in CW...
- 01:12 PM Bug #23133 (Duplicate): rbd-fuse fails to find a ceph.conf when /etc/ceph/ceph.conf is missing
- See #12219
- 10:46 AM Bug #23133 (Duplicate): rbd-fuse fails to find a ceph.conf when /etc/ceph/ceph.conf is missing
- rbd-fuse fails to find ceph.conf when /etc/ceph/ceph.conf is missing. Ideally, it should look for ceph.conf in ~/.cep...
- 07:08 PM Bug #23137: [upstream] rbd-nbd does not resize on Ubuntu
- This is a known issue in the latest kernels and is unrelated to RBD [1].
[1] https://lkml.org/lkml/2018/2/19/565
- 04:08 PM Bug #23137: [upstream] rbd-nbd does not resize on Ubuntu
- Looks like the issue referenced in https://www.spinics.net/lists/ceph-devel/msg40171.html
This is happening in public releas...
- 03:51 PM Bug #23137 (Resolved): [upstream] rbd-nbd does not resize on Ubuntu
- rbd-nbd 12.2.3
After rbd resize, the corresponding mapped rbd-nbd device does not show the correct size, unless the device...
- 01:11 PM Bug #23131 (Rejected): Ceph allows to shrink the image size without giving any warning, where the...
- There is a reason we added the "--allow-shrink" option: so that the end-user certifies they are ready to shrink. RBD would hav...
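The behavior described in this rejection can be sketched as follows; the pool and image names are hypothetical, and the snippet guards on the `rbd` CLI being present since it assumes a reachable Ceph cluster:

```shell
IMAGE=mypool/myimage   # hypothetical image; adjust to your pool/image
if command -v rbd >/dev/null 2>&1; then
    rbd resize --size 20G "$IMAGE"                  # growing: no flag needed
    rbd resize --size 5G "$IMAGE" || true           # shrinking without the flag is refused
    rbd resize --size 5G --allow-shrink "$IMAGE"    # shrinking: user explicitly certifies it
else
    echo "rbd CLI not available; commands shown for illustration only"
fi
```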
- 04:58 AM Bug #23131 (Rejected): Ceph allows to shrink the image size without giving any warning, where the...
- Execution Steps:
-----------------
1. Create a glance image (glance is integrated with ceph) using "cirros/ubuntu"...
- 01:09 PM Bug #23127 (Rejected): "rbd du" command is not showing the proper used space of RBD
- You would need to configure discard for the OS/filesystem to actually release space.
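As a sketch of what "configure discard" means in the comment above (the device path and mount point are hypothetical):

```shell
MOUNTPOINT=/mnt/rbd-volume   # hypothetical mount of an RBD-backed filesystem
if command -v fstrim >/dev/null 2>&1 && mountpoint -q "$MOUNTPOINT" 2>/dev/null; then
    fstrim -v "$MOUNTPOINT"   # one-shot: release unused blocks back to the pool
else
    echo "no RBD-backed mount at $MOUNTPOINT; skipping"
fi
# Alternatively, mount with '-o discard' so space is released continuously:
#   mount -o discard /dev/rbd0 /mnt/rbd-volume
```

Until the guest filesystem issues discards, "rbd du" keeps counting blocks that were written once, even if the files on top of them were deleted.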
- 04:23 AM Bug #23127 (Rejected): "rbd du" command is not showing the proper used space of RBD
- Execution Steps:
-------------------
1. Create a provisioned image of size 20 GB
# rbd create temp/myimage20...
- 04:16 AM Feature #23126 (Resolved): Ceph is not allowing deletion of any snapsh...
- Execution Steps:
------------------
1. Create an rbd image
2. Create multiple snapshots of the same image
3. Ena...
02/24/2018
- 09:53 AM Feature #22981 (Fix Under Review): [group] add 'rbd group rename' action to the CLI
- 08:57 AM Feature #22981: [group] add 'rbd group rename' action to the CLI
- *PR*: https://github.com/ceph/ceph/pull/20577
02/23/2018
- 03:25 AM Backport #23064 (In Progress): luminous: "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-b...
- https://github.com/ceph/ceph/pull/20550
- 01:41 AM Feature #23086 (Duplicate): Implement a new rbd command "actual-size" to find the actual size of ...
- 01:36 AM Feature #23086: Implement a new rbd command "actual-size" to find the actual size of RBD images
- Jason Dillaman wrote:
> How is this different from "rbd disk-usage <image-spec>"?
Sorry, we didn't notice that rb...
02/22/2018
- 09:15 PM Feature #23086 (Need More Info): Implement a new rbd command "actual-size" to find the actual siz...
- How is this different from "rbd disk-usage <image-spec>"?
- 12:40 PM Feature #23086 (Duplicate): Implement a new rbd command "actual-size" to find the actual size of ...
- Recently, we find it really meaningful to find the actual size of RBD images, which would provide us the basis for th...
- 02:54 AM Backport #23065 (In Progress): jewel: "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basi...
- https://github.com/ceph/ceph/pull/20524
02/21/2018
- 01:24 PM Bug #23068: TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
- PR: https://github.com/ceph/ceph/pull/20507
- 01:23 PM Bug #23068 (Resolved): TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
- http://qa-proxy.ceph.com/teuthology/trociny-2018-02-20_16:16:39-rbd-wip-mgolub-testing-distro-basic-smithi/2207758/te...
- 11:11 AM Backport #23065 (Resolved): jewel: "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basic-m...
- https://github.com/ceph/ceph/pull/20524
- 11:11 AM Backport #23064 (Resolved): luminous: "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basi...
- https://github.com/ceph/ceph/pull/20550
02/20/2018
- 12:39 PM Bug #23043 (Resolved): [test] permissions.sh should be updated to use 'profile rbd'-style permiss...
- 12:36 PM Bug #11502 (Pending Backport): "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basic-mult run
02/19/2018
- 10:57 PM Bug #21956 (Resolved): [journal] possible infinite loop within journal:expire_tags class method
- 10:57 PM Bug #21628 (Resolved): compare-and-write -EILSEQ failures should be filtered when committing jour...
- 08:19 PM Bug #23043 (Fix Under Review): [test] permissions.sh should be updated to use 'profile rbd'-style...
- *PR*: https://github.com/ceph/ceph/pull/20491
- 08:15 PM Bug #23043 (In Progress): [test] permissions.sh should be updated to use 'profile rbd'-style perm...
- 07:45 PM Bug #23043 (Resolved): [test] permissions.sh should be updated to use 'profile rbd'-style permiss...
- 03:31 PM Bug #23038 (Resolved): rbd: import with option --export-format fails to protect snapshot
- Following up on this mailing list thread http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-February/024707.htm...
- 03:26 PM Bug #11502 (Fix Under Review): "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basic-mult run
- *PR*: https://github.com/ceph/ceph/pull/20486
- 02:52 PM Bug #11502 (In Progress): "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basic-mult run
- This appears to be a race condition between the tear down of the previous test case and this test case. The previous ...
02/16/2018
- 04:46 AM Backport #23011 (In Progress): luminous: [journal] allocating a new tag after acquiring the lock ...
- https://github.com/ceph/ceph/pull/20454
02/15/2018
- 05:33 PM Bug #22740: "[ FAILED ] TestClsRbd.snapshots_namespaces" in upgrade:kraken-x-luminous-distro-ba...
- See http://pulpito.ceph.com/yuriw-2018-02-13_21:14:53-upgrade:kraken-x-luminous-distro-basic-smithi/
- 03:15 PM Backport #23012 (Resolved): jewel: [journal] allocating a new tag after acquiring the lock should...
- https://github.com/ceph/ceph/pull/21206
- 03:15 PM Backport #23011 (Resolved): luminous: [journal] allocating a new tag after acquiring the lock sho...
- https://github.com/ceph/ceph/pull/20454
- 01:09 PM Bug #22945 (Pending Backport): [journal] allocating a new tag after acquiring the lock should use...
02/14/2018
02/13/2018
- 04:24 PM Bug #22945 (Fix Under Review): [journal] allocating a new tag after acquiring the lock should use...
- *PR*: https://github.com/ceph/ceph/pull/20423
- 10:36 AM Backport #22965 (In Progress): jewel: [rbd-mirror] infinite loop is possible when formatting the ...
- https://github.com/ceph/ceph/pull/20418
- 09:01 AM Backport #22964 (In Progress): luminous: [rbd-mirror] infinite loop is possible when formatting t...
- https://github.com/ceph/ceph/pull/20416
02/12/2018
- 10:53 PM Bug #22362 (Fix Under Review): cluster resource agent ocf:ceph:rbd - wrong permissions
- 11:34 AM Bug #22362 (Pending Backport): cluster resource agent ocf:ceph:rbd - wrong permissions
- Follow-on master fix: https://github.com/ceph/ceph/pull/20397
- 09:39 PM Bug #22945 (In Progress): [journal] allocating a new tag after acquiring the lock should use on-d...
- 08:04 PM Bug #22979 (Fix Under Review): test_librbd_python.sh fails in upgrade test
- *PR*: https://github.com/ceph/ceph/pull/20406
- 07:07 PM Bug #22979 (In Progress): test_librbd_python.sh fails in upgrade test
- 11:34 AM Backport #22454 (In Progress): luminous: cluster resource agent ocf:ceph:rbd - wrong permissions
- Reopening to backport follow-on fix https://github.com/ceph/ceph/pull/20397
02/11/2018
- 10:46 PM Feature #22981 (Resolved): [group] add 'rbd group rename' action to the CLI
- 01:15 AM Bug #22979 (Resolved): test_librbd_python.sh fails in upgrade test
- http://pulpito.ceph.com/kchai-2018-02-10_17:19:49-rados-master-distro-basic-smithi/
02/09/2018
- 02:48 PM Cleanup #22975 (New): [librbd] remove copies of configuration settings from ImageCtx
- With the new md_config_t thread-safe configuration model, metadata config overrides should just directly update the r...
- 02:02 PM Cleanup #22960 (In Progress): [librbd] provide plug-in object-based cache interface
- 12:43 PM Bug #22950 (Resolved): [test] cli_generic fails on deep-copy tests if v1 image format or deep-fla...
- 07:54 AM Backport #22965 (Resolved): jewel: [rbd-mirror] infinite loop is possible when formatting the sta...
- https://github.com/ceph/ceph/pull/20418
- 07:54 AM Backport #22964 (Resolved): luminous: [rbd-mirror] infinite loop is possible when formatting the ...
- https://github.com/ceph/ceph/pull/20416
- 12:19 AM Bug #22932 (Pending Backport): [rbd-mirror] infinite loop is possible when formatting the status ...
- 12:12 AM Bug #22961: [test] OpenStack tempest test is failing across all branches (again)
- http://qa-proxy.ceph.com/teuthology/jdillaman-2018-02-08_15:10:54-rbd-wip-jd-testing-distro-basic-smithi/2170821/remo...
- 12:12 AM Bug #22961 (Resolved): [test] OpenStack tempest test is failing across all branches (again)
- Tempest was refactored and it broke the cinder.tests.tempest.api.volume.test_volume_unicode.CinderUnicodeTest test case.
- 12:02 AM Feature #22873 (Fix Under Review): [clone v2] removing an image should automatically delete snaps...
- *PR*: https://github.com/ceph/ceph/pull/20376
02/08/2018
- 11:22 PM Cleanup #22960 (Resolved): [librbd] provide plug-in object-based cache interface
- Remove the direct hooks to ObjectCacher and move it under an abstract librbd::cache::ObjectCache/librbd::cache::Objec...
- 04:16 PM Feature #22873 (In Progress): [clone v2] removing an image should automatically delete snapshots ...
- 11:55 AM Bug #22950: [test] cli_generic fails on deep-copy tests if v1 image format or deep-flatten disabled
- PR: https://github.com/ceph/ceph/pull/20364
02/07/2018
- 05:22 PM Bug #22950 (Resolved): [test] cli_generic fails on deep-copy tests if v1 image format or deep-fla...
- http://qa-proxy.ceph.com/teuthology/trociny-2018-02-07_16:22:13-rbd-wip-xxg-testing-distro-basic-smithi/2165475/teuth...
- 12:57 PM Bug #22945 (Resolved): [journal] allocating a new tag after acquiring the lock should use on-disk...
- Related to issue #22932
If a client crashes before persisting its commit position and recovers, it will replay the...
- 11:31 AM Bug #22932 (Fix Under Review): [rbd-mirror] infinite loop is possible when formatting the status ...
- PR: https://github.com/ceph/ceph/pull/20349
- 11:30 AM Bug #22932: [rbd-mirror] infinite loop is possible when formatting the status message
- Below is an extract from the log file for this particular case, with the comments that show how it ended up with that...
- 07:52 AM Bug #22932 (In Progress): [rbd-mirror] infinite loop is possible when formatting the status message
02/06/2018
- 07:51 PM Bug #22932: [rbd-mirror] infinite loop is possible when formatting the status message
- The tag_tid values should always be increasing, so the "while (master.tag_tid != mirror_tag_tid)" loop could really j...
- 07:38 PM Bug #22932 (Resolved): [rbd-mirror] infinite loop is possible when formatting the status message
- Per Mykola Golub:...
- 01:39 PM Cleanup #16465 (Resolved): rbd discard ret value truncated
- 01:39 PM Backport #22913 (Resolved): jewel: rbd discard ret value truncated
- 01:38 PM Bug #21966 (Resolved): class rbd.Image discard----OSError: [errno 2147483648] error discarding re...
- 01:38 PM Backport #22191 (Resolved): jewel: class rbd.Image discard----OSError: [errno 2147483648] error d...
- 01:37 PM Bug #22011 (Resolved): abort in listing mapped nbd devices when running in a container
- 01:37 PM Backport #22186 (Resolved): jewel: abort in listing mapped nbd devices when running in a container
- 01:36 PM Bug #21558 (Resolved): rbd ls -l crashes with SIGABRT
- 01:36 PM Backport #21642 (Resolved): jewel: rbd ls -l crashes with SIGABRT
- 01:33 PM Bug #21960 (Resolved): [journal] tags are not being expired if no other clients are registered
- 01:33 PM Backport #21971 (Resolved): jewel: [journal] tags are not being expired if no other clients are r...
- 01:32 PM Bug #21179 (Resolved): [rbd] image-meta list does not return all entries
- 01:32 PM Backport #21290 (Resolved): jewel: [rbd] image-meta list does not return all entries
- 01:31 PM Bug #21248 (Resolved): [cli] rename of non-existent image results in seg fault
- 01:31 PM Backport #21266 (Resolved): jewel: [cli] rename of non-existent image results in seg fault
- 01:29 PM Bug #18435 (Resolved): [ FAILED ] TestLibRBD.RenameViaLockOwner
- 01:28 PM Backport #22594 (Resolved): jewel: [ FAILED ] TestLibRBD.RenameViaLockOwner
- 01:26 PM Bug #22461 (Resolved): [rbd-mirror] new pools might not be detected
- 01:26 PM Backport #22498 (Resolved): jewel: [rbd-mirror] new pools might not be detected
- 01:25 PM Bug #22200 (Resolved): 'rbd du' on empty pool results in output of "specified image"
- 01:25 PM Backport #22209 (Resolved): jewel: 'rbd du' on empty pool results in output of "specified image"
- 01:24 PM Bug #22131 (Resolved): [rbd-nbd] Fedora does not register resize events
- 01:24 PM Backport #22173 (Resolved): jewel: [rbd-nbd] Fedora does not register resize events
- 01:23 PM Bug #22158 (Resolved): *** Caught signal (Segmentation fault) ** in thread thread_name:tp_librbd
- 01:23 PM Backport #22170 (Resolved): jewel: *** Caught signal (Segmentation fault) ** in thread thread_nam...
02/04/2018
- 03:33 AM Backport #22913 (In Progress): jewel: rbd discard ret value truncated
- 03:30 AM Backport #22913 (Resolved): jewel: rbd discard ret value truncated
- https://github.com/ceph/ceph/pull/20287
- 03:27 AM Cleanup #16465 (Pending Backport): rbd discard ret value truncated
- 02:31 AM Bug #21663 (Resolved): [qa] rbd_mirror_helpers.sh request_resync_image function saves image id to...
- 02:30 AM Backport #21691 (Resolved): jewel: [qa] rbd_mirror_helpers.sh request_resync_image function saves...
02/03/2018
- 10:13 PM Cleanup #16465 (Resolved): rbd discard ret value truncated
- https://github.com/ceph/ceph/pull/9856
- 09:17 PM Backport #22191 (In Progress): jewel: class rbd.Image discard----OSError: [errno 2147483648] erro...
- 08:48 PM Backport #22186 (In Progress): jewel: abort in listing mapped nbd devices when running in a conta...
- 08:34 PM Backport #22175 (In Progress): jewel: possible deadlock in various maintenance operations
- 07:51 PM Backport #21971 (In Progress): jewel: [journal] tags are not being expired if no other clients ar...
- 07:47 PM Backport #21915 (Need More Info): jewel: [rbd-mirror] peer cluster connections should filter out ...
- @Jason - this one does not look trivial, either
- 07:46 PM Backport #21867 (Need More Info): jewel: [object map] removing a large image (~100TB) with an obj...
- another non-trivial one
- 07:45 PM Backport #21689 (Need More Info): jewel: Possible deadlock in 'list_children' when refresh is req...
- @Jason - assigning this non-trivial backport to you. Thanks!
- 07:30 PM Backport #21442 (Need More Info): jewel: [cli] mirror "getter" commands will fail if mirroring ha...
- non-trivial backport; needs rbd developer
- 07:14 PM Backport #21290 (In Progress): jewel: [rbd] image-meta list does not return all entries
- 07:13 PM Backport #21266 (In Progress): jewel: [cli] rename of non-existent image results in seg fault
02/02/2018
- 06:16 AM Backport #22857 (In Progress): luminous: librbd::object_map::InvalidateRequest: 0x7fbd100beed0 sh...
- https://github.com/ceph/ceph/pull/20253
02/01/2018
- 05:31 PM Bug #22321 (Resolved): ceph 12.2.x Luminous: Build fails with --without-radosgw
- 05:30 PM Backport #22375 (Resolved): luminous: ceph 12.2.x Luminous: Build fails with --without-radosgw
- 05:11 PM Bug #20789 (Resolved): Compare and write against a clone can result in failure
- 05:10 PM Backport #22198 (Resolved): luminous: Compare and write against a clone can result in failure
- 04:41 PM Backport #21914 (Resolved): luminous: [rbd-mirror] peer cluster connections should filter out com...
- 04:15 PM Bug #21391 (Resolved): [tcmu-runner] export librbd IO perf counters to mgr
- 04:15 PM Backport #22033 (Resolved): luminous: [tcmu-runner] export librbd IO perf counters to mgr
- 04:15 PM Backport #22169 (Resolved): luminous: *** Caught signal (Segmentation fault) ** in thread thread_...
- 04:14 PM Feature #21849 (Resolved): sparse-reads should not be used for small IO requests
- 04:13 PM Backport #21920 (Resolved): luminous: sparse-reads should not be used for small IO requests
- 04:13 PM Bug #21961 (Resolved): [rbd-mirror] spurious "bufferlist::end_of_buffer" exception
- 04:13 PM Backport #21969 (Resolved): luminous: [rbd-mirror] spurious "bufferlist::end_of_buffer" exception
- 04:13 PM Bug #21561 (Resolved): [rbd-mirror] primary image should register in remote, non-primary image's ...
- 04:12 PM Backport #21793 (Resolved): luminous: [rbd-mirror] primary image should register in remote, non-p...
- 04:12 PM Backport #21694 (Resolved): luminous: compare-and-write -EILSEQ failures should be filtered when ...
- 04:11 PM Backport #22577 (Resolved): luminous: [test] rbd-mirror split brain test case can have a false-po...
- 04:11 PM Backport #22809 (Resolved): luminous: rbd snap create/rm takes 60s long
- 04:09 PM Bug #22791 (Resolved): [librbd] force removing snapshots cannot remove children
- 04:09 PM Backport #22806 (Resolved): luminous: [librbd] force removing snapshots cannot remove children
- 04:00 PM Backport #22174 (Resolved): luminous: possible deadlock in various maintenance operations
- 03:08 PM Cleanup #22738 (In Progress): [test] separate v1 format tests from v2 format tests under teuthology
- 02:40 PM Feature #22874 (Duplicate): [clone v2] configurable setting to move images to trash upon remove r...
- The option can support enums of "never" (default), "always", and "in-use" (image is a parent to a clone).
- 02:38 PM Feature #22873 (Resolved): [clone v2] removing an image should automatically delete snapshots in ...
- 02:32 PM Bug #22872 (Resolved): "rbd trash purge --threshold" should support data pool
- Currently only the base pool is used for calculating usage.
- 10:49 AM Backport #22857 (Resolved): luminous: librbd::object_map::InvalidateRequest: 0x7fbd100beed0 shoul...
- https://github.com/ceph/ceph/pull/20253
- 10:01 AM Bug #22819 (Pending Backport): librbd::object_map::InvalidateRequest: 0x7fbd100beed0 should_compl...
01/31/2018
- 09:54 PM Bug #22819 (Fix Under Review): librbd::object_map::InvalidateRequest: 0x7fbd100beed0 should_compl...
- *PR*: https://github.com/ceph/ceph/pull/20214
- 09:27 PM Bug #22819 (In Progress): librbd::object_map::InvalidateRequest: 0x7fbd100beed0 should_complete: r=0
- 08:07 PM Documentation #14539 (Resolved): rbd CLI man page is missing several commands
- *PR*: https://github.com/ceph/ceph/pull/19659/commits/b00047ac253d0aa3ff22f1a93300040e424d25d0
- 08:06 PM Documentation #16999 (Resolved): RBD quick start guide will fail due to default image features
- *PR*: https://github.com/ceph/ceph/pull/19659/commits/5ae3122b039c635bf06570ef4beee246b9c6fbe7
- 12:45 PM Documentation #16999: RBD quick start guide will fail due to default image features
- Changed the spacing in the given quick-rbd.rst. Please verify the file.
- 06:28 PM Documentation #21763 (Resolved): [iscsi] documentation tweaks
- 06:27 PM Backport #21868 (Resolved): luminous: [iscsi] documentation tweaks
- 06:07 PM Backport #21868 (Fix Under Review): luminous: [iscsi] documentation tweaks
- 05:13 PM Backport #21868 (In Progress): luminous: [iscsi] documentation tweaks
- 05:12 PM Backport #22198 (Fix Under Review): luminous: Compare and write against a clone can result in fai...
- 04:48 PM Backport #22198 (In Progress): luminous: Compare and write against a clone can result in failure
- 04:46 PM Backport #22169 (Fix Under Review): luminous: *** Caught signal (Segmentation fault) ** in thread...
- 04:45 PM Backport #22169 (In Progress): luminous: *** Caught signal (Segmentation fault) ** in thread thre...
- 04:45 PM Backport #22033 (Fix Under Review): luminous: [tcmu-runner] export librbd IO perf counters to mgr
- 04:40 PM Backport #22033 (In Progress): luminous: [tcmu-runner] export librbd IO perf counters to mgr
- 04:39 PM Backport #21969 (Fix Under Review): luminous: [rbd-mirror] spurious "bufferlist::end_of_buffer" e...
- 04:36 PM Backport #21969 (In Progress): luminous: [rbd-mirror] spurious "bufferlist::end_of_buffer" exception
- 04:33 PM Backport #21920 (Fix Under Review): luminous: sparse-reads should not be used for small IO requests
- 04:30 PM Backport #21920 (In Progress): luminous: sparse-reads should not be used for small IO requests
- 04:28 PM Backport #21793 (Fix Under Review): luminous: [rbd-mirror] primary image should register in remot...
- 04:26 PM Backport #21793 (In Progress): luminous: [rbd-mirror] primary image should register in remote, no...
- 04:25 PM Backport #21694 (Fix Under Review): luminous: compare-and-write -EILSEQ failures should be filter...
- 04:23 PM Backport #21694 (In Progress): luminous: compare-and-write -EILSEQ failures should be filtered wh...
- 04:22 PM Backport #22577 (Fix Under Review): luminous: [test] rbd-mirror split brain test case can have a ...
- 04:18 PM Backport #22577 (In Progress): luminous: [test] rbd-mirror split brain test case can have a false...
- 03:04 PM Bug #21771 (Resolved): [journal] possible infinite loop within journal:tag_list class method
- 03:03 PM Backport #21782 (Resolved): luminous: [journal] possible infinite loop within journal:tag_list cl...
- 03:03 PM Backport #21855 (Resolved): luminous: [object map] removing a large image (~100TB) with an object...
- 03:03 PM Backport #21968 (Resolved): luminous: [journal] possible infinite loop within journal:expire_tags...
- 03:02 PM Backport #21970 (Resolved): luminous: [journal] tags are not being expired if no other clients ar...
- 12:27 AM Backport #21970: luminous: [journal] tags are not being expired if no other clients are registered
- Shinobu Kinjo wrote:
> https://github.com/ceph/ceph/pull/18840
merged
- 03:01 PM Backport #22172 (Resolved): luminous: [rbd-nbd] Fedora does not register resize events
- 03:00 PM Backport #22185 (Resolved): luminous: abort in listing mapped nbd devices when running in a conta...
- 12:25 AM Backport #22185: luminous: abort in listing mapped nbd devices when running in a container
- Shinobu Kinjo wrote:
> https://github.com/ceph/ceph/pull/19051
merged
- 03:00 PM Feature #21936 (Resolved): [test] UpdateFeatures RPC message should be included in test_notify.py
- 02:59 PM Backport #21973 (Resolved): luminous: [test] UpdateFeatures RPC message should be included in tes...
- 12:28 AM Backport #21973: luminous: [test] UpdateFeatures RPC message should be included in test_notify.py
- Shinobu Kinjo wrote:
> https://github.com/ceph/ceph/pull/18838
merged
- 01:32 PM Bug #22660: Inconsistency raised while performing multiple "image rename" in parallel.
- As this is an open source project, you are more than welcome to post a proposed fix for the issue yourself.
- 06:56 AM Bug #22660: Inconsistency raised while performing multiple "image rename" in parallel.
- I also checked this behavior for other Ceph operations, and I found everything is working as expected on t...
- 03:36 AM Bug #22660: Inconsistency raised while performing multiple "image rename" in parallel.
- But in my opinion it must be fixed. Since multiple clients have access rights to the image, a user can rename the ...
- 08:32 AM Bug #17494: memory leak in MirroringWatcher::notify_image_updated
- I looked at the code and found that the issue does not affect Jewel. Thank you!
01/30/2018
- 01:32 PM Bug #22660: Inconsistency raised while performing multiple "image rename" in parallel.
- Sure, not disagreeing that this is undesirable -- but since this is an arbitrary use case that doesn't affect data co...
- 05:40 AM Bug #22660: Inconsistency raised while performing multiple "image rename" in parallel.
- Actually we have checked this scenario based on the provided features of ceph.
As Ceph provides parallel access to ...
- 05:30 AM Backport #22810 (Need More Info): jewel: rbd snap create/rm takes 60s long
- class ceph::BitVector does not have member begin and end functions in jewel. We need to backport commit-id daa29f7d2b...
01/29/2018
- 02:57 PM Bug #22803 (Fix Under Review): [test] cli_generic sporadically fails on "rbd trash purge --thresh...
- PR: https://github.com/ceph/ceph/pull/20170
- 02:55 AM Backport #22809 (In Progress): luminous: rbd snap create/rm takes 60s long
- https://github.com/ceph/ceph/pull/20153