Activity
From 02/04/2018 to 03/05/2018
03/05/2018
- 10:29 PM Feature #22873 (Resolved): [clone v2] removing an image should automatically delete snapshots in ...
- 10:28 PM Subtask #19298 (New): rbd-mirror scrub: new CLI action to request image verification
- Delayed pending the ability for the OSDs to deeply delete an object (and all associated snapshot revisions).
- 10:27 PM Cleanup #22960 (Fix Under Review): [librbd] provide plug-in object-based cache interface
- *PR*: https://github.com/ceph/ceph/pull/20682
- 10:25 PM Cleanup #22738 (Fix Under Review): [test] separate v1 format tests from v2 format tests under teu...
- *PR*: https://github.com/ceph/ceph/pull/20729
- 11:57 AM Backport #23177 (In Progress): luminous: [test] OpenStack tempest test is failing across all bran...
- https://github.com/ceph/ceph/pull/20715
03/02/2018
- 01:48 PM Bug #23184: rbd workunit return 0 response code for fail
- Works for me (and teuthology):...
- 02:46 AM Bug #23184: rbd workunit return 0 response code for fail
- Not sure why nose returns 0 for assert_raises: https://github.com/ceph/ceph/blob/luminous/src/test/pybind/test_rbd.py#...
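A minimal sketch of the behavior questioned above, assuming nose is installed; the file and test names are hypothetical:
<pre>
# test_exit_status.py (hypothetical): a failing assert_raises should make
# `nosetests` exit non-zero, e.g. `nosetests test_exit_status.py; echo $?`.
from nose.tools import assert_raises

def test_expected_error():
    # int('1') succeeds, so no ValueError is raised; assert_raises marks
    # the test as failed, and nosetests should exit with status 1.
    assert_raises(ValueError, int, '1')
</pre>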
- 02:36 AM Bug #23184: rbd workunit return 0 response code for fail
- I think the exit status 0 is coming from the C++ unit test itself, based on manual testing.
I used an existing cluster ...
03/01/2018
- 10:12 PM Bug #23184: rbd workunit return 0 response code for fail
- @Vasu: any update?
- 01:52 PM Bug #23189 (Closed): snapshot size 0 and image size 0
- Hi, everyone:
ceph - jewel 10.2.6
We use ceph as an openstack storage backend, and we use ceph snapshots. But recently w...
02/28/2018
- 10:58 PM Bug #23184: rbd workunit return 0 response code for fail
- Going to try manually with the nosetests command and check the exit status ($?) to see what's wrong; you are right, the script wo...
- 08:24 PM Bug #23184: rbd workunit return 0 response code for fail
- ... still don't get why this is an RBD issue. If you look here [1], you can see that the script should immediately ex...
- 08:15 PM Bug #23184: rbd workunit return 0 response code for fail
- This is the assert that is not returning non-zero in case of failure; the workunit is being run on an existing cluster ...
- 08:13 PM Bug #23184: rbd workunit return 0 response code for fail
- Jason,
I think the original description is a bit confusing; the CI test just invokes the librbd workunit after ceph-...
- 07:31 PM Bug #23184 (Need More Info): rbd workunit return 0 response code for fail
- The question is where did this CI test come from? It's not an RBD test. If it's part of ceph-ansible repo, this ticke...
- 06:42 PM Bug #23184 (New): rbd workunit return 0 response code for fail
- 06:41 PM Bug #23184: rbd workunit return 0 response code for fail
- Jason,
we are trying to run some of the workunits in CI with a Jenkins pipeline; the workunits don't return non-zero ...
- 06:28 PM Bug #23184 (Need More Info): rbd workunit return 0 response code for fail
- What workunit is this in reference to? The logs indicate it has something to do with ceph-ansible, so if that's the s...
- 06:02 PM Bug #23184 (Can't reproduce): rbd workunit return 0 response code for fail
- *Expected:* the rbd workunit test returns a non-zero response code on failure; the current behavior breaks CI integration:
*Actual:* rbd wo...
- 11:20 AM Backport #23177 (Resolved): luminous: [test] OpenStack tempest test is failing across all branche...
- https://github.com/ceph/ceph/pull/20715
- 06:45 AM Bug #23038 (Fix Under Review): rbd: import with option --export-format fails to protect snapshot
- 12:16 AM Bug #23038: rbd: import with option --export-format fails to protect snapshot
- *PR*: https://github.com/ceph/ceph/pull/20613
- 04:28 AM Backport #23152 (In Progress): luminous: TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
- https://github.com/ceph/ceph/pull/20628
- 03:40 AM Backport #23153 (In Progress): jewel: TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
- https://github.com/ceph/ceph/pull/20627
02/27/2018
- 07:12 PM Bug #22961 (Pending Backport): [test] OpenStack tempest test is failing across all branches (again)
- 06:43 PM Bug #22961 (Resolved): [test] OpenStack tempest test is failing across all branches (again)
- 06:40 PM Bug #23137: [upstream] rbd-nbd does not resize on Ubuntu
- Thanks, got it - seems that resolution stalled on the kernel side. I will follow up there.
- 02:05 PM Bug #22362: cluster resource agent ocf:ceph:rbd - wrong permissions
- Luminous backport of follow-up fix: https://github.com/ceph/ceph/pull/20617
- 02:01 PM Bug #22362 (Pending Backport): cluster resource agent ocf:ceph:rbd - wrong permissions
- 02:01 PM Backport #22454 (Resolved): luminous: cluster resource agent ocf:ceph:rbd - wrong permissions
- 12:54 PM Backport #23153 (Resolved): jewel: TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
- https://github.com/ceph/ceph/pull/20627
- 12:54 PM Backport #23152 (Resolved): luminous: TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
- https://github.com/ceph/ceph/pull/20628
- 06:28 AM Feature #23126 (Fix Under Review): Ceph is not allowing deletion of any snapshots where one of th...
- 05:59 AM Feature #23126: Ceph is not allowing deletion of any snapshots where one of the snapshot of the s...
- *PR*: https://github.com/ceph/ceph/pull/20608
- 04:20 AM Bug #23068 (Pending Backport): TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
02/26/2018
- 10:10 PM Bug #23143 (Fix Under Review): rbd-nbd can deadlock in logging thread
- https://github.com/ceph/ceph/pull/20681
- 10:09 PM Bug #23143 (Resolved): rbd-nbd can deadlock in logging thread
- 08:58 PM Bug #22961 (Fix Under Review): [test] OpenStack tempest test is failing across all branches (again)
- *PR*: https://github.com/ceph/ceph/pull/20599
- 04:35 PM Bug #22961 (In Progress): [test] OpenStack tempest test is failing across all branches (again)
- 08:13 PM Bug #23134: "-c" option of rbd-fuse does not work with relative path
- It's all the same thing -- rbd-fuse needs to accept standard Ceph startup options. Of course, nobody uses rbd-fuse no...
- 07:53 PM Bug #23134: "-c" option of rbd-fuse does not work with relative path
- Jason Dillaman wrote:
> See #12219
It does not describe anything about "-c".
- 01:12 PM Bug #23134 (Duplicate): "-c" option of rbd-fuse does not work with relative path
- See #12219
- 10:58 AM Bug #23134 (Duplicate): "-c" option of rbd-fuse does not work with relative path
- $ sudo ./bin/rbd-fuse /rbd_images/ -c /home/rishabh/repos/ceph/build/ceph.conf
$ sudo umount /rbd_images
$ sudo ls ...
- 07:55 PM Bug #23133: rbd-fuse fails to find a ceph.conf when /etc/ceph/ceph.conf is missing
- Jason Dillaman wrote:
> See #12219
12219 talks only about checking $CEPH_CONF. The rest of the issue (not looking in CW...
- 01:12 PM Bug #23133 (Duplicate): rbd-fuse fails to find a ceph.conf when /etc/ceph/ceph.conf is missing
- See #12219
- 10:46 AM Bug #23133 (Duplicate): rbd-fuse fails to find a ceph.conf when /etc/ceph/ceph.conf is missing
- rbd-fuse fails to find ceph.conf when /etc/ceph/ceph.conf is missing. Ideally, it should look for ceph.conf in ~/.cep...
- 07:08 PM Bug #23137: [upstream] rbd-nbd does not resize on Ubuntu
- This is a known issue in the latest kernels and unrelated to RBD [1]
[1] https://lkml.org/lkml/2018/2/19/565
- 04:08 PM Bug #23137: [upstream] rbd-nbd does not resize on Ubuntu
- Looks like it is referenced in https://www.spinics.net/lists/ceph-devel/msg40171.html
This is happening in public releas...
- 03:51 PM Bug #23137 (Resolved): [upstream] rbd-nbd does not resize on Ubuntu
- rbd-nbd 12.2.3
After rbd resize, the corresponding mapped rbd-nbd device does not show the correct size, unless device...
- 01:11 PM Bug #23131 (Rejected): Ceph allows to shrink the image size without giving any warning, where the...
- There is a reason we added the "--allow-shrink" option: so that the end user certifies they are ready to shrink. RBD would hav...
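For reference, a minimal sketch of the guarded shrink path; the image spec below is hypothetical and the size is in megabytes (the CLI default):
<pre>
# Shrinking requires the explicit --allow-shrink acknowledgement; without
# it, "rbd resize" refuses to reduce an image's size.
import subprocess

subprocess.check_call(
    ["rbd", "resize", "--size", "10240", "--allow-shrink", "mypool/myimage"]
)
</pre>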
- 04:58 AM Bug #23131 (Rejected): Ceph allows to shrink the image size without giving any warning, where the...
- Execution Steps:
-----------------
1. Create a glance image (glance is integrated with ceph) using "cirros/ubuntu"...
- 01:09 PM Bug #23127 (Rejected): "rbd du" command is not showing the proper used space of RBD
- You would need to configure discard for the OS/filesystem to actually release space.
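A hedged illustration of the note above: space freed inside the guest filesystem is only reflected by "rbd du" after the filesystem issues discards, e.g. via fstrim; the mountpoint and image spec are hypothetical:
<pre>
# fstrim asks the mounted filesystem to discard its unused blocks; only
# after that can "rbd du" report the space as released.
import subprocess

subprocess.check_call(["fstrim", "-v", "/mnt/rbd0"])
print(subprocess.check_output(["rbd", "du", "mypool/myimage"]).decode())
</pre>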
- 04:23 AM Bug #23127 (Rejected): "rbd du" command is not showing the proper used space of RBD
- Execution Steps:
-------------------
1. Create a provisioned image of size 20 GB
# rbd create temp/myimage20...
- 04:16 AM Feature #23126 (Resolved): Ceph is not allowing deletion of any snapshots where one of the snapsh...
- Execution Steps:
------------------
1. Create an rbd image
2. Create multiple snapshots of the same image
3. Ena...
02/24/2018
- 09:53 AM Feature #22981 (Fix Under Review): [group] add "rbd group rename" action to the CLI
- 08:57 AM Feature #22981: [group] add "rbd group rename" action to the CLI
- *PR*: https://github.com/ceph/ceph/pull/20577
02/23/2018
- 03:25 AM Backport #23064 (In Progress): luminous: "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-b...
- https://github.com/ceph/ceph/pull/20550
- 01:41 AM Feature #23086 (Duplicate): Implement a new rbd command "actual-size" to find the actual size of ...
- 01:36 AM Feature #23086: Implement a new rbd command "actual-size" to find the actual size of RBD images
- Jason Dillaman wrote:
> How is this different from "rbd disk-usage <image-spec>"?
Sorry, we didn't notice that rb...
02/22/2018
- 09:15 PM Feature #23086 (Need More Info): Implement a new rbd command "actual-size" to find the actual siz...
- How is this different from "rbd disk-usage <image-spec>"?
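For comparison, a minimal invocation sketch of the existing command referenced above; the image spec is hypothetical:
<pre>
# "rbd disk-usage" (alias "rbd du") already reports provisioned vs. used
# space per image, which is what an "actual-size" command would duplicate.
import subprocess

print(subprocess.check_output(["rbd", "disk-usage", "mypool/myimage"]).decode())
</pre>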
- 12:40 PM Feature #23086 (Duplicate): Implement a new rbd command "actual-size" to find the actual size of ...
- Recently, we have found it really useful to know the actual size of RBD images, which would provide us the basis for th...
- 02:54 AM Backport #23065 (In Progress): jewel: "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basi...
- https://github.com/ceph/ceph/pull/20524
02/21/2018
- 01:24 PM Bug #23068: TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
- PR: https://github.com/ceph/ceph/pull/20507
- 01:23 PM Bug #23068 (Resolved): TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
- http://qa-proxy.ceph.com/teuthology/trociny-2018-02-20_16:16:39-rbd-wip-mgolub-testing-distro-basic-smithi/2207758/te...
- 11:11 AM Backport #23065 (Resolved): jewel: "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basic-m...
- https://github.com/ceph/ceph/pull/20524
- 11:11 AM Backport #23064 (Resolved): luminous: "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basi...
- https://github.com/ceph/ceph/pull/20550
02/20/2018
- 12:39 PM Bug #23043 (Resolved): [test] permissions.sh should be updated to use 'profile rbd'-style permiss...
- 12:36 PM Bug #11502 (Pending Backport): "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basic-mult run
02/19/2018
- 10:57 PM Bug #21956 (Resolved): [journal] possible infinite loop within journal:expire_tags class method
- 10:57 PM Bug #21628 (Resolved): compare-and-write -EILSEQ failures should be filtered when committing jour...
- 08:19 PM Bug #23043 (Fix Under Review): [test] permissions.sh should be updated to use 'profile rbd'-style...
- *PR*: https://github.com/ceph/ceph/pull/20491
- 08:15 PM Bug #23043 (In Progress): [test] permissions.sh should be updated to use 'profile rbd'-style perm...
- 07:45 PM Bug #23043 (Resolved): [test] permissions.sh should be updated to use 'profile rbd'-style permiss...
- 03:31 PM Bug #23038 (Resolved): rbd: import with option --export-format fails to protect snapshot
- Following up on this mailing list thread http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-February/024707.htm...
- 03:26 PM Bug #11502 (Fix Under Review): "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basic-mult run
- *PR*: https://github.com/ceph/ceph/pull/20486
- 02:52 PM Bug #11502 (In Progress): "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basic-mult run
- This appears to be a race condition between the teardown of the previous test case and this test case. The previous ...
02/16/2018
- 04:46 AM Backport #23011 (In Progress): luminous: [journal] allocating a new tag after acquiring the lock ...
- https://github.com/ceph/ceph/pull/20454
02/15/2018
- 05:33 PM Bug #22740: "[ FAILED ] TestClsRbd.snapshots_namespaces" in upgrade:kraken-x-luminous-distro-ba...
- See http://pulpito.ceph.com/yuriw-2018-02-13_21:14:53-upgrade:kraken-x-luminous-distro-basic-smithi/
- 03:15 PM Backport #23012 (Resolved): jewel: [journal] allocating a new tag after acquiring the lock should...
- https://github.com/ceph/ceph/pull/21206
- 03:15 PM Backport #23011 (Resolved): luminous: [journal] allocating a new tag after acquiring the lock sho...
- https://github.com/ceph/ceph/pull/20454
- 01:09 PM Bug #22945 (Pending Backport): [journal] allocating a new tag after acquiring the lock should use...
02/13/2018
- 04:24 PM Bug #22945 (Fix Under Review): [journal] allocating a new tag after acquiring the lock should use...
- *PR*: https://github.com/ceph/ceph/pull/20423
- 10:36 AM Backport #22965 (In Progress): jewel: [rbd-mirror] infinite loop is possible when formatting the ...
- https://github.com/ceph/ceph/pull/20418
- 09:01 AM Backport #22964 (In Progress): luminous: [rbd-mirror] infinite loop is possible when formatting t...
- https://github.com/ceph/ceph/pull/20416
02/12/2018
- 10:53 PM Bug #22362 (Fix Under Review): cluster resource agent ocf:ceph:rbd - wrong permissions
- 11:34 AM Bug #22362 (Pending Backport): cluster resource agent ocf:ceph:rbd - wrong permissions
- Follow-on master fix: https://github.com/ceph/ceph/pull/20397
- 09:39 PM Bug #22945 (In Progress): [journal] allocating a new tag after acquiring the lock should use on-d...
- 08:04 PM Bug #22979 (Fix Under Review): test_librbd_python.sh fails in upgrade test
- *PR*: https://github.com/ceph/ceph/pull/20406
- 07:07 PM Bug #22979 (In Progress): test_librbd_python.sh fails in upgrade test
- 11:34 AM Backport #22454 (In Progress): luminous: cluster resource agent ocf:ceph:rbd - wrong permissions
- Reopening to backport follow-on fix https://github.com/ceph/ceph/pull/20397
02/11/2018
- 10:46 PM Feature #22981 (Resolved): [group] add "rbd group rename" action to the CLI
- 01:15 AM Bug #22979 (Resolved): test_librbd_python.sh fails in upgrade test
- http://pulpito.ceph.com/kchai-2018-02-10_17:19:49-rados-master-distro-basic-smithi/
02/09/2018
- 02:48 PM Cleanup #22975 (New): [librbd] remove copies of configuration settings from ImageCtx
- With the new md_config_t thread-safe configuration model, metadata config overrides should just directly update the r...
- 02:02 PM Cleanup #22960 (In Progress): [librbd] provide plug-in object-based cache interface
- 12:43 PM Bug #22950 (Resolved): [test] cli_generic fails on deep-copy tests if v1 image format or deep-fla...
- 07:54 AM Backport #22965 (Resolved): jewel: [rbd-mirror] infinite loop is possible when formatting the sta...
- https://github.com/ceph/ceph/pull/20418
- 07:54 AM Backport #22964 (Resolved): luminous: [rbd-mirror] infinite loop is possible when formatting the ...
- https://github.com/ceph/ceph/pull/20416
- 12:19 AM Bug #22932 (Pending Backport): [rbd-mirror] infinite loop is possible when formatting the status ...
- 12:12 AM Bug #22961: [test] OpenStack tempest test is failing across all branches (again)
- http://qa-proxy.ceph.com/teuthology/jdillaman-2018-02-08_15:10:54-rbd-wip-jd-testing-distro-basic-smithi/2170821/remo...
- 12:12 AM Bug #22961 (Resolved): [test] OpenStack tempest test is failing across all branches (again)
- Tempest was refactored and it broke the cinder.tests.tempest.api.volume.test_volume_unicode.CinderUnicodeTest test case.
- 12:02 AM Feature #22873 (Fix Under Review): [clone v2] removing an image should automatically delete snaps...
- *PR*: https://github.com/ceph/ceph/pull/20376
02/08/2018
- 11:22 PM Cleanup #22960 (Resolved): [librbd] provide plug-in object-based cache interface
- Remove the direct hooks to ObjectCacher and move it under an abstract librbd::cache::ObjectCache/librbd::cache::Objec...
- 04:16 PM Feature #22873 (In Progress): [clone v2] removing an image should automatically delete snapshots ...
- 11:55 AM Bug #22950: [test] cli_generic fails on deep-copy tests if v1 image format or deep-flatten disabled
- PR: https://github.com/ceph/ceph/pull/20364
02/07/2018
- 05:22 PM Bug #22950 (Resolved): [test] cli_generic fails on deep-copy tests if v1 image format or deep-fla...
- http://qa-proxy.ceph.com/teuthology/trociny-2018-02-07_16:22:13-rbd-wip-xxg-testing-distro-basic-smithi/2165475/teuth...
- 12:57 PM Bug #22945 (Resolved): [journal] allocating a new tag after acquiring the lock should use on-disk...
- Related to issue #22932
If a client crashes before persisting its commit position and recovers, it will replay the...
- 11:31 AM Bug #22932 (Fix Under Review): [rbd-mirror] infinite loop is possible when formatting the status ...
- PR: https://github.com/ceph/ceph/pull/20349
- 11:30 AM Bug #22932: [rbd-mirror] infinite loop is possible when formatting the status message
- Below is an extract from the log file for this particular case, with the comments that show how it ended up with that...
- 07:52 AM Bug #22932 (In Progress): [rbd-mirror] infinite loop is possible when formatting the status message
02/06/2018
- 07:51 PM Bug #22932: [rbd-mirror] infinite loop is possible when formatting the status message
- The tag_tid values should always be increasing, so the "while (master.tag_tid != mirror_tag_tid)" loop could really j...
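An illustrative sketch of the loop hazard described above, written in Python rather than the actual rbd-mirror C++; the function name and counting logic are assumptions for illustration only:
<pre>
# If the local mirror's tag_tid can ever exceed the master's, a '!='
# guard never terminates, while a bounded comparison does.
def count_tags_behind(master_tag_tid, mirror_tag_tid):
    behind = 0
    tid = mirror_tag_tid
    while tid < master_tag_tid:  # '!=' would spin forever once tid > master_tag_tid
        behind += 1
        tid += 1
    return behind
</pre>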
- 07:38 PM Bug #22932 (Resolved): [rbd-mirror] infinite loop is possible when formatting the status message
- Per Mykola Golub:...
- 01:39 PM Cleanup #16465 (Resolved): rbd discard ret value truncated
- 01:39 PM Backport #22913 (Resolved): jewel: rbd discard ret value truncated
- 01:38 PM Bug #21966 (Resolved): class rbd.Image discard----OSError: [errno 2147483648] error discarding re...
- 01:38 PM Backport #22191 (Resolved): jewel: class rbd.Image discard----OSError: [errno 2147483648] error d...
- 01:37 PM Bug #22011 (Resolved): abort in listing mapped nbd devices when running in a container
- 01:37 PM Backport #22186 (Resolved): jewel: abort in listing mapped nbd devices when running in a container
- 01:36 PM Bug #21558 (Resolved): rbd ls -l crashes with SIGABRT
- 01:36 PM Backport #21642 (Resolved): jewel: rbd ls -l crashes with SIGABRT
- 01:33 PM Bug #21960 (Resolved): [journal] tags are not being expired if no other clients are registered
- 01:33 PM Backport #21971 (Resolved): jewel: [journal] tags are not being expired if no other clients are r...
- 01:32 PM Bug #21179 (Resolved): [rbd] image-meta list does not return all entries
- 01:32 PM Backport #21290 (Resolved): jewel: [rbd] image-meta list does not return all entries
- 01:31 PM Bug #21248 (Resolved): [cli] rename of non-existent image results in seg fault
- 01:31 PM Backport #21266 (Resolved): jewel: [cli] rename of non-existent image results in seg fault
- 01:29 PM Bug #18435 (Resolved): [ FAILED ] TestLibRBD.RenameViaLockOwner
- 01:28 PM Backport #22594 (Resolved): jewel: [ FAILED ] TestLibRBD.RenameViaLockOwner
- 01:26 PM Bug #22461 (Resolved): [rbd-mirror] new pools might not be detected
- 01:26 PM Backport #22498 (Resolved): jewel: [rbd-mirror] new pools might not be detected
- 01:25 PM Bug #22200 (Resolved): 'rbd du' on empty pool results in output of "specified image"
- 01:25 PM Backport #22209 (Resolved): jewel: 'rbd du' on empty pool results in output of "specified image"
- 01:24 PM Bug #22131 (Resolved): [rbd-nbd] Fedora does not register resize events
- 01:24 PM Backport #22173 (Resolved): jewel: [rbd-nbd] Fedora does not register resize events
- 01:23 PM Bug #22158 (Resolved): *** Caught signal (Segmentation fault) ** in thread thread_name:tp_librbd
- 01:23 PM Backport #22170 (Resolved): jewel: *** Caught signal (Segmentation fault) ** in thread thread_nam...
02/04/2018
- 03:33 AM Backport #22913 (In Progress): jewel: rbd discard ret value truncated
- 03:30 AM Backport #22913 (Resolved): jewel: rbd discard ret value truncated
- https://github.com/ceph/ceph/pull/20287
- 03:27 AM Cleanup #16465 (Pending Backport): rbd discard ret value truncated
- 02:31 AM Bug #21663 (Resolved): [qa] rbd_mirror_helpers.sh request_resync_image function saves image id to...
- 02:30 AM Backport #21691 (Resolved): jewel: [qa] rbd_mirror_helpers.sh request_resync_image function saves...