Activity
From 02/22/2018 to 03/23/2018
03/23/2018
- 01:57 PM Bug #20054: librbd memory overhead when used with KVM
- Li Yichao wrote:
> I've done 3 experiments and think the overhead is not due to rbd cache.
>
> * Experiment is do...
- 01:49 PM Bug #20054: librbd memory overhead when used with KVM
- I've done 3 experiments and think the overhead is not due to rbd cache.
* Experiment is done based on the question...
- 02:58 AM Feature #23445 (Resolved): Flatten operation should use object map
- If the object is known to exist in the image, the copy-up operation can be skipped for that object.
03/21/2018
- 05:30 PM Backport #23423: luminous: librados/snap_set_diff: don't assert on empty snapset
- PR: https://github.com/ceph/ceph/pull/20991
- 06:03 AM Feature #23399 (Resolved): [clone v2] add snapshot-by-id API methods and rbd CLI support
03/20/2018
- 01:02 PM Backport #23423 (In Progress): luminous: librados/snap_set_diff: don't assert on empty snapset
- 12:36 PM Backport #23423 (Resolved): luminous: librados/snap_set_diff: don't assert on empty snapset
- https://github.com/ceph/ceph/pull/20991
- 11:51 AM Feature #23422 (Resolved): librados/snap_set_diff: don't assert on empty snapset
- master PR: https://github.com/ceph/ceph/pull/20648
- 05:43 AM Support #23401: rbd mirror lead to a potential risk that primary image can be remove from a remot...
- Understood, thank you very much
- 05:36 AM Support #23401 (Closed): rbd mirror lead to a potential risk that primary image can be remove fro...
- It's not possible since the remote rbd-mirror daemon needs to be able to (1) register with the journal and (2) create...
- 04:59 AM Backport #23407 (In Progress): luminous: [cls] rbd.group_image_list is incorrectly flagged as R/W
- https://github.com/ceph/ceph/pull/20967
- 04:46 AM Feature #23399 (Fix Under Review): [clone v2] add snapshot-by-id API methods and rbd CLI support
- *PR*: https://github.com/ceph/ceph/pull/20966
03/19/2018
- 04:42 PM Backport #23407 (Resolved): luminous: [cls] rbd.group_image_list is incorrectly flagged as R/W
- https://github.com/ceph/ceph/pull/20967
- 10:01 AM Feature #22787 (In Progress): [librbd] deep copy should optionally support flattening a cloned image
- 07:38 AM Support #23401 (Closed): rbd mirror lead to a potential risk that primary image can be remove fro...
- When we use rbd mirror, we must be granted class-write authority. But if we have this authority, we can remove the primary rbd imag...
- 03:28 AM Feature #23399 (In Progress): [clone v2] add snapshot-by-id API methods and rbd CLI support
- 12:40 AM Feature #23399 (Resolved): [clone v2] add snapshot-by-id API methods and rbd CLI support
- A user should be able to set the snapshot by id for use w/ "rbd children". This is required to be able to list th...
- 12:37 AM Feature #23398 (Resolved): [clone v2] auto-delete trashed snapshot upon release of last child
- The "DetachChildRequest" state machine should be updated to release the self-managed snapshot if it was the last user...
03/17/2018
03/16/2018
- 06:41 PM Feature #20762 (New): rbdmap should support other block devices
- PR 19711 was for a different issue.
- 01:00 PM Bug #23388 (Fix Under Review): [cls] rbd.group_image_list is incorrectly flagged as R/W
- *PR*: https://github.com/ceph/ceph/pull/20939
- 12:57 PM Bug #23388 (Resolved): [cls] rbd.group_image_list is incorrectly flagged as R/W
- R/W operations cannot return any data as a payload. I suspect this is the cause of the transient failures like the fo...
03/14/2018
- 10:55 PM Bug #23184: rbd workunit return 0 response code for fail
- @Vasu: what's the status here?
- 02:55 AM Bug #23263: Journaling feature causes cluster to have slow requests and inconsistent PG
- Jason, is there a way to trigger a ceph health warning on detection of a slow operation? I realize this can be a logwatch ty...
03/13/2018
- 11:43 AM Bug #23263: Journaling feature causes cluster to have slow requests and inconsistent PG
- @Alex: it might not have been osd.4 that had any blocked ops. Hopefully "ceph health" should tell you which specific ...
- 01:22 AM Bug #23263: Journaling feature causes cluster to have slow requests and inconsistent PG
- Hi Jason, all dump_blocked_ops are zero. I ran them in a script against all OSDs; maybe too much time has passed?
"ops...
- 09:22 AM Backport #23304 (In Progress): luminous: parent blocks are still seen after a whole-object discard
- https://github.com/ceph/ceph/pull/20860
03/12/2018
- 07:53 PM Bug #23263: Journaling feature causes cluster to have slow requests and inconsistent PG
- ... and it's missing the log from osd.4, which was the only one mentioned in your problem description.
- 07:51 PM Bug #23263: Journaling feature causes cluster to have slow requests and inconsistent PG
- @Alex: can you please run "ceph daemon osd.<X> dump_blocked_ops"?
- 06:24 PM Bug #23263: Journaling feature causes cluster to have slow requests and inconsistent PG
- last set of OSD logs
- 06:23 PM Bug #23263: Journaling feature causes cluster to have slow requests and inconsistent PG
- second set of logs; looks like the tracker stops at 10
- 06:23 PM Bug #23263: Journaling feature causes cluster to have slow requests and inconsistent PG
- Hi Jason, issue impeded by https://tracker.ceph.com/issues/23205#change-108877 - OSDs are not showing anything that I...
- 06:01 PM Bug #23263 (Need More Info): Journaling feature causes cluster to have slow requests and inconsis...
- @Alex: can you please dump out the slow requests from the OSDs to see what object is causing the issue?
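The inspection requested above can be sketched as a short shell session; this is a minimal illustration assuming a running cluster and admin-socket access on the OSD host (the OSD id is taken from the discussion and is otherwise an assumption):

```shell
# Identify which OSDs currently report blocked/slow requests
ceph health detail

# On the host running osd.4, dump its currently blocked ops via the admin socket
ceph daemon osd.4 dump_blocked_ops

# In-flight ops can also reveal which object a request is stuck on
ceph daemon osd.4 dump_ops_in_flight
```

These commands require a live Ceph cluster, so they are shown for reference rather than as a reproducible test.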
- 09:14 AM Backport #23305 (Resolved): jewel: parent blocks are still seen after a whole-object discard
- https://github.com/ceph/ceph/pull/21219
- 09:14 AM Backport #23304 (Resolved): luminous: parent blocks are still seen after a whole-object discard
- https://github.com/ceph/ceph/pull/20860
03/11/2018
- 01:35 AM Bug #23285 (Pending Backport): parent blocks are still seen after a whole-object discard
- 01:26 AM Bug #23137: [upstream] rbd-nbd does not resize on Ubuntu
- I went ahead and built a custom kernel reverting the change https://github.com/torvalds/linux/commit/639812a1ed9bf49a...
03/09/2018
- 12:12 PM Cleanup #22960 (Resolved): [librbd] provide plug-in object-based cache interface
- 09:16 AM Bug #23285 (Fix Under Review): parent blocks are still seen after a whole-object discard
- https://github.com/ceph/ceph/pull/20809
- 09:15 AM Bug #23285 (Resolved): parent blocks are still seen after a whole-object discard
03/08/2018
- 06:48 PM Bug #22872: "rbd trash purge --threshold" should support data pool
- It's related to where the image data is stored -- which would be the bulk storage usage source for a trashed image.
- 06:46 PM Bug #22872: "rbd trash purge --threshold" should support data pool
- Well, what's the difference between the base pool and the data pool? I couldn't find anything that would tell me ...
- 03:20 PM Bug #22872: "rbd trash purge --threshold" should support data pool
- Create and fill images that utilize a data pool (i.e. rbd create --size 10G --data-pool=datapool rbd/image). If you m...
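The reproduction recipe above can be sketched as a shell session; this is a minimal sketch assuming a running cluster, and the pool and image names are illustrative:

```shell
# Create a separate pool to hold image data and initialize it for RBD use
ceph osd pool create datapool 32
rbd pool init datapool

# Create an image whose data objects are stored in the data pool
rbd create --size 10G --data-pool datapool rbd/image

# Write some data so the image actually consumes space in the data pool
rbd bench --io-type write --io-total 1G rbd/image

# Move it to the trash; "rbd trash purge --threshold" should then account
# for the space consumed in the data pool, not just the base pool
rbd trash mv rbd/image
```

These commands require a live cluster, so the sketch is illustrative rather than a self-verifying test.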
- 03:13 PM Bug #22872: "rbd trash purge --threshold" should support data pool
- I would like to try fixing this bug. Can I get a recipe to reproduce it?
- 01:42 PM Cleanup #16989: 'rbd mirror pool' commands should report error if action not executed
- ... and make sure you test all "rbd mirror pool XYZ" commands, not just the three listed cases.
- 12:59 PM Cleanup #16989: 'rbd mirror pool' commands should report error if action not executed
- Sorry, I've checked against master and not jewel.
- 12:46 PM Cleanup #16989: 'rbd mirror pool' commands should report error if action not executed
- Looks like it's already resolved -
$ ./bin/rbd mirror pool enable rbd pool
$ ./bin/rbd mirror pool enable rbd poo...
03/07/2018
- 08:53 PM Cleanup #22738 (Resolved): [test] separate v1 format tests from v2 format tests under teuthology
- 01:21 PM Bug #23263 (Closed): Journaling feature causes cluster to have slow requests and inconsistent PG
- First noticed this problem in our ESXi/iSCSI cluster, but now I can replicate it in lab with just Ubuntu:
1. Creat...
- 12:18 PM Bug #12219: rbd-fuse should respect standard Ceph configuration overrides and search paths
- Besides, ./ceph.conf and ~/.ceph/ceph.conf are also not checked when /etc/ceph/ceph.conf is missing.
03/06/2018
03/05/2018
- 10:29 PM Feature #22873 (Resolved): [clone v2] removing an image should automatically delete snapshots in ...
- 10:28 PM Subtask #19298 (New): rbd-mirror scrub: new CLI action to request image verification
- Delayed pending the ability for the OSDs to deeply delete an object (and all associated snapshot revisions).
- 10:27 PM Cleanup #22960 (Fix Under Review): [librbd] provide plug-in object-based cache interface
- *PR*: https://github.com/ceph/ceph/pull/20682
- 10:25 PM Cleanup #22738 (Fix Under Review): [test] separate v1 format tests from v2 format tests under teu...
- *PR*: https://github.com/ceph/ceph/pull/20729
- 11:57 AM Backport #23177 (In Progress): luminous: [test] OpenStack tempest test is failing across all bran...
- https://github.com/ceph/ceph/pull/20715
03/02/2018
- 01:48 PM Bug #23184: rbd workunit return 0 response code for fail
- Works for me (and teuthology):...
- 02:46 AM Bug #23184: rbd workunit return 0 response code for fail
- Not sure why nose returns 0 for assert_raises: https://github.com/ceph/ceph/blob/luminous/src/test/pybind/test_rbd.py#...
- 02:36 AM Bug #23184: rbd workunit return 0 response code for fail
- I think the exit status 0 is coming from the C++ unit test itself, based on manual testing.
I used an existing cluster ...
03/01/2018
- 10:12 PM Bug #23184: rbd workunit return 0 response code for fail
- @Vasu: any update?
- 01:52 PM Bug #23189 (Closed): snapshot size 0 and image size 0
- Hi, everyone:
ceph - jewel 10.2.6
We use Ceph as an OpenStack storage backend, and use Ceph snapshots. But recently w...
02/28/2018
- 10:58 PM Bug #23184: rbd workunit return 0 response code for fail
- Going to try manually with the nosetest command and check the exit status ($?) to see what's wrong; you are right, the script wo...
- 08:24 PM Bug #23184: rbd workunit return 0 response code for fail
- ... still don't get why this is an RBD issue. If you look here [1], you can see that the script should immediately ex...
- 08:15 PM Bug #23184: rbd workunit return 0 response code for fail
- This is the assert that is not returning non-zero in case of failure; the workunit is being run on an existing cluster
...
- 08:13 PM Bug #23184: rbd workunit return 0 response code for fail
- Jason,
I think the original description is a bit confusing; the CI test just invokes the librbd workunit after ceph-...
- 07:31 PM Bug #23184 (Need More Info): rbd workunit return 0 response code for fail
- The question is: where did this CI test come from? It's not an RBD test. If it's part of the ceph-ansible repo, this ticke...
- 06:42 PM Bug #23184 (New): rbd workunit return 0 response code for fail
- 06:41 PM Bug #23184: rbd workunit return 0 response code for fail
- Jason,
we are trying to run some of the workunits in CI with a Jenkins pipeline; the workunits don't return non-zero ...
- 06:28 PM Bug #23184 (Need More Info): rbd workunit return 0 response code for fail
- What workunit is this in reference to? The logs indicate it has something to do with ceph-ansible, so if that's the s...
- 06:02 PM Bug #23184 (Can't reproduce): rbd workunit return 0 response code for fail
- *Expected:* the rbd workunit test returns a non-zero response code on failure; the current behavior breaks CI integration:
*Actual:* rbd wo...
- 11:20 AM Backport #23177 (Resolved): luminous: [test] OpenStack tempest test is failing across all branche...
- https://github.com/ceph/ceph/pull/20715
- 06:45 AM Bug #23038 (Fix Under Review): rbd: import with option --export-format fails to protect snapshot
- 12:16 AM Bug #23038: rbd: import with option --export-format fails to protect snapshot
- *PR*: https://github.com/ceph/ceph/pull/20613
- 04:28 AM Backport #23152 (In Progress): luminous: TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
- https://github.com/ceph/ceph/pull/20628
- 03:40 AM Backport #23153 (In Progress): jewel: TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
- https://github.com/ceph/ceph/pull/20627
02/27/2018
- 07:12 PM Bug #22961 (Pending Backport): [test] OpenStack tempest test is failing across all branches (again)
- 06:43 PM Bug #22961 (Resolved): [test] OpenStack tempest test is failing across all branches (again)
- 06:40 PM Bug #23137: [upstream] rbd-nbd does not resize on Ubuntu
- Thanks, got it - seems that resolution stalled on the kernel side. I will follow up there.
- 02:05 PM Bug #22362: cluster resource agent ocf:ceph:rbd - wrong permissions
- Luminous backport of follow-up fix: https://github.com/ceph/ceph/pull/20617
- 02:01 PM Bug #22362 (Pending Backport): cluster resource agent ocf:ceph:rbd - wrong permissions
- 02:01 PM Backport #22454 (Resolved): luminous: cluster resource agent ocf:ceph:rbd - wrong permissions
- 12:54 PM Backport #23153 (Resolved): jewel: TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
- https://github.com/ceph/ceph/pull/20627
- 12:54 PM Backport #23152 (Resolved): luminous: TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
- https://github.com/ceph/ceph/pull/20628
- 06:28 AM Feature #23126 (Fix Under Review): Ceph is not allowing deletion of any snapshots where one of th...
- 05:59 AM Feature #23126: Ceph is not allowing deletion of any snapshots where one of the snapshot of the s...
- *PR*: https://github.com/ceph/ceph/pull/20608
- 04:20 AM Bug #23068 (Pending Backport): TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
02/26/2018
- 10:10 PM Bug #23143 (Fix Under Review): rbd-nbd can deadlock in logging thread
- https://github.com/ceph/ceph/pull/20681
- 10:09 PM Bug #23143 (Resolved): rbd-nbd can deadlock in logging thread
- 08:58 PM Bug #22961 (Fix Under Review): [test] OpenStack tempest test is failing across all branches (again)
- *PR*: https://github.com/ceph/ceph/pull/20599
- 04:35 PM Bug #22961 (In Progress): [test] OpenStack tempest test is failing across all branches (again)
- 08:13 PM Bug #23134: "-c" option of rbd-fuse does not work with relative path
- It's all the same thing -- rbd-fuse needs to accept standard Ceph startup options. Of course, nobody uses rbd-fuse no...
- 07:53 PM Bug #23134: "-c" option of rbd-fuse does not work with relative path
- Jason Dillaman wrote:
> See #12219
It does not describe anything about "-c".
- 01:12 PM Bug #23134 (Duplicate): "-c" option of rbd-fuse does not work with relative path
- See #12219
- 10:58 AM Bug #23134 (Duplicate): "-c" option of rbd-fuse does not work with relative path
- $ sudo ./bin/rbd-fuse /rbd_images/ -c /home/rishabh/repos/ceph/build/ceph.conf
$ sudo umount /rbd_images
$ sudo ls ...
- 07:55 PM Bug #23133: rbd-fuse fails to find a ceph.conf when /etc/ceph/ceph.conf is missing
- Jason Dillaman wrote:
> See #12219
#12219 talks only about checking $CEPH_CONF. The rest of the issue (not looking in CW...
- 01:12 PM Bug #23133 (Duplicate): rbd-fuse fails to find a ceph.conf when /etc/ceph/ceph.conf is missing
- See #12219
- 10:46 AM Bug #23133 (Duplicate): rbd-fuse fails to find a ceph.conf when /etc/ceph/ceph.conf is missing
- rbd-fuse fails to find ceph.conf when /etc/ceph/ceph.conf is missing. Ideally, it should look for ceph.conf in ~/.cep...
- 07:08 PM Bug #23137: [upstream] rbd-nbd does not resize on Ubuntu
- This is a known issue in the latest kernels and unrelated to RBD [1]
[1] https://lkml.org/lkml/2018/2/19/565
- 04:08 PM Bug #23137: [upstream] rbd-nbd does not resize on Ubuntu
- Looks like it's referenced in https://www.spinics.net/lists/ceph-devel/msg40171.html
This is happening in public releas...
- 03:51 PM Bug #23137 (Resolved): [upstream] rbd-nbd does not resize on Ubuntu
- rbd-nbd 12.2.3
After an rbd resize, the corresponding mapped rbd-nbd device does not show the correct size, unless the device...
- There is a reason we added the "--allow-shrink" option: so that the end user certifies the image is ready to shrink. RBD would hav...
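The guard described above can be sketched from the CLI; this is a minimal illustration assuming a running cluster, and the image name and starting size are illustrative:

```shell
# Assume rbd/myimage is currently 20G; shrinking without the flag is refused
rbd resize --size 10G rbd/myimage
# -> fails with an error asking for --allow-shrink

# The end user must explicitly certify the shrink is intentional
rbd resize --size 10G --allow-shrink rbd/myimage
```

Requiring the explicit flag prevents accidental data loss, since blocks beyond the new size are discarded on shrink.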
- 04:58 AM Bug #23131 (Rejected): Ceph allows to shrink the image size without giving any warning, where the...
- Execution Steps:
-----------------
1. Create a Glance image (Glance is integrated with Ceph) using "cirros/ubuntu"...
- You would need to configure discard for the OS/filesystem to actually release space.
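The discard configuration mentioned above can be sketched as follows; this is a minimal illustration, and the device, mount point, and image name are assumptions:

```shell
# One-shot: trim unused blocks on an already-mounted filesystem
fstrim /mnt/rbdfs

# Continuous: mount with the discard option so file deletions are
# translated into RBD discards automatically (with some performance cost)
mount -o discard /dev/rbd0 /mnt/rbdfs

# Afterwards, "rbd du" should reflect the reclaimed space
rbd du rbd/image
```

Without discard, deleting files inside the guest filesystem never releases the underlying RADOS objects, so "rbd du" correctly continues to report them as used.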
- 04:23 AM Bug #23127 (Rejected): "rbd du" command is not showing the proper used space of RBD
- Execution Steps:
-------------------
1. Create a provisioned image of size 20 GB
# rbd create temp/myimage20... - 04:16 AM Feature #23126 (Resolved): Ceph is not allowing deletion of any snapshots where one of the snapsh...
- Execution Steps:
------------------
1. Create an rbd image
2. Create multiple snapshots of the same image
3. Ena...
02/24/2018
- 09:53 AM Feature #22981 (Fix Under Review): [group] add 'rbd group rename' action to the CLI
- 08:57 AM Feature #22981: [group] add 'rbd group rename' action to the CLI
- *PR*: https://github.com/ceph/ceph/pull/20577
02/23/2018
- 03:25 AM Backport #23064 (In Progress): luminous: "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-b...
- https://github.com/ceph/ceph/pull/20550
- 01:41 AM Feature #23086 (Duplicate): Implement a new rbd command "actual-size" to find the actual size of ...
- 01:36 AM Feature #23086: Implement a new rbd command "actual-size" to find the actual size of RBD images
- Jason Dillaman wrote:
> How is this different from "rbd disk-usage <image-spec>"?
Sorry, we didn't notice that rb...
02/22/2018
- 09:15 PM Feature #23086 (Need More Info): Implement a new rbd command "actual-size" to find the actual siz...
- How is this different from "rbd disk-usage <image-spec>"?
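For comparison, the existing command already reports provisioned vs. actual usage; a minimal sketch, assuming a running cluster and an illustrative image name (enabling the fast-diff feature speeds up the calculation):

```shell
# Provisioned and used space for a single image
rbd disk-usage rbd/image

# Usage for every image (and snapshot) in a pool; "du" is the short alias
rbd du --pool rbd
```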
- 12:40 PM Feature #23086 (Duplicate): Implement a new rbd command "actual-size" to find the actual size of ...
- Recently, we have found it really useful to know the actual size of RBD images, which would provide us the basis for th...
- 02:54 AM Backport #23065 (In Progress): jewel: "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basi...
- https://github.com/ceph/ceph/pull/20524