Feature #1484
libvirt: map rbd via kernel driver
Status: Closed
% Done: 0%
Description
This should share a common image description schema with the qemu driver. We can treat qemu+librbd as a (preferred) optimization, as long as we have an option to force use of the kernel driver or librbd (and have the latter fail on non-qemu hypervisors); something along the lines of type = {krbd, librbd, auto}.
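The proposed type = {krbd, librbd, auto} option could be sketched as a simple selection rule. This is only an illustration of the logic described above; the function name, option strings, and hypervisor check are hypothetical, not part of any actual libvirt API.

```python
# Hypothetical sketch of the proposed disk "type" option:
#   krbd   - force the kernel rbd driver
#   librbd - force the userspace library (only works through qemu)
#   auto   - prefer qemu+librbd, fall back to the kernel driver otherwise
# All names here are illustrative assumptions, not real libvirt config keys.

def choose_rbd_driver(requested: str, hypervisor: str) -> str:
    """Return 'krbd' or 'librbd' for the given request and hypervisor."""
    if requested == "krbd":
        return "krbd"
    if requested == "librbd":
        if hypervisor != "qemu":
            # librbd attaches through qemu's block layer, so it must
            # fail on non-qemu hypervisors, as the description proposes
            raise ValueError("librbd requires a qemu hypervisor")
        return "librbd"
    if requested == "auto":
        # qemu+librbd is the preferred optimization; otherwise map via krbd
        return "librbd" if hypervisor == "qemu" else "krbd"
    raise ValueError(f"unknown rbd driver type: {requested!r}")
```

With "auto", a qemu guest would get librbd while any other hypervisor would fall back to the kernel driver.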
Updated by Sage Weil over 12 years ago
- Target version deleted (v0.37)
- Position set to 1
Updated by Sage Weil over 12 years ago
- Position deleted (13)
- Position set to 12
Updated by Sage Weil over 12 years ago
- Position deleted (46)
- Position set to 13
Updated by Wido den Hollander almost 12 years ago
In the current design of libvirt I don't see how you could achieve this.
With my storage pool work I found out that you can only start and stop an entire storage pool inside libvirt.
This means you would have to (un)map all the images inside a specific pool; you can't start (map) or stop (unmap) just one image.
If you are talking about mapping an image to a virtual machine, I'm not aware of any hooks you can execute, but why would you? You can pass /dev/rbd0 down as a device to a guest; you just need some way to get it mapped, and that's the hardest part.
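The approach described above, handing an already-mapped kernel device to a guest, can be sketched as a standard libvirt block-device disk definition. The image name and target device below are placeholders for illustration:

```xml
<!-- Assuming `rbd map` has already created /dev/rbd0 on the host,
     the mapped device is passed to the guest as a plain block disk.
     The target dev name ('vdb') is an illustrative placeholder. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/rbd0'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```

As the comment notes, libvirt handles this part fine; the open question is who performs the map/unmap step around the guest's lifecycle.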
Updated by Sage Weil almost 12 years ago
- Project changed from Ceph to rbd
- Category deleted (libvirt)
Updated by Sage Weil almost 12 years ago
- Position deleted (319)
- Position set to 1326
Updated by Jason Dillaman over 7 years ago
- Status changed from New to Rejected
Not a high-priority objective for RBD