Feature #44556

Updated by Joshua Schmid about 4 years ago

The osd deployment in cephadm happens asynchronously in the background.

When using drivegroups, it may not always be clear what will happen, since the outcome depends on the assembly and evaluation of the inventory of a node/cluster.

For example, a very simple drivegroup

<pre>
data_devices:
  all: True
</pre>

is probably not worth previewing. However, we allow for more complex configurations where drivegroups can get quite cumbersome and the outcome is not always clear, which makes them prone to errors (see the sketch below).
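
For illustration, a richer spec might look like the following. This is only a sketch: the service id, host pattern and filter values are hypothetical, and the field names roughly follow the drivegroup filters cephadm accepts (rotational, size, limit, ...). Which concrete devices such a spec matches on each host is exactly what a preview should show:

<pre>
service_type: osd
service_id: example_dg          # hypothetical name
placement:
  host_pattern: 'osd-node-*'    # hypothetical host pattern
data_devices:
  rotational: 1                 # spinning disks become data devices...
  size: '2TB:'                  # ...but only those of 2TB or larger
db_devices:
  rotational: 0                 # DBs go to solid state devices
  limit: 2                      # use at most two of them per host
</pre>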

It's also possible that the hardware configuration of a node has changed and therefore applying the drivegroups will take an unwanted/unexpected effect. We do have the tools to give a preview of what would happen (see the implementation notes below). To prevent this we should implement feature $subject.

There is at least one thing we have
to consider in order to make this work:

* Since we’re running (not only) osd deployments continuously in the background, there’s no real sense in showing a preview, since it will be applied anyway within the next few minutes.

To tackle that, we should either recommend always shipping more complex drivegroups with the "unmanaged" flag, or add some similar mechanism (see the example below).
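
As a sketch of that recommendation (assuming the "unmanaged" flag is the service-spec field that tells the orchestrator to store a spec without acting on it), a complex drivegroup could be shipped like this and only switched to managed once its preview looks right:

<pre>
service_type: osd
service_id: example_dg     # hypothetical name, same spec as in the sketch above
unmanaged: true            # store the spec, but do not deploy OSDs yet
data_devices:
  rotational: 1
db_devices:
  rotational: 0
</pre>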

*Implementation:*

ceph-volume actually provides the necessary data already. Note the --report flag:
<pre>
ceph-volume lvm batch /dev/sdx /dev/sdy --report --format json
</pre>

where sdx is available and sdy is not

returns something like this:

<pre>
{
    "changed": true,
    "osds": [
        {
            "block.db": {},
            "data": {
                "human_readable_size": "24.00 GB",
                "parts": 1,
                "path": "/dev/sdx",
                "percentage": 100.0,
                "size": 24
            }
        }
    ],
    "vgs": []
}
</pre>

If none of the disks are available, ceph-volume returns an empty dict.

This data just needs to be collected from the nodes and aggregated in cephadm. The feature should also be designed to be consumable by the dashboard; a rough sketch of the aggregation follows.
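
A minimal sketch of that aggregation step, assuming cephadm can execute ceph-volume on each host. The callable run_report and the helper preview_osd_specs are hypothetical, not existing cephadm APIs:

<pre>
import json
from typing import Callable, Dict, List


def preview_osd_specs(
    devices_per_host: Dict[str, List[str]],
    run_report: Callable[[str, List[str]], str],
) -> Dict[str, List[dict]]:
    """Build a cluster-wide OSD preview from per-host ceph-volume reports.

    run_report(host, devices) is expected to run
    `ceph-volume lvm batch <devices...> --report --format json`
    on the given host and return its stdout.
    """
    preview: Dict[str, List[dict]] = {}
    for host, devices in devices_per_host.items():
        if not devices:
            preview[host] = []
            continue
        raw = run_report(host, devices)
        # ceph-volume returns an empty dict when none of the devices are usable.
        report = json.loads(raw) if raw.strip() else {}
        preview[host] = report.get("osds", [])
    return preview
</pre>

The resulting per-host structure could then be rendered by the CLI and the dashboard alike before the spec is actually applied.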
