Feature #44556


cephadm: preview drivegroups

Added by Joshua Schmid about 4 years ago. Updated almost 4 years ago.

Status:
Resolved
Priority:
Normal
Assignee:
Category:
cephadm
Target version:
% Done:

0%

Source:
Development
Tags:
Backport:
octopus
Reviewed:
Affected Versions:
Pull request ID:

Description

The osd deployment in cephadm happens async in the background.

When using drivegroups, it may not always be clear what will happen, since the outcome depends on the inventory of a node/cluster.

For example, a very simple drivegroup

data_devices:
  all: True

is probably not worth previewing. However, we allow more complex configurations where the outcome is not always clear.
It's also possible that the hardware configuration of a node has changed, in which case the drivegroup would have an unwanted or unexpected effect.
To prevent this we should implement the feature in $subject.
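To illustrate, here is a sketch of the kind of more complex spec where a preview is valuable (the service_id and placement are illustrative; the filter keys follow the drivegroup spec):

service_type: osd
service_id: example_mixed_spec
placement:
  host_pattern: '*'
data_devices:
  rotational: 1        # spinning disks become data devices
db_devices:
  rotational: 0        # solid-state devices are shared as DB devices
  limit: 2             # use at most two SSDs per host

What this expands to depends entirely on each host's inventory, so the resulting OSD layout is hard to predict without a preview.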

There is at least one thing we have to consider in order to make this work:

  • Since we’re running (not only) OSD deployments continuously in the background, there’s no real sense in showing a preview: it would be applied anyway within the next few minutes.

To tackle that, we should either recommend that more complex drivegroups always be applied with the "unmanaged" flag, or add a similar mechanism.
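With the "unmanaged" flag, a spec could be stored without being applied automatically, leaving room to preview it first. A sketch (service_id illustrative):

service_type: osd
service_id: complex_osd_spec
unmanaged: true        # don't apply automatically; preview first
placement:
  host_pattern: '*'
data_devices:
  rotational: 1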

Implementation:

ceph-volume already provides the necessary data.

Note the --report flag:

ceph-volume lvm batch /dev/sdx /dev/sdy --report --format json

where sdx is available and sdy is not

returns something like this:

{
    "changed": true,
    "osds": [
        {
            "block.db": {},
            "data": {
                "human_readable_size": "24.00 GB",
                "parts": 1,
                "path": "/dev/sdx",
                "percentage": 100.0,
                "size": 24
            }
        }
    ],
    "vgs": []
}

If none of the disks are available, ceph-volume returns an empty dict.

This data just needs to be collected from the nodes and aggregated in cephadm.
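The aggregation step could be sketched roughly like this (aggregate_previews and its input shape are hypothetical; cephadm's actual implementation will differ):

```python
import json


def aggregate_previews(reports):
    """Aggregate per-host `ceph-volume lvm batch ... --report --format json`
    output into a single cluster-wide preview.

    `reports` maps hostname -> raw JSON string as returned by ceph-volume.
    Returns hostname -> list of {"path", "size"} entries for the OSDs that
    would be created on that host.
    """
    preview = {}
    for host, raw in reports.items():
        # ceph-volume returns an empty dict when no disks are available.
        report = json.loads(raw) if raw.strip() else {}
        preview[host] = [
            {
                "path": osd["data"]["path"],
                "size": osd["data"]["human_readable_size"],
            }
            for osd in report.get("osds", [])
        ]
    return preview
```

Feeding in the sample report above for one host and an empty dict for another would yield one planned OSD on the first host and none on the second.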

CLI:

There are multiple options. Here's a list of commands that come to mind:

  • `ceph orch drivegroup preview`
  • `ceph orch osd drivegroup preview`
  • `ceph orch osd apply <-i|--all-available-devices> --preview`

The last command has the advantage that you can adjust your drivegroup, save it to the mon_store, and get direct feedback.

Edit: However, this is an intrusion into the apply mechanism, which I'd like to keep as independent as possible.
Currently I'm in favor of #1 or #2.


Related issues 2 (2 open, 0 closed)

Related to Dashboard - Bug #44808: mgr/dashboard: Allow users to specify an unmanaged ServiceSpec when creating OSDs (New)

Related to Dashboard - Feature #42453: mgr/dashboard: Allow previewing OSDs in Create OSD form (Fix Under Review, Kiefer Chang)
