Feature #58184

cephadm DriveGroup can't handle different crush_device_classes

Added by Francesco Pantano about 2 months ago.

Status: New
Priority: Normal
Assignee: -
Target version:
% Done: 0%
Source: Development
Tags:
Backport:
Reviewed:
Affected Versions:
Pull request ID:

Description

With ceph-ansible it is possible to have the following OSD definition:

CephAnsibleDisksConfig:
  lvm_volumes:
    - data: '/dev/vdx'
      crush_device_class: 'ssd'
    - data: '/dev/vdz'
      crush_device_class: 'hdd'

which allows testing, in the OpenStack CI, both the crush hierarchy and the rules associated with the defined pools.
With cephadm, `--crush-device-class` is a global option for ceph-volume, which prepares and activates disks in batch mode.
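
If I'm reading the current OSD service specification correctly, `crush_device_class` can only be set at the spec level, so mixing device classes today means writing one OSDspec per class, roughly like the sketch below (the service_id values are made up for illustration):

service_type: osd
service_id: osd_ssd
placement:
  host_pattern: '*'
crush_device_class: ssd
data_devices:
  paths:
    - /dev/vdx
---
service_type: osd
service_id: osd_hdd
placement:
  host_pattern: '*'
crush_device_class: hdd
data_devices:
  paths:
    - /dev/vdz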

We should extend the DriveGroup "paths" definition within the OSDspec to allow something like:

data_devices:
  paths:
    - data: /dev/ceph_vg/ceph_lv_data
      crush_device_class: ssd
    - data: /dev/ceph_vg/ceph_lv_data2
      crush_device_class: hdd
    - data: /dev/ceph_vg/ceph_lv_data3
      crush_device_class: hdd

and make ceph-volume able to prepare single OSDs with an associated `crush_device_class`.
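
For what it's worth, ceph-volume already accepts a per-call class outside batch mode, so each entry in "paths" could plausibly be translated into its own prepare call, along the lines of:

# one prepare call per device, each carrying its own class
ceph-volume lvm prepare --data /dev/ceph_vg/ceph_lv_data --crush-device-class ssd
ceph-volume lvm prepare --data /dev/ceph_vg/ceph_lv_data2 --crush-device-class hdd
ceph-volume lvm prepare --data /dev/ceph_vg/ceph_lv_data3 --crush-device-class hdd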
