Feature #58184
cephadm DriveGroup can't handle different crush_device_classes
Status:
Closed
% Done:
0%
Source:
Development
Tags:
backport_processed
Backport:
quincy
Description
With ceph-ansible it is possible to have the following OSD definition:
CephAnsibleDisksConfig:
  lvm_volumes:
    - data: '/dev/vdx'
      crush_device_class: 'ssd'
    - data: '/dev/vdz'
      crush_device_class: 'hdd'
which makes it possible to test, in the OpenStack CI, both the crush hierarchy and the rules associated with the defined pools.
With cephadm, --crush-device-class is a global option passed to ceph-volume, which prepares and activates disks in batch mode.
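As a sketch of the current limitation: because the class is applied per invocation rather than per device, mixing classes today means calling ceph-volume once per class (the device paths below are illustrative placeholders):

```shell
# Sketch only: ceph-volume applies one crush device class per invocation,
# so mixing classes requires separate prepare calls.
ceph-volume lvm prepare --data /dev/vdx --crush-device-class ssd
ceph-volume lvm prepare --data /dev/vdz --crush-device-class hdd

# In batch mode the class is likewise global for the whole batch:
ceph-volume lvm batch --crush-device-class hdd /dev/vdy /dev/vdz
```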
We should extend the DriveGroup "paths" definition within the OSDspec to allow something like:
data_devices:
  paths:
    - data: /dev/ceph_vg/ceph_lv_data
      crush_device_class: ssd
    - data: /dev/ceph_vg/ceph_lv_data2
      crush_device_class: hdd
    - data: /dev/ceph_vg/ceph_lv_data3
      crush_device_class: hdd
and make ceph-volume able to prepare single OSDs with an associated `crush_device_class`.
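Put together, a complete OSD service spec using the proposed per-path syntax could look like the sketch below. The service_id, placement, and device paths are illustrative, and the exact accepted field names are whatever the linked pull request ultimately implements:

```yaml
service_type: osd
service_id: osd_mixed_classes   # illustrative name
placement:
  host_pattern: '*'             # illustrative placement
spec:
  data_devices:
    paths:
      - data: /dev/ceph_vg/ceph_lv_data
        crush_device_class: ssd
      - data: /dev/ceph_vg/ceph_lv_data2
        crush_device_class: hdd
```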
Updated by Adam King about 1 year ago
- Project changed from 31 to Orchestrator
- Status changed from New to Pending Backport
- Backport set to quincy
- Pull request ID set to 49555
Updated by Backport Bot about 1 year ago
- Copied to Backport #58709: quincy: cephadm DriveGroup can't handle different crush_device_classes added
Updated by Adam King about 1 year ago
- Status changed from Pending Backport to Resolved