Feature #58184

cephadm DriveGroup can't handle different crush_device_classes

Added by Francesco Pantano over 1 year ago. Updated about 1 year ago.

Status:
Resolved
Priority:
Normal
Assignee:
-
Category:
-
Target version:
% Done:

0%

Source:
Development
Tags:
backport_processed
Backport:
quincy
Reviewed:
Affected Versions:
Pull request ID:

Description

With ceph-ansible it is possible to have the following OSD definition:

CephAnsibleDisksConfig:
  lvm_volumes:
    - data: '/dev/vdx'
      crush_device_class: 'ssd'
    - data: '/dev/vdz'
      crush_device_class: 'hdd'

which allows testing both the crush hierarchy and the rules associated with the defined pools in the OpenStack CI.
With cephadm, --crush_device_class is a global option for ceph-volume, which prepares and activates the disks in batch mode.
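
For context, a minimal sketch of how this looks today in a cephadm OSD spec, assuming the spec-level crush_device_class field (service_id, placement and device paths below are illustrative): the class can only be set once per spec, so it applies to every device the spec matches.

service_type: osd
service_id: default_drive_group
placement:
  host_pattern: '*'
crush_device_class: ssd
data_devices:
  paths:
    - /dev/vdx
    - /dev/vdz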

We should extend the DriveGroup "paths" definition within the OSDspec to allow something like:

data_devices:
  paths:
    - data: /dev/ceph_vg/ceph_lv_data
      crush_device_class: ssd
    - data: /dev/ceph_vg/ceph_lv_data2
      crush_device_class: hdd
    - data: /dev/ceph_vg/ceph_lv_data3
      crush_device_class: hdd

and make ceph-volume able to prepare single OSDs with an associated `crush_device_class`.
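
As a rough sketch, with such a spec cephadm could drive ceph-volume once per device instead of in a single batch call, roughly equivalent to running (device paths are the illustrative ones above):

ceph-volume lvm prepare --data /dev/ceph_vg/ceph_lv_data --crush-device-class ssd
ceph-volume lvm prepare --data /dev/ceph_vg/ceph_lv_data2 --crush-device-class hdd
ceph-volume lvm prepare --data /dev/ceph_vg/ceph_lv_data3 --crush-device-class hdd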


Related issues: 1 (0 open, 1 closed)

Copied to Orchestrator - Backport #58709: quincy: cephadm DriveGroup can't handle different crush_device_classes (Resolved, Adam King)
#1

Updated by Adam King about 1 year ago

  • Project changed from 31 to Orchestrator
  • Status changed from New to Pending Backport
  • Backport set to quincy
  • Pull request ID set to 49555
#2

Updated by Backport Bot about 1 year ago

  • Copied to Backport #58709: quincy: cephadm DriveGroup can't handle different crush_device_classes added
#3

Updated by Backport Bot about 1 year ago

  • Tags set to backport_processed
#4

Updated by Adam King about 1 year ago

  • Status changed from Pending Backport to Resolved