Feature #49165
Override crush class in osd service spec
Description
It would be a nice feature to be able to override the CRUSH class for Ceph OSDs matching a certain drive_group or pattern.
Thanks!
Kenneth
Updated by Sebastian Wagner over 2 years ago
- Status changed from New to Need More Info
We have https://docs.ceph.com/en/latest/cephadm/host-management/#setting-the-initial-crush-location-of-host by now. Is this really required anymore?
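For reference, the linked feature lets the initial CRUSH location be declared directly in the host spec. A minimal sketch based on that documentation (the hostname, address, and rack name are placeholders):

```yaml
service_type: host
hostname: ceph301          # placeholder hostname
addr: 192.168.0.11         # placeholder address
location:                  # initial CRUSH location, applied when the host is first added
  rack: rack1
```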
Updated by Sebastian Wagner over 2 years ago
- Priority changed from Normal to Low
Updated by Kenneth Waegeman over 1 year ago
Is there also a way to set the device class of the matching devices of a drive-group, so that it would be possible to create custom classes in addition to the default hdd/ssd classes? This would be especially handy since NVMe drives are also recognized as ssd devices.
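As a workaround until the spec supports this, the device class can be overridden per OSD with the CRUSH CLI and then targeted by a replicated rule; a sketch, where osd.0 and the rule/pool names are placeholders:

```shell
# An existing device class must be removed before a new one can be set
ceph osd crush rm-device-class osd.0
ceph osd crush set-device-class nvme osd.0

# Create a replicated rule restricted to the custom class and point a pool at it
ceph osd crush rule create-replicated nvme_rule default host nvme
ceph osd pool set mypool crush_rule nvme_rule
```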
Updated by Kenneth Waegeman over 1 year ago
Seems like it should be there, but it's not recognized:
I’m trying to deploy this spec:
spec:
  data_devices:
    model: Dell Ent NVMe AGN MU U.2 6.4TB
    rotational: 0
  encrypted: true
  osds_per_device: 4
  crush_device_class: nvme
placement:
  host_pattern: 'ceph30[1-3]'
service_id: nvme_22_drive_group
service_type: osd
But it fails:
ceph orch apply -i /etc/ceph/orch_osd.yaml --dry-run
Error EINVAL: Failed to validate OSD spec "nvme_22_drive_group": Feature `crush_device_class` is not supported
It’s in the docs https://docs.ceph.com/en/quincy/cephadm/services/osd/#ceph.deployment.drive_group.DriveGroupSpec.crush_device_class, and it’s even in the Pacific docs. I’m running Quincy 17.2.0.
Is this option missing somehow?
Thanks!!
Kenneth