Bug #46558

cephadm: paths attribute ignored for db_devices/wal_devices via OSD spec

Added by Dimitri Savineau 10 months ago. Updated 2 months ago.

Status: Pending Backport
Priority: High
Category: cephadm/osd
Target version:
% Done: 0%
Source: Community (dev)
Tags:
Backport:
Regression: No
Severity: 2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

When creating an OSD spec that uses dedicated devices for the bluestore DB and/or WAL, we can't use the paths drive group attribute [1]: it is currently ignored for db_devices and wal_devices, although it works for data_devices.

# cat osds.yml
service_type: osd
service_id: foo
placement:
  label: osds
data_devices:
  paths:
    - /dev/sdc
db_devices:
  paths:
    - /dev/sdd
encrypted: false
# cephadm shell -m osds.yml -- ceph orch apply osd -i /mnt/osds.yml
INFO:cephadm:Inferring fsid b6dc7042-c6ac-11ea-974b-fa163e8d447e
INFO:cephadm:Inferring config /var/lib/ceph/b6dc7042-c6ac-11ea-974b-fa163e8d447e/mon.ofgnapinv-1/config
INFO:cephadm:Using recent ceph image docker.io/ceph/daemon-base:latest-master
WARNING: The same type, major and minor should not be used for multiple devices.
Scheduled osd.foo update...

As a result, an OSD is created on /dev/sdc only, without the bluestore DB on /dev/sdd.

# cephadm shell -- ceph-volume lvm list
INFO:cephadm:Inferring fsid b6dc7042-c6ac-11ea-974b-fa163e8d447e
INFO:cephadm:Inferring config /var/lib/ceph/b6dc7042-c6ac-11ea-974b-fa163e8d447e/mon.ofgnapinv-1/config
INFO:cephadm:Using recent ceph image docker.io/ceph/daemon-base:latest-master
WARNING: The same type, major and minor should not be used for multiple devices.
WARNING: The same type, major and minor should not be used for multiple devices.
WARNING: The same type, major and minor should not be used for multiple devices.

====== osd.5 =======

  [block]       /dev/ceph-bd41b3ba-23a0-4e03-bb1c-5e17f89136a1/osd-block-fde2284b-1f59-48ab-995f-706a3e49c6bf

      block device              /dev/ceph-bd41b3ba-23a0-4e03-bb1c-5e17f89136a1/osd-block-fde2284b-1f59-48ab-995f-706a3e49c6bf
      block uuid                avv5vs-2gZg-how1-kYnq-PAGL-bsfL-qAHg8m
      cephx lockbox secret      
      cluster fsid              b6dc7042-c6ac-11ea-974b-fa163e8d447e
      cluster name              ceph
      crush device class        None
      encrypted                 0
      osd fsid                  fde2284b-1f59-48ab-995f-706a3e49c6bf
      osd id                    5
      osdspec affinity          foo
      type                      block
      vdo                       0
      devices                   /dev/sdc
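
For comparison, one would expect the spec above to make cephadm pass the dedicated DB device down to ceph-volume, roughly along these lines (a sketch only; the exact command cephadm generates may differ):

# ceph-volume lvm batch --yes /dev/sdc --db-devices /dev/sdd

Instead, /dev/sdd is never used and the OSD ends up without a separate DB device.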

[1] https://github.com/ceph/ceph/blob/master/src/python-common/ceph/deployment/drive_group.py#L27


Related issues

Related to Orchestrator - Bug #44738: drivegroups/cephadm: db_devices don't get applied correctly when using "paths" Won't Fix
Related to Orchestrator - Bug #49191: cephadm: service_type: osd: Failed to apply: 'NoneType' object has no attribute 'paths' Duplicate
Related to Orchestrator - Bug #46687: MGR_MODULE_ERROR: Module 'cephadm' has failed: No filters applied New

History

#1 Updated by Sebastian Wagner 10 months ago

  • Category changed from cephadm to cephadm/osd

#2 Updated by Saputro Aryulianto 9 months ago

Dimitri Savineau wrote:

When creating an OSD spec that uses dedicated devices for the bluestore DB and/or WAL, we can't use the paths drive group attribute [1]: it is currently ignored for db_devices and wal_devices, although it works for data_devices.

[...]

As a result, an OSD is created on /dev/sdc only, without the bluestore DB on /dev/sdd.

[...]

[1] https://github.com/ceph/ceph/blob/master/src/python-common/ceph/deployment/drive_group.py#L27

Hi,

Is there any update on this issue? Creating WAL devices via the paths attribute is very common and important.

#3 Updated by Dimitri Savineau 9 months ago

It looks like it has been decided to ignore this for the moment:

https://github.com/ceph/ceph/pull/36543

#4 Updated by Sebastian Wagner 9 months ago

  • Status changed from New to Resolved
  • Target version changed from v16.0.0 to v15.2.5
  • Pull request ID set to 36543

#5 Updated by Dimitri Savineau 9 months ago

@Sebastian: does that mean we need another tracker for implementing this feature, or won't it happen at all?

I thought PR 36543 was only a temporary solution until a proper fix was implemented.

#6 Updated by Joshua Schmid 8 months ago

DriveGroups are supposed to `describe` a state/layout without explicitly pointing to disk identifiers.

If there is a specific example where the existing filters are not sufficient to depict the desired cluster layout, we can certainly talk about extending the functionality of the drivegroups.

`paths` for data_devices is mainly intended for testing purposes and should not be used in production clusters unless absolutely unavoidable (for whatever reason).
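
As an illustration of the filter-based approach (a sketch only; the rotational flags below assume spinning data disks and flash DB devices, which may not match the reporter's hardware), the spec from the description could be expressed without paths roughly as:

service_type: osd
service_id: foo
placement:
  label: osds
data_devices:
  rotational: 1
db_devices:
  rotational: 0
encrypted: false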

#7 Updated by Juan Miguel Olmo Martínez 3 months ago

  • Status changed from Resolved to In Progress
  • Assignee set to Juan Miguel Olmo Martínez
  • Priority changed from Normal to High
  • Target version changed from v15.2.5 to v17.0.0
  • Affected Versions deleted (v15.2.4)

#8 Updated by Sebastian Wagner 3 months ago

  • Related to Bug #44738: drivegroups/cephadm: db_devices don't get applied correctly when using "paths" added

#9 Updated by Sebastian Wagner 3 months ago

  • Related to Bug #49191: cephadm: service_type: osd: Failed to apply: 'NoneType' object has no attribute 'paths' added

#10 Updated by Juan Miguel Olmo Martínez 3 months ago

  • Related to Bug #46687: MGR_MODULE_ERROR: Module 'cephadm' has failed: No filters applied added

#11 Updated by Sebastian Wagner 2 months ago

  • Status changed from In Progress to Pending Backport
  • Pull request ID changed from 36543 to 39415
