Bug #46558

cephadm: paths attribute ignored for db_devices/wal_devices via OSD spec

Added by Dimitri Savineau almost 4 years ago. Updated almost 3 years ago.

Status: Resolved
Priority: High
Category: cephadm/osd
Target version:
% Done: 0%
Source: Community (dev)
Tags:
Backport:
Regression: No
Severity: 2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

When creating an OSD spec that uses dedicated devices for the BlueStore DB and/or WAL, the paths drive group attribute [1] cannot be used: it is currently ignored for db_devices and wal_devices, even though it works for data_devices.

# cat osds.yml
service_type: osd
service_id: foo
placement:
  label: osds
data_devices:
  paths:
    - /dev/sdc
db_devices:
  paths:
    - /dev/sdd
encrypted: false
# cephadm shell -m osds.yml -- ceph orch apply osd -i /mnt/osds.yml
INFO:cephadm:Inferring fsid b6dc7042-c6ac-11ea-974b-fa163e8d447e
INFO:cephadm:Inferring config /var/lib/ceph/b6dc7042-c6ac-11ea-974b-fa163e8d447e/mon.ofgnapinv-1/config
INFO:cephadm:Using recent ceph image docker.io/ceph/daemon-base:latest-master
WARNING: The same type, major and minor should not be used for multiple devices.
Scheduled osd.foo update...

As a result, an OSD is created on /dev/sdc only, without the BlueStore DB on /dev/sdd:

# cephadm shell -- ceph-volume lvm list
INFO:cephadm:Inferring fsid b6dc7042-c6ac-11ea-974b-fa163e8d447e
INFO:cephadm:Inferring config /var/lib/ceph/b6dc7042-c6ac-11ea-974b-fa163e8d447e/mon.ofgnapinv-1/config
INFO:cephadm:Using recent ceph image docker.io/ceph/daemon-base:latest-master
WARNING: The same type, major and minor should not be used for multiple devices.
WARNING: The same type, major and minor should not be used for multiple devices.
WARNING: The same type, major and minor should not be used for multiple devices.

====== osd.5 =======

  [block]       /dev/ceph-bd41b3ba-23a0-4e03-bb1c-5e17f89136a1/osd-block-fde2284b-1f59-48ab-995f-706a3e49c6bf

      block device              /dev/ceph-bd41b3ba-23a0-4e03-bb1c-5e17f89136a1/osd-block-fde2284b-1f59-48ab-995f-706a3e49c6bf
      block uuid                avv5vs-2gZg-how1-kYnq-PAGL-bsfL-qAHg8m
      cephx lockbox secret      
      cluster fsid              b6dc7042-c6ac-11ea-974b-fa163e8d447e
      cluster name              ceph
      crush device class        None
      encrypted                 0
      osd fsid                  fde2284b-1f59-48ab-995f-706a3e49c6bf
      osd id                    5
      osdspec affinity          foo
      type                      block
      vdo                       0
      devices                   /dev/sdc

[1] https://github.com/ceph/ceph/blob/master/src/python-common/ceph/deployment/drive_group.py#L27


Related issues 3 (0 open, 3 closed)

Related to Orchestrator - Bug #44738: drivegroups/cephadm: db_devices don't get applied correctly when using "paths" (Won't Fix)

Related to Orchestrator - Bug #49191: cephadm: service_type: osd: Failed to apply: 'NoneType' object has no attribute 'paths' (Duplicate)

Related to Orchestrator - Bug #46687: MGR_MODULE_ERROR: Module 'cephadm' has failed: No filters applied (Can't reproduce; assignee: Juan Miguel Olmo Martínez)

#1

Updated by Sebastian Wagner over 3 years ago

  • Category changed from cephadm to cephadm/osd
#2

Updated by Saputro Aryulianto over 3 years ago

Dimitri Savineau wrote:

When creating an OSD spec that uses dedicated devices for the BlueStore DB and/or WAL, the paths drive group attribute [1] cannot be used: it is currently ignored for db_devices and wal_devices, even though it works for data_devices.

[...]

As a result, an OSD is created on /dev/sdc only, without the BlueStore DB on /dev/sdd

[...]

[1] https://github.com/ceph/ceph/blob/master/src/python-common/ceph/deployment/drive_group.py#L27

Hi,

is there any update on this issue? Creating WAL devices via the paths attribute is very common and important.
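
For reference, a minimal sketch of the kind of spec being asked about here, following the format from the description; the service_id and the device paths are placeholders:

service_type: osd
service_id: osd_dedicated_wal
placement:
  label: osds
data_devices:
  paths:
    - /dev/sdc
db_devices:
  paths:
    - /dev/sdd
wal_devices:
  paths:
    - /dev/sde

As of this report, the paths entries under db_devices and wal_devices are exactly the part that gets ignored.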

#3

Updated by Dimitri Savineau over 3 years ago

It looks like it has been decided to ignore this for the moment:

https://github.com/ceph/ceph/pull/36543

#4

Updated by Sebastian Wagner over 3 years ago

  • Status changed from New to Resolved
  • Target version changed from v16.0.0 to v15.2.5
  • Pull request ID set to 36543
#5

Updated by Dimitri Savineau over 3 years ago

@Sebastian: does that mean we need another tracker for implementing this feature, or won't it happen at all?

I thought PR 36543 was only a temporary solution until something was implemented.

#6

Updated by Joshua Schmid over 3 years ago

DriveGroups are supposed to `describe` a state/layout without explicitly pointing to disk identifiers.

If there is a specific example where the existing filters are not sufficient to depict the desired cluster layout we can certainly talk about extending the functionality of the drivegroups.

`paths` for data_devices are mainly for testing purposes and should not be used in production clusters unless absolutely unavoidable (for whatever reason).
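
For comparison, a layout like the one in the description can usually be expressed with filters instead of explicit paths. A minimal sketch, assuming the data disks are rotational (HDDs) and the DB disks are not (SSDs); rotational is one of the standard drive group filters alongside size, model and vendor:

service_type: osd
service_id: foo
placement:
  label: osds
data_devices:
  rotational: 1
db_devices:
  rotational: 0
encrypted: false

With a spec like this, cephadm selects whichever devices match the filters on each host, so the spec describes a layout rather than naming /dev/sdX identifiers that can change between hosts and reboots.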

#7

Updated by Juan Miguel Olmo Martínez about 3 years ago

  • Status changed from Resolved to In Progress
  • Assignee set to Juan Miguel Olmo Martínez
  • Priority changed from Normal to High
  • Target version changed from v15.2.5 to v17.0.0
  • Affected Versions deleted (v15.2.4)
#8

Updated by Sebastian Wagner about 3 years ago

  • Related to Bug #44738: drivegroups/cephadm: db_devices don't get applied correctly when using "paths" added
#9

Updated by Sebastian Wagner about 3 years ago

  • Related to Bug #49191: cephadm: service_type: osd: Failed to apply: 'NoneType' object has no attribute 'paths' added
#10

Updated by Juan Miguel Olmo Martínez about 3 years ago

  • Related to Bug #46687: MGR_MODULE_ERROR: Module 'cephadm' has failed: No filters applied added
#11

Updated by Sebastian Wagner about 3 years ago

  • Status changed from In Progress to Pending Backport
  • Pull request ID changed from 36543 to 39415
#12

Updated by Sebastian Wagner almost 3 years ago

  • Status changed from Pending Backport to Resolved