Bug #56523

Cephadm fails to automatically create OSD with shared DB/WAL device

Added by Vladimir Brik almost 2 years ago.

Status: New
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

I followed the procedure at https://docs.ceph.com/en/quincy/cephadm/services/osd/#replacing-an-osd to replace a failed disk, but the replacement OSD was never created, apparently because the OSD in question uses a separate DB/WAL device that is shared with other OSDs.
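
For reference, the replacement was initiated along the lines the documentation describes. A minimal sketch (the hostname is a placeholder; the OSD ID and device path match the logs below):

# Remove the failed OSD but preserve its ID so the replacement reuses it
ceph orch osd rm 494 --replace
# After the physical disk swap, zap the new disk so cephadm can consume it
ceph orch device zap <host> /dev/sdv --force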

Cephadm would attempt to apply the correct drive group, but would fail:

From the cephadm logs:
DEBUG ... cephadm ['--image', ... 'ceph-volume', '--fsid', ..., '--config-json', '-', '--', 'lvm', 'batch', '--no-auto', '/dev/sdv', '--db-devices', '/dev/nvme0n1', '--osd-ids', '494', '--yes', '--no-systemd', '--report', '--format', 'json']
...
DEBUG /bin/podman: --> passed data devices: 1 physical, 0 LVM
DEBUG /bin/podman: --> relative data size: 1.0
DEBUG /bin/podman: --> passed block_db devices: 1 physical, 0 LVM
DEBUG /bin/podman: --> 1 fast devices were passed, but none are available
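
For context, the OSD service spec in play has the standard data_devices/db_devices shape. A minimal sketch that would produce a ceph-volume invocation like the one above (the service_id, placement, and file name are hypothetical; only the device paths come from the logs):

cat > osd_spec.yml <<'EOF'
service_type: osd
service_id: osd_shared_db   # hypothetical name
placement:
  hosts:
    - <host>                # hypothetical placement
spec:
  data_devices:
    paths:
      - /dev/sdv
  db_devices:
    paths:
      - /dev/nvme0n1
EOF
ceph orch apply osd -i osd_spec.yml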

I assume /dev/nvme0n1 is not "available" because it's shared with other OSDs.
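
Two standard ways to confirm how the devices are being classified:

# How the orchestrator classifies each device (see the AVAILABLE column)
ceph orch device ls --wide
# ceph-volume's view of the NVMe, including any reject reasons
cephadm ceph-volume -- inventory /dev/nvme0n1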
