Bug #36589


ceph-volume: generates the wrong cluster name in /etc/ceph/osd files by default.

Added by Mehdi Abaakouk over 5 years ago. Updated over 5 years ago.

Status:
Rejected
Priority:
Low
Target version:
-
% Done:
0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

I'm on ceph version 13.2.2 (02899bfda814146b021136e9d8e80eba494e1126) mimic (stable).

I tried to convert a ceph-disk BlueStore OSD to ceph-volume.

The cluster is called 'test'.

# ceph-volume simple scan /dev/sdb1
Running command: /usr/sbin/cryptsetup status /dev/sdb1
 stderr: Device sdb1 not found
--> OSD 0 got scanned and metadata persisted to file: /etc/ceph/osd/0-4e01d64c-82d0-4bde-a9b4-db82cb4413c2.json
--> To take over management of this scanned OSD, and disable ceph-disk and udev, run:
-->     ceph-volume simple activate 0 4e01d64c-82d0-4bde-a9b4-db82cb4413c2

# ceph-volume simple activate 0 4e01d64c-82d0-4bde-a9b4-db82cb4413c2
Running command: /bin/mount -v /dev/sdb1 /var/lib/ceph/osd/ceph-0
 stderr: mount: mount point /var/lib/ceph/osd/ceph-0 does not exist
-->  RuntimeError: command returned non-zero exit status: 32
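
For context, scan records the cluster name inside the generated file itself. A trimmed sketch of what /etc/ceph/osd/0-4e01d64c-82d0-4bde-a9b4-db82cb4413c2.json plausibly contains (key names assumed, most fields omitted):

    {
        "cluster_name": "ceph",
        "data": {
            "path": "/dev/sdb1"
        },
        "fsid": "4e01d64c-82d0-4bde-a9b4-db82cb4413c2",
        "type": "bluestore"
    }

Since the scan ran without --cluster, the file says "ceph", and activate therefore targets /var/lib/ceph/osd/ceph-0 instead of /var/lib/ceph/osd/test-0.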

Passing the cluster name on activate has no effect:

# ceph-volume --cluster test simple activate 0 4e01d64c-82d0-4bde-a9b4-db82cb4413c2
Running command: /bin/mount -v /dev/sdb1 /var/lib/ceph/osd/ceph-0
 stderr: mount: mount point /var/lib/ceph/osd/ceph-0 does not exist
-->  RuntimeError: command returned non-zero exit status: 32
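
A minimal sketch of why the flag changes nothing at this stage (assumed, simplified logic; the function name and JSON key are illustrative, not ceph-volume's actual code):

    import json

    def mount_point_for(osd_id, osd_fsid):
        # activate re-reads the cluster name from the JSON written at scan
        # time; a --cluster given on the activate command line is never
        # consulted when the mount point is built
        path = '/etc/ceph/osd/{}-{}.json'.format(osd_id, osd_fsid)
        with open(path) as f:
            metadata = json.load(f)
        cluster = metadata.get('cluster_name', 'ceph')
        return '/var/lib/ceph/osd/{}-{}'.format(cluster, osd_id)

So with the file from the first scan, mount_point_for(0, '4e01d64c-82d0-4bde-a9b4-db82cb4413c2') still yields /var/lib/ceph/osd/ceph-0.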

The JSON file must be recreated with the cluster name passed as a parameter, and then activation works (the resulting change in the file is sketched after the listing):

  1. ceph-volume --cluster test simple scan /dev/sdb1 --force
    Running command: /usr/sbin/cryptsetup status /dev/sdb1
    stderr: Device sdb1 not found
    --> OSD 0 got scanned and metadata persisted to file: /etc/ceph/osd/0-4e01d64c-82d0-4bde-a9b4-db82cb4413c2.json
    --> To take over management of this scanned OSD, and disable ceph-disk and udev, run:
    --> ceph-volume simple activate 0 4e01d64c-82d0-4bde-a9b4-db82cb4413c2
  2. ceph-volume --cluster test simple activate 0 4e01d64c-82d0-4bde-a9b4-db82cb4413c2
    Running command: /bin/ln -snf /dev/sdc2 /var/lib/ceph/osd/test-0/block.wal
    Running command: /bin/chown -R ceph:ceph /dev/sdc2
    Running command: /bin/ln -snf /dev/sdc1 /var/lib/ceph/osd/test-0/block.db
    Running command: /bin/chown -R ceph:ceph /dev/sdc1
    Running command: /bin/ln -snf /dev/sdb2 /var/lib/ceph/osd/test-0/block
    Running command: /bin/chown -R ceph:ceph /dev/sdb2
    Running command: /bin/systemctl enable ceph-volume@simple-0-4e01d64c-82d0-4bde-a9b4-db82cb4413c2
    stderr: Created symlink from to /usr/lib/systemd/system/ceph-volume@.service.
    Running command: /bin/ln -sf /dev/null /etc/systemd/system/ceph-disk@.service
    Running command: /bin/systemctl enable --runtime ceph-osd@0
    Running command: /bin/systemctl start ceph-osd@0
    --> Successfully activated OSD 0 with FSID 4e01d64c-82d0-4bde-a9b4-db82cb4413c2
    --> All ceph-disk systemd units have been disabled to prevent OSDs getting triggered by UDEV events
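
After the forced rescan, the same file presumably carries the name that was passed in (again a sketch, key name assumed):

    {
        "cluster_name": "test",
        ...
    }

which is why activate now sets everything up under /var/lib/ceph/osd/test-0.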

ceph-volume blindly puts the cluster name passed as a parameter into the generated JSON files.

ceph-volume should instead use the OSD directory name to discover the cluster name automatically.
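
A minimal sketch of that suggestion (hypothetical helper, not part of ceph-volume): since ceph-disk already mounted the OSD under /var/lib/ceph/osd/<cluster>-<id>, the name can be read back from the directory:

    import os

    def discover_cluster_name(osd_id, base='/var/lib/ceph/osd'):
        # a directory such as /var/lib/ceph/osd/test-0 implies cluster 'test'
        suffix = '-{}'.format(osd_id)
        for entry in os.listdir(base):
            if entry.endswith(suffix):
                return entry[:-len(suffix)]
        return 'ceph'  # fall back to the default cluster name

Here discover_cluster_name(0) would return 'test', so scan could write the correct cluster_name without any extra flag.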
