Bug #24762 (closed)
"ceph-volume scan" does not detect cluster name different than 'ceph'
Status:
Rejected
Priority:
Normal
Assignee:
-
Target version:
-
% Done:
0%
Regression:
No
Severity:
3 - minor
Description
Running "ceph-volume simple scan" on the data partition of an OSD configured with ceph-disk, in a cluster named 'test', reports 'ceph' as the cluster name.
Note that both approaches, pointing at the mountpoint AND at the data partition, produce the same incorrect result:
With the mountpoint:
[root@ceph-osd0 ~]# ceph-volume simple scan /var/lib/ceph/osd/test-1/ --force
 stderr: lsblk: /var/lib/ceph/osd/test-1: not a block device
 stderr: lsblk: /var/lib/ceph/osd/test-1: not a block device
Running command: /usr/sbin/cryptsetup status /dev/sda1
 stderr: Device sda1 not found
--> OSD 1 got scanned and metadata persisted to file: /etc/ceph/osd/1-a8cfc3cc-1294-4eaa-9e2c-1c3ec93062a3.json
--> To take over managment of this scanned OSD, and disable ceph-disk and udev, run:
-->     ceph-volume simple activate 1 a8cfc3cc-1294-4eaa-9e2c-1c3ec93062a3
[root@ceph-osd0 ~]# cat /etc/ceph/osd/1-a8cfc3cc-1294-4eaa-9e2c-1c3ec93062a3.json
{
    "active": "ok",
    "ceph_fsid": "b4c56e01-ab71-4b24-93d0-adafd139847f",
    "cluster_name": "ceph",
    "data": {
        "path": "/dev/sda1",
        "uuid": "a8cfc3cc-1294-4eaa-9e2c-1c3ec93062a3"
    },
    "fsid": "a8cfc3cc-1294-4eaa-9e2c-1c3ec93062a3",
    "journal": {
        "path": "/dev/disk/by-partuuid/e9764491-22b6-4c16-8304-c9320c887d03",
        "uuid": "e9764491-22b6-4c16-8304-c9320c887d03"
    },
    "journal_uuid": "e9764491-22b6-4c16-8304-c9320c887d03",
    "keyring": "AQCksTNbKOjxLxAA/EKIN0Zc7Qt5Iux8TeNHlA==",
    "magic": "ceph osd volume v026",
    "ready": "ready",
    "systemd": "",
    "type": "filestore",
    "whoami": 1
}
With the device:
[root@ceph-osd0 ~]# ceph-volume simple scan /dev/sda1
Running command: /usr/sbin/cryptsetup status /dev/sda1
 stderr: Device sda1 not found
--> OSD 1 got scanned and metadata persisted to file: /etc/ceph/osd/1-a8cfc3cc-1294-4eaa-9e2c-1c3ec93062a3.json
--> To take over managment of this scanned OSD, and disable ceph-disk and udev, run:
-->     ceph-volume simple activate 1 a8cfc3cc-1294-4eaa-9e2c-1c3ec93062a3
[root@ceph-osd0 ~]# cat /etc/ceph/osd/1-a8cfc3cc-1294-4eaa-9e2c-1c3ec93062a3.json
{
    "active": "ok",
    "ceph_fsid": "b4c56e01-ab71-4b24-93d0-adafd139847f",
    "cluster_name": "ceph",
    "data": {
        "path": "/dev/sda1",
        "uuid": "a8cfc3cc-1294-4eaa-9e2c-1c3ec93062a3"
    },
    "fsid": "a8cfc3cc-1294-4eaa-9e2c-1c3ec93062a3",
    "journal": {
        "path": "/dev/disk/by-partuuid/e9764491-22b6-4c16-8304-c9320c887d03",
        "uuid": "e9764491-22b6-4c16-8304-c9320c887d03"
    },
    "journal_uuid": "e9764491-22b6-4c16-8304-c9320c887d03",
    "keyring": "AQCksTNbKOjxLxAA/EKIN0Zc7Qt5Iux8TeNHlA==",
    "magic": "ceph osd volume v026",
    "ready": "ready",
    "systemd": "",
    "type": "filestore",
    "whoami": 1
}
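The expected behaviour is illustrated by the sketch below: ceph-disk mounts OSD data at /var/lib/ceph/osd/{cluster}-{id}, so for the mountpoint case the cluster name 'test' is recoverable from the directory name itself rather than defaulting to 'ceph'. This is a hypothetical helper for discussion, not ceph-volume's actual implementation; the function name is an assumption.

import os

def cluster_from_mountpoint(path):
    """Derive (cluster_name, osd_id) from a ceph-disk style mountpoint.

    Hypothetical sketch: relies only on the /var/lib/ceph/osd/{cluster}-{id}
    naming convention shown in the report above.
    """
    # Normalize so a trailing slash (as in the report) does not matter.
    name = os.path.basename(os.path.normpath(path))
    # Split on the first '-': everything before is the cluster name.
    cluster, _, osd_id = name.partition('-')
    return cluster, osd_id

# The mountpoint from the report should yield 'test', not 'ceph':
print(cluster_from_mountpoint('/var/lib/ceph/osd/test-1/'))  # ('test', '1')

For the device case (/dev/sda1) no such path hint exists, so the scanner would need another source for the cluster name, e.g. the mounted OSD's metadata.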