Bug #36589
ceph-volume: generate bad clustername in /etc/ceph/osd files by default.
Description
I'm on ceph version 13.2.2 (02899bfda814146b021136e9d8e80eba494e1126) mimic (stable)
I tried to convert a ceph-disk BlueStore OSD to ceph-volume.
The cluster is called 'test'
# ceph-volume simple scan /dev/sdb1
Running command: /usr/sbin/cryptsetup status /dev/sdb1
 stderr: Device sdb1 not found
--> OSD 0 got scanned and metadata persisted to file: /etc/ceph/osd/0-4e01d64c-82d0-4bde-a9b4-db82cb4413c2.json
--> To take over managment of this scanned OSD, and disable ceph-disk and udev, run:
--> ceph-volume simple activate 0 4e01d64c-82d0-4bde-a9b4-db82cb4413c2
# ceph-volume simple activate 0 4e01d64c-82d0-4bde-a9b4-db82cb4413c2
Running command: /bin/mount -v /dev/sdb1 /var/lib/ceph/osd/ceph-0
 stderr: mount: mount point /var/lib/ceph/osd/ceph-0 does not exist
--> RuntimeError: command returned non-zero exit status: 32
Passing the cluster name on activate has no effect:
# ceph-volume --cluster test simple activate 0 4e01d64c-82d0-4bde-a9b4-db82cb4413c2
Running command: /bin/mount -v /dev/sdb1 /var/lib/ceph/osd/ceph-0
 stderr: mount: mount point /var/lib/ceph/osd/ceph-0 does not exist
--> RuntimeError: command returned non-zero exit status: 32
The JSON file must be recreated with the cluster name passed as a parameter; only then does activation work:
- ceph-volume --cluster test simple scan /dev/sdb1 --force
Running command: /usr/sbin/cryptsetup status /dev/sdb1
stderr: Device sdb1 not found
--> OSD 0 got scanned and metadata persisted to file: /etc/ceph/osd/0-4e01d64c-82d0-4bde-a9b4-db82cb4413c2.json
--> To take over managment of this scanned OSD, and disable ceph-disk and udev, run:
--> ceph-volume simple activate 0 4e01d64c-82d0-4bde-a9b4-db82cb4413c2
- ceph-volume --cluster test simple activate 0 4e01d64c-82d0-4bde-a9b4-db82cb4413c2
Running command: /bin/ln -snf /dev/sdc2 /var/lib/ceph/osd/test-0/block.wal
Running command: /bin/chown -R ceph:ceph /dev/sdc2
Running command: /bin/ln -snf /dev/sdc1 /var/lib/ceph/osd/test-0/block.db
Running command: /bin/chown -R ceph:ceph /dev/sdc1
Running command: /bin/ln -snf /dev/sdb2 /var/lib/ceph/osd/test-0/block
Running command: /bin/chown -R ceph:ceph /dev/sdb2
Running command: /bin/systemctl enable ceph-volume@simple-0-4e01d64c-82d0-4bde-a9b4-db82cb4413c2
stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@simple-0-4e01d64c-82d0-4bde-a9b4-db82cb4413c2.service to /usr/lib/systemd/system/ceph-volume@.service.
Running command: /bin/ln -sf /dev/null /etc/systemd/system/ceph-disk@.service
Running command: /bin/systemctl enable --runtime ceph-osd@0
Running command: /bin/systemctl start ceph-osd@0
--> Successfully activated OSD 0 with FSID 4e01d64c-82d0-4bde-a9b4-db82cb4413c2
--> All ceph-disk systemd units have been disabled to prevent OSDs getting triggered by UDEV events
ceph-volume blindly writes the cluster name passed as a parameter into the generated JSON files.
ceph-volume should use the OSD directory name to discover the cluster name automatically.
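As a workaround, instead of re-running `scan --force`, one could patch the cluster name directly in the persisted metadata file. This is a minimal sketch, assuming the scan output is a flat JSON object with a `cluster_name` key (as `ceph-volume simple scan` produces in this version); the path below is the one from the report:

```python
import json

def set_cluster_name(json_path, cluster_name):
    """Rewrite the cluster_name field in a ceph-volume 'simple scan' JSON file.

    Assumes the metadata is a flat JSON object containing a
    'cluster_name' key, as written by `ceph-volume simple scan`.
    """
    with open(json_path) as f:
        meta = json.load(f)
    meta["cluster_name"] = cluster_name
    with open(json_path, "w") as f:
        json.dump(meta, f, indent=4)
    return meta

# Usage (path taken from this report):
# set_cluster_name(
#     "/etc/ceph/osd/0-4e01d64c-82d0-4bde-a9b4-db82cb4413c2.json", "test")
```

After patching the file, `ceph-volume simple activate` would pick up the corrected cluster name from the JSON metadata.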
History
#1 Updated by Mehdi Abaakouk over 5 years ago
- Subject changed from ceph-volume: generate bad /etc/ceph/osd files by default. to ceph-volume: generate bad clustername in /etc/ceph/osd files by default.
#2 Updated by Mehdi Abaakouk over 5 years ago
- Project changed from Ceph to ceph-volume
#3 Updated by Mehdi Abaakouk over 5 years ago
- Assignee set to Mehdi Abaakouk
#4 Updated by Alfredo Deza over 5 years ago
- Status changed from New to Rejected
An admin who deploys Ceph with a custom cluster name is required to use --cluster=name everywhere. There is no way to programmatically scan a ceph-disk provisioned OSD and determine what cluster name it has. It cannot use the "directory name" as suggested, because we are able to scan unmounted devices, or mounted devices in non-standard locations.
We default to 'ceph' as the cluster name, just like everywhere else, unless specified otherwise.