Bug #22281
ceph-volume lvm create fails at vgcreate on leftover partition table
Status: Closed
% Done: 0%
Description
When you try to use a disk that has a leftover, empty partition table, ceph-volume fails at vgcreate. It does not clean up after itself, though, and leaves the OSD in the CRUSH map along with its auth keys.
root@sumi1:/etc/ceph# parted /dev/sdg print
Model: ATA SAMSUNG MZ7KM240 (scsi)
Disk /dev/sdg: 240GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start  End  Size  File system  Name  Flags

root@sumi1:/etc/ceph# ceph-volume lvm create --bluestore --data /dev/sdg
Running command: sudo vgcreate --force --yes ceph-7602184b-8d09-4dcf-8f91-16c90136f225 /dev/sdg
 stderr: Device /dev/sdg not found (or ignored by filtering).
-->  RuntimeError: command returned non-zero exit status: 5

root@sumi1:/etc/ceph# ceph osd tree
ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
-1       3.49139 root default
-3       1.30933     host sumi1
 0   ssd 0.21819         osd.0      up  1.00000 1.00000
 3   ssd 0.21819         osd.3      up  1.00000 1.00000
 6   ssd 0.21819         osd.6      up  1.00000 1.00000
 9   ssd 0.21819         osd.9      up  1.00000 1.00000
13   ssd 0.21829         osd.13     up  1.00000 1.00000
14   ssd 0.21829         osd.14     up  1.00000 1.00000
-5       1.09103     host sumi2
 1   ssd 0.21819         osd.1      up  1.00000 1.00000
 4   ssd 0.21819         osd.4      up  1.00000 1.00000
 7   ssd 0.21819         osd.7      up  1.00000 1.00000
10   ssd 0.21819         osd.10     up  1.00000 1.00000
16   ssd 0.21829         osd.16     up  1.00000 1.00000
-7       1.09103     host sumi3
 2   ssd 0.21819         osd.2      up  1.00000 1.00000
 5   ssd 0.21819         osd.5      up  1.00000 1.00000
 8   ssd 0.21819         osd.8      up  1.00000 1.00000
11   ssd 0.21819         osd.11     up  1.00000 1.00000
12   ssd 0.21829         osd.12     up  1.00000 1.00000
15             0 osd.15           down        0 1.00000

root@sumi1:/etc/ceph# ceph auth get osd.15
exported keyring for osd.15
[osd.15]
        key = AQBU9x9aAP7hOBAAITpG8cZMOFwGtW88EyHucw==
        caps mgr = "allow profile osd"
        caps mon = "allow profile osd"
        caps osd = "allow *"
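A possible workaround until this is fixed, assuming the disk really carries only the leftover, empty GPT shown above, is to wipe the stale signatures before retrying:

wipefs --all /dev/sdg        # remove all filesystem and partition-table signatures
sgdisk --zap-all /dev/sdg    # alternatively, zap the GPT and any protective MBR
partprobe /dev/sdg           # have the kernel re-read the (now empty) table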
Updated by Alfredo Deza over 6 years ago
- Category set to 135
- Assignee set to Alfredo Deza
Updated by Alwin Antreich over 6 years ago
If you want to add a separate journal/db/wal, it fails with the following error when a leftover partition table is present. There is no cleanup here either.
Running command: sudo lvcreate --yes -l 100%FREE -n osd-block-bed1d941-610e-410b-a009-d3d051406815 ceph-f12665b3-71a3-4a39-aed1-ee140939978e
 stdout: Logical volume "osd-block-bed1d941-610e-410b-a009-d3d051406815" created.
--> blkid could not detect a PARTUUID for device: /dev/nvme1n1
-->  RuntimeError: unable to use device
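For context on the blkid error: when given a plain device path for db/wal, ceph-volume apparently looks up a PARTUUID, and only GPT partitions carry one; a whole raw device such as /dev/nvme1n1 does not. A quick way to see the difference (the 10G partition size below is an arbitrary example):

blkid -s PARTUUID -o value /dev/nvme1n1      # whole device: prints nothing
sgdisk --new=1:0:+10G /dev/nvme1n1           # carve out a partition first
blkid -s PARTUUID -o value /dev/nvme1n1p1    # the partition has a PARTUUID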
A small nit on 'ceph-volume lvm create/prepare --help': it says "device", but should it maybe say "partition", since you need one to use the block.db/wal/journal option?
Optionally, can consume db and wal devices or logical volumes:

    ceph-volume lvm prepare --bluestore --data {vg/lv} --block.wal {device} --block-db {vg/lv}

---

--block.db BLOCK_DB    (bluestore) Path to bluestore block.db logical volume or device
--block.wal BLOCK_WAL  (bluestore) Path to bluestore block.wal logical volume or device
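To illustrate the vg/lv form the help text mentions, a rough sketch (the VG name ceph-db and LV name db-sdg are made up for this example):

vgcreate ceph-db /dev/nvme1n1p1                 # VG on an existing partition
lvcreate --yes -l 100%FREE -n db-sdg ceph-db    # LV to hold block.db
ceph-volume lvm prepare --bluestore --data /dev/sdg --block.db ceph-db/db-sdg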
Updated by Alfredo Deza over 6 years ago
We are going to have to purge here, after capturing a failure.
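For reference, the manual cleanup such a purge needs to perform looks roughly like this, using osd.15 from the tree above (a sketch, not the eventual fix):

ceph osd purge osd.15 --yes-i-really-mean-it    # Luminous and later: one step

# Equivalent step-by-step form on older releases:
ceph osd crush remove osd.15
ceph auth del osd.15
ceph osd rm osd.15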
On the help menu nit: I am not sure we should say "partition", because you can also pass in a vg/lv; certainly not a whole raw device, though.
Updated by Alfredo Deza over 6 years ago
- Project changed from Ceph to ceph-volume
- Category deleted (135)
Updated by Alfredo Deza over 6 years ago
- Status changed from New to In Progress
- Priority changed from Normal to High
Updated by Alfredo Deza over 6 years ago
- Status changed from In Progress to Fix Under Review
PR open at https://github.com/ceph/ceph/pull/19351
Updated by Alfredo Deza over 6 years ago
- Status changed from Fix Under Review to Resolved
merged commit ef1b266 into master
Updated by Alfredo Deza over 6 years ago
Luminous backport https://github.com/ceph/ceph/pull/19531