Bug #22281

ceph-volume lvm create fails at vgcreate on leftover partition table

Added by Alwin Antreich about 1 year ago. Updated about 1 year ago.

Status:
Resolved
Priority:
High
Assignee:
Target version:
-
Start date:
11/30/2017
Due date:
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:

Description

When you try to use a disk that has an empty (leftover) partition table, ceph-volume fails at vgcreate. It also doesn't clean up after itself, leaving the OSD in the CRUSH map and its auth keys behind.

root@sumi1:/etc/ceph# parted /dev/sdg print
Model: ATA SAMSUNG MZ7KM240 (scsi)
Disk /dev/sdg: 240GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start  End  Size  File system  Name  Flags

root@sumi1:/etc/ceph# ceph-volume lvm create --bluestore --data /dev/sdg
Running command: sudo vgcreate --force --yes ceph-7602184b-8d09-4dcf-8f91-16c90136f225 /dev/sdg
 stderr: Device /dev/sdg not found (or ignored by filtering).
-->  RuntimeError: command returned non-zero exit status: 5
root@sumi1:/etc/ceph# ceph osd tree
ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF 
-1       3.49139 root default                           
-3       1.30933     host sumi1                         
 0   ssd 0.21819         osd.0      up  1.00000 1.00000 
 3   ssd 0.21819         osd.3      up  1.00000 1.00000 
 6   ssd 0.21819         osd.6      up  1.00000 1.00000 
 9   ssd 0.21819         osd.9      up  1.00000 1.00000 
13   ssd 0.21829         osd.13     up  1.00000 1.00000 
14   ssd 0.21829         osd.14     up  1.00000 1.00000 
-5       1.09103     host sumi2                         
 1   ssd 0.21819         osd.1      up  1.00000 1.00000 
 4   ssd 0.21819         osd.4      up  1.00000 1.00000 
 7   ssd 0.21819         osd.7      up  1.00000 1.00000 
10   ssd 0.21819         osd.10     up  1.00000 1.00000 
16   ssd 0.21829         osd.16     up  1.00000 1.00000 
-7       1.09103     host sumi3                         
 2   ssd 0.21819         osd.2      up  1.00000 1.00000 
 5   ssd 0.21819         osd.5      up  1.00000 1.00000 
 8   ssd 0.21819         osd.8      up  1.00000 1.00000 
11   ssd 0.21819         osd.11     up  1.00000 1.00000 
12   ssd 0.21829         osd.12     up  1.00000 1.00000 
15             0 osd.15           down        0 1.00000 

root@sumi1:/etc/ceph# ceph auth get osd.15
exported keyring for osd.15
[osd.15]
    key = AQBU9x9aAP7hOBAAITpG8cZMOFwGtW88EyHucw==
    caps mgr = "allow profile osd" 
    caps mon = "allow profile osd" 
    caps osd = "allow *" 
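As a manual workaround until ceph-volume handles this, wiping the stale GPT header before retrying lets vgcreate see the disk. This is a sketch using standard tools (sgdisk, wipefs, partprobe), not part of ceph-volume itself; the device path matches the example above, so double-check it on your own system before wiping:

```shell
# WARNING: destroys all data and signatures on /dev/sdg.
sgdisk --zap-all /dev/sdg    # remove the leftover GPT (and protective MBR)
wipefs --all /dev/sdg        # clear any remaining filesystem/LVM signatures
partprobe /dev/sdg           # have the kernel re-read the now-empty disk

# retry the command that failed
ceph-volume lvm create --bluestore --data /dev/sdg
```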

History

#1 Updated by Alfredo Deza about 1 year ago

  • Category set to 135
  • Assignee set to Alfredo Deza

#2 Updated by Alwin Antreich about 1 year ago

If you want to add a separate journal/db/wal and a leftover partition table is present, it fails with the following error. Again, no cleanup happens here.

Running command: sudo lvcreate --yes -l 100%FREE -n osd-block-bed1d941-610e-410b-a009-d3d051406815 ceph-f12665b3-71a3-4a39-aed1-ee140939978e
 stdout: Logical volume "osd-block-bed1d941-610e-410b-a009-d3d051406815" created.
--> blkid could not detect a PARTUUID for device: /dev/nvme1n1
-->  RuntimeError: unable to use device

A small nit on 'ceph-volume lvm create/prepare --help': it says "device", but should it maybe say "partition", since you need one to use the block.db/wal/journal option?

  Optionally, can consume db and wal devices or logical volumes:

      ceph-volume lvm prepare --bluestore --data {vg/lv} --block.wal {device} --block-db {vg/lv}

---

  --block.db BLOCK_DB   (bluestore) Path to bluestore block.db logical volume
                        or device
  --block.wal BLOCK_WAL
                        (bluestore) Path to bluestore block.wal logical volume
                        or device
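For reference, an invocation that passes a vg/lv for block.db rather than a raw device might look like the sketch below; the volume group and logical volume names are placeholders, not taken from this report:

```shell
# block.db / block.wal must point at a partition or an existing vg/lv,
# not a whole raw device; the names below are illustrative only.
ceph-volume lvm prepare --bluestore \
    --data ceph-block/block-lv \
    --block.db ceph-db/db-lv
```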

#3 Updated by Alfredo Deza about 1 year ago

We are going to have to purge here, after capturing a failure.

On the help menu nit: not sure if we should say "partition", because you can also pass in a vg/lv; certainly not a whole raw device, though.
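In the meantime, the manual cleanup for the stray OSD entry would look roughly like this (osd.15 is the leftover entry from the report; `ceph osd purge`, available since Luminous, removes the OSD from the CRUSH map, the OSD map, and auth in one step):

```shell
# remove the stray entry left behind by the failed create
ceph osd purge 15 --yes-i-really-mean-it

# on pre-Luminous releases the equivalent steps are separate:
# ceph osd crush remove osd.15
# ceph auth del osd.15
# ceph osd rm 15
```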

#4 Updated by Alfredo Deza about 1 year ago

  • Project changed from Ceph to ceph-volume
  • Category deleted (135)

#5 Updated by Alfredo Deza about 1 year ago

  • Status changed from New to In Progress
  • Priority changed from Normal to High

#6 Updated by Alfredo Deza about 1 year ago

  • Status changed from In Progress to Need Review

#7 Updated by Alfredo Deza about 1 year ago

  • Status changed from Need Review to Resolved

merged commit ef1b266 into master

#8 Updated by Alfredo Deza about 1 year ago

pushed to mimic-dev1
