Bug #22281


ceph-volume lvm create fails at vgcreate on leftover partition table

Added by Alwin Antreich over 6 years ago. Updated over 6 years ago.

Status: Resolved
Priority: High
Target version: -
% Done: 0%
Regression: No
Severity: 3 - minor

Description

When you try to use a disk that has a leftover (empty) partition table, ceph-volume fails at vgcreate. It also does not clean up after itself, leaving the new OSD in the CRUSH map and its auth keys behind.

root@sumi1:/etc/ceph# parted /dev/sdg print
Model: ATA SAMSUNG MZ7KM240 (scsi)
Disk /dev/sdg: 240GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start  End  Size  File system  Name  Flags

root@sumi1:/etc/ceph# ceph-volume lvm create --bluestore --data /dev/sdg
Running command: sudo vgcreate --force --yes ceph-7602184b-8d09-4dcf-8f91-16c90136f225 /dev/sdg
 stderr: Device /dev/sdg not found (or ignored by filtering).
-->  RuntimeError: command returned non-zero exit status: 5
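
The vgcreate failure appears to come from LVM filtering: "Device /dev/sdg not found (or ignored by filtering)" suggests LVM is rejecting the device because it still carries a GPT signature. As a workaround, wiping the leftover partition table before re-running ceph-volume should let vgcreate proceed. A minimal sketch, assuming the disk really is disposable:

```shell
# DANGER: destroys all data and signatures on /dev/sdg -- double-check the device name.
# Remove the filesystem/partition-table signatures that make LVM ignore the disk.
wipefs --all /dev/sdg

# Alternatively, zap the GPT structures explicitly (sgdisk is in the gdisk package):
# sgdisk --zap-all /dev/sdg

# Have the kernel re-read the (now empty) partition table, then retry.
partprobe /dev/sdg
ceph-volume lvm create --bluestore --data /dev/sdg
```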
root@sumi1:/etc/ceph# ceph osd tree
ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF 
-1       3.49139 root default                           
-3       1.30933     host sumi1                         
 0   ssd 0.21819         osd.0      up  1.00000 1.00000 
 3   ssd 0.21819         osd.3      up  1.00000 1.00000 
 6   ssd 0.21819         osd.6      up  1.00000 1.00000 
 9   ssd 0.21819         osd.9      up  1.00000 1.00000 
13   ssd 0.21829         osd.13     up  1.00000 1.00000 
14   ssd 0.21829         osd.14     up  1.00000 1.00000 
-5       1.09103     host sumi2                         
 1   ssd 0.21819         osd.1      up  1.00000 1.00000 
 4   ssd 0.21819         osd.4      up  1.00000 1.00000 
 7   ssd 0.21819         osd.7      up  1.00000 1.00000 
10   ssd 0.21819         osd.10     up  1.00000 1.00000 
16   ssd 0.21829         osd.16     up  1.00000 1.00000 
-7       1.09103     host sumi3                         
 2   ssd 0.21819         osd.2      up  1.00000 1.00000 
 5   ssd 0.21819         osd.5      up  1.00000 1.00000 
 8   ssd 0.21819         osd.8      up  1.00000 1.00000 
11   ssd 0.21819         osd.11     up  1.00000 1.00000 
12   ssd 0.21829         osd.12     up  1.00000 1.00000 
15             0 osd.15           down        0 1.00000 

root@sumi1:/etc/ceph# ceph auth get osd.15
exported keyring for osd.15
[osd.15]
    key = AQBU9x9aAP7hOBAAITpG8cZMOFwGtW88EyHucw==
    caps mgr = "allow profile osd" 
    caps mon = "allow profile osd" 
    caps osd = "allow *" 
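
Until ceph-volume rolls back on failure, the leftover osd.15 entry has to be removed by hand. A sketch of the cleanup (osd.15 taken from the output above; `ceph osd purge` assumes Luminous or later):

```shell
# One-shot removal (Luminous+): drops the OSD from the CRUSH map,
# deletes its auth key, and removes the OSD id in a single step.
ceph osd purge osd.15 --yes-i-really-mean-it

# Equivalent three-step form on older releases:
# ceph osd crush remove osd.15
# ceph auth del osd.15
# ceph osd rm osd.15
```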