Bug #6979
ceph-disk breaks when sgdisk is not available
Status:
Closed
% Done:
0%
Source:
Community (user)
Tags:
Backport:
Regression:
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
$ sudo ceph-disk list
Traceback (most recent call last):
  File "/usr/sbin/ceph-disk", line 2346, in <module>
    main()
  File "/usr/sbin/ceph-disk", line 2335, in main
    args.func(args)
  File "/usr/sbin/ceph-disk", line 2012, in main_list
    part_uuid = get_partition_uuid(dev)
  File "/usr/sbin/ceph-disk", line 1922, in get_partition_uuid
    stderr = subprocess.PIPE).stdout.read()
  File "/usr/lib/python2.7/subprocess.py", line 709, in __init__
    errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1326, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory

$ ceph -v
ceph version 0.72-447-g096f9b3 (096f9b3268b998226f1bf081e56bf28a0573121d)
Looks to be caused by lack of sgdisk:
The failure is in get_partition_uuid, at line 1922 of ceph-disk (marked below):
def get_partition_uuid(dev):
    (base, partnum) = re.match('(\D+)(\d+)', dev).group(1, 2)
    out = subprocess.Popen(
        ['sgdisk', '-i', partnum, base],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE).stdout.read()  # <======= line 1922
    for line in out.splitlines():
        m = re.match('Partition unique GUID: (\S+)', line)
        if m:
            return m.group(1).lower()
    return None
(installing sgdisk fixed it).
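For illustration only (this is not the actual patch from the pull request), a minimal sketch of how the lookup could fail with a clear message instead of an opaque OSError when sgdisk is not in PATH; the optional sgdisk parameter here is a hypothetical addition for testability:

```python
import re
import shutil
import subprocess


def get_partition_uuid(dev, sgdisk='sgdisk'):
    """Return the partition's unique GUID, or None if not found.

    Checks for the sgdisk executable up front so a missing tool
    produces an actionable error rather than OSError: [Errno 2].
    """
    if shutil.which(sgdisk) is None:
        raise RuntimeError(
            '%s not found in PATH; install the gdisk package' % sgdisk)
    base, partnum = re.match(r'(\D+)(\d+)', dev).group(1, 2)
    out = subprocess.check_output([sgdisk, '-i', partnum, base],
                                  stderr=subprocess.DEVNULL)
    for line in out.decode().splitlines():
        m = re.match(r'Partition unique GUID: (\S+)', line)
        if m:
            return m.group(1).lower()
    return None
```

With this guard, a box without gdisk installed gets "sgdisk not found in PATH" rather than a traceback ending deep inside subprocess.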
Updated by Alfredo Deza over 10 years ago
- Status changed from 12 to Fix Under Review
Pull request opened: https://github.com/ceph/ceph/pull/932
Updated by Mark Kirkwood over 10 years ago
This was covered on the mailing list, but I figured it would make sense to note it here too: it seems likely that this can only happen with custom installs (from source in particular), as the Ceph packages (e.g. Debian's) have gdisk as a dependency.
I encountered this only on the machine where I compile and install ceph from source.
Updated by Sage Weil over 10 years ago
- Status changed from Fix Under Review to Resolved