Bug #5734
ceph-deploy osd prepare <part1>:<part2> fails, tries to look up /sys/block/<part1>
Status:
Duplicate
Priority:
Urgent
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
Source:
Community (user)
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
A user was trying to run ceph-deploy osd prepare with two different partitions and got this:
$ ceph-deploy osd prepare 'tessa:/dev/sda4:/dev/sdb4'
ceph-disk-prepare -- /dev/sda4 /dev/sdb4 returned 1
Traceback (most recent call last):
  File "/usr/sbin/ceph-disk", line 2263, in <module>
    main()
  File "/usr/sbin/ceph-disk", line 2252, in main
    args.func(args)
  File "/usr/sbin/ceph-disk", line 1084, in main_prepare
    verify_not_in_use(args.data)
  File "/usr/sbin/ceph-disk", line 316, in verify_not_in_use
    for partition in list_partitions(dev):
  File "/usr/sbin/ceph-disk", line 233, in list_partitions
    for name in os.listdir(os.path.join('/sys/block', base)):
OSError: [Errno 2] No such file or directory: '/sys/block/sda4'
ceph-deploy: Failed to create 1 OSDs
It seems to be treating the first partition as if it were a whole device.
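The underlying problem is visible in the traceback: list_partitions was handed a partition name (sda4) and tried to enumerate its partitions under /sys/block/sda4, but sysfs only exposes whole disks at that level (partitions live under /sys/block/sda/sda4). A minimal sketch of the kind of check that avoids this, using a simple name heuristic rather than ceph-disk's actual sysfs logic (the helper names and the regex here are illustrative, not from ceph-disk):

```python
import os
import re

def is_partition_name(dev):
    # Heuristic sketch: a node name like 'sda4' (letters followed by
    # trailing digits) is a partition of 'sda'. Real code should check
    # for /sys/class/block/<name>/partition on a live Linux system.
    name = os.path.basename(dev)
    return bool(re.match(r'^[a-z]+\d+$', name))

def base_device_name(dev):
    # Strip the trailing partition number to recover the whole-disk
    # name that actually exists under /sys/block.
    name = os.path.basename(dev)
    return re.sub(r'\d+$', '', name)

# With such a check, code like verify_not_in_use can branch: for a
# whole disk, list its partitions; for a partition, only test the
# partition itself instead of listing /sys/block/<partition>.
```

For example, base_device_name('/dev/sda4') gives 'sda', which is a valid /sys/block entry, while listing '/sys/block/sda4' directly is what raised the OSError above.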
Related issues
History
#1 Updated by Dan Mick over 10 years ago
- Project changed from Ceph to devops
#2 Updated by Sage Weil over 10 years ago
- Status changed from New to 7
As far as I can tell this is fixed by the ceph-disk in next; we just haven't backported the fixes yet. I pushed a branch that does that, wip-cuttlefish-ceph-disk. Can you test it out?
ceph-deploy install --dev=wip-cuttlefish-ceph-disk HOST
will get it installed. Thanks!
#3 Updated by Olivier Bonvalet over 10 years ago
I confirm, it works (with and without giving journal). Thanks !
#4 Updated by Sage Weil over 10 years ago
- Status changed from 7 to Resolved
#5 Updated by Sage Weil over 10 years ago
- Status changed from Resolved to Pending Backport