Bug #21493
Status: Closed
ceph-disk prepare cannot find partition that does exist
Description
When using ceph-disk prepare as provided by ceph-osd-10.2.9-0.el7.x86_64 on /dev/vdb, which exists on my system, I get a stack trace [1] ending in [Errno 2] No such file or directory: '/dev/vdb1'. Unfortunately the failure is intermittent: if I zap the device and re-run prepare, the error does not appear, but it recurs often enough to fail a ceph-ansible deployment. I am using ceph-docker [2].
[1]
Traceback (most recent call last):
File "/usr/sbin/ceph-disk", line 9, in <module>
load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5095, in run
main(sys.argv[1:])
File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5046, in main
args.func(args)
File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1855, in main
Prepare.factory(args).prepare()
File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1844, in prepare
self.prepare_locked()
File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1875, in prepare_locked
self.data.prepare(self.journal)
File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2542, in prepare
self.prepare_device(*to_prepare_list)
File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2705, in prepare_device
self.set_data_partition()
File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2632, in set_data_partition
self.partition = self.create_data_partition()
File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2616, in create_data_partition
return device.get_partition(partition_number)
File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1622, in get_partition
path=self.path, dev=dev, args=self.args)
File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1685, in factory
(dev is not None and is_mpath(dev))):
File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 535, in is_mpath
uuid = get_dm_uuid(dev)
File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 522, in get_dm_uuid
uuid_path = os.path.join(block_path(dev), 'dm', 'uuid')
File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 516, in block_path
rdev = os.stat(path).st_rdev
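The traceback ends in os.stat() raising ENOENT on /dev/vdb1, i.e. ceph-disk stats the new partition's device node immediately after creating the partition. The intermittent nature is consistent with a race where udev has not yet created the node when the stat runs. As an illustration only (this is not ceph-disk code; the function name, timeout, and interval below are my own), a tolerant caller could poll for the node instead of failing on the first ENOENT:

```python
import errno
import os
import time


def wait_for_partition(dev, timeout=10.0, interval=0.2):
    """Poll until the partition device node exists.

    Returns the os.stat() result for `dev`, or re-raises
    OSError(ENOENT) if the node never appears within `timeout`
    seconds. Illustrative sketch, not part of ceph-disk.
    """
    deadline = time.time() + timeout
    while True:
        try:
            # Same call that fails in block_path() in the traceback.
            return os.stat(dev)
        except OSError as e:
            if e.errno != errno.ENOENT or time.time() >= deadline:
                raise
            # udev may still be creating the node; wait and retry.
            time.sleep(interval)
```

In practice one would call this before using the node, e.g. wait_for_partition('/dev/vdb1'); running `udevadm settle` after partitioning addresses the same window from the shell side.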
[2] Workaround: zap and re-run:
docker run -ti --privileged=true -v /dev/:/dev/ -e OSD_DEVICE=/dev/vdb docker.io/ceph/daemon:tag-build-master-jewel-centos-7 zap_device
docker start ceph-osd-prepare-overcloud-cephstorage-1-devdevvdb; docker logs -f ceph-osd-prepare-overcloud-cephstorage-1-devdevvdb
[root@overcloud-cephstorage-1 ~]# docker run -ti --entrypoint=bash docker.io/ceph/daemon:tag-build-master-jewel-centos-7
[root@e902236a00de /]# rpm -qa | grep ceph
libcephfs1-10.2.9-0.el7.x86_64
python-cephfs-10.2.9-0.el7.x86_64
ceph-base-10.2.9-0.el7.x86_64
ceph-osd-10.2.9-0.el7.x86_64
ceph-radosgw-10.2.9-0.el7.x86_64
ceph-release-1-1.el7.noarch
ceph-common-10.2.9-0.el7.x86_64
ceph-selinux-10.2.9-0.el7.x86_64
ceph-mds-10.2.9-0.el7.x86_64
ceph-mon-10.2.9-0.el7.x86_64