Bug #22305

Updated by Alfredo Deza over 5 years ago

ceph-volume lvm prepare --bluestore --data osd.9/osd.9

[2017-12-01 17:25:25,279][ceph_volume.process][INFO ] Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 063c7de3-d4b2-463b-9f56-7a76b0b48197
[2017-12-01 17:25:25,940][ceph_volume.process][INFO ] stdout 25
[2017-12-01 17:25:25,940][ceph_volume.process][INFO ] Running command: sudo lvs --noheadings --separator=";" -o lv_tags,lv_path,lv_name,vg_name,lv_uuid
[2017-12-01 17:25:25,977][ceph_volume.process][INFO ] stdout ";"/dev/LVM0/CEPH";"CEPH";"LVM0";"y4Al1c-SFHH-VARl-XQf3-Qsc8-H3MN-LLIIj4
[2017-12-01 17:25:25,978][ceph_volume.process][INFO ] stdout ";"/dev/LVM0/ROOT";"ROOT";"LVM0";"31V3cd-E2b1-LcDz-2loq-egvh-lz4e-3u20ZN
[2017-12-01 17:25:25,978][ceph_volume.process][INFO ] stdout ";"/dev/LVM0/SWAP";"SWAP";"LVM0";"hI3cNL-sddl-yXFB-BOXT-5R6j-fDtZ-kNixYa
[2017-12-01 17:25:25,978][ceph_volume.process][INFO ] stdout d77bfa9f-4d8d-40df-852a-692a94929ed2";"/dev/osd.9/osd.9";"osd.9";"osd.9";"3NAmK8-U3Fx-KUOm-f8x8-aEtO-MbYh-uPGHhR
[2017-12-01 17:25:25,979][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/ceph_volume/", line 59, in newfunc
return f(*a, **kw)
File "/usr/lib/python2.7/dist-packages/ceph_volume/", line 144, in main
terminal.dispatch(self.mapper, subcommand_args)
File "/usr/lib/python2.7/dist-packages/ceph_volume/", line 131, in dispatch
File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/", line 38, in main
terminal.dispatch(self.mapper, self.argv)
File "/usr/lib/python2.7/dist-packages/ceph_volume/", line 131, in dispatch
File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/", line 293, in main
File "/usr/lib/python2.7/dist-packages/ceph_volume/", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/", line 206, in prepare
block_lv = self.get_lv(
File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/", line 102, in get_lv
return api.get_lv(lv_name=lv_name, vg_name=vg_name)
File "/usr/lib/python2.7/dist-packages/ceph_volume/api/", line 162, in get_lv
lvs = Volumes()
File "/usr/lib/python2.7/dist-packages/ceph_volume/api/", line 411, in __init__
File "/usr/lib/python2.7/dist-packages/ceph_volume/api/", line 416, in _populate
File "/usr/lib/python2.7/dist-packages/ceph_volume/api/", line 638, in __init__
self.tags = parse_tags(kw['lv_tags'])
File "/usr/lib/python2.7/dist-packages/ceph_volume/api/", line 66, in parse_tags
key, value = tag_assignment.split('=', 1)
ValueError: need more than 1 value to unpack
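The crash happens in parse_tags, which assumes every LVM tag is a key=value pair. The lvs output above shows the LV carries a bare tag (the UUID `d77bfa9f-4d8d-40df-852a-692a94929ed2`) with no `=` in it, so the two-element unpack fails. A minimal sketch reproducing the failure (simplified stand-in for ceph_volume.api.lvm.parse_tags, not the actual source):

```python
def parse_tags(lv_tags):
    """Simplified stand-in for ceph-volume's tag parser."""
    tag_mapping = {}
    for tag_assignment in lv_tags.split(','):
        # Assumes 'key=value'; a bare tag yields a 1-element split
        # and the unpack raises ValueError.
        key, value = tag_assignment.split('=', 1)
        tag_mapping[key] = value
    return tag_mapping

print(parse_tags('ceph.osd_id=9'))  # works: {'ceph.osd_id': '9'}

try:
    # A bare UUID tag, as seen on /dev/osd.9/osd.9 in the log above:
    parse_tags('d77bfa9f-4d8d-40df-852a-692a94929ed2')
except ValueError as e:
    print(type(e).__name__)  # ValueError
```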

Additional info:
  --- Volume group ---
  VG Name               osd.9

  --- Logical volume ---
  LV Path               /dev/osd.9/osd.9
  LV Name               osd.9
  VG Name               osd.9

lvs -o vg_tags /dev/osd.9/osd.9
  VG Tags

lvs -o lv_tags /dev/osd.9/osd.9
  LV Tags

Use case:
Just like ceph-volume, we use LVM tags to identify a known UUID (which corresponds to a disk in our setup).

Alfredo Deza commented on this mailing list thread:
"Looks like there is a tag in there that broke it. Lets follow up on a
tracker issue so that we don't hijack this thread?"

Ideally, ceph-volume would tolerate multiple pre-existing tags it did not set itself; LVM supports multiple tags per VG and LV.
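One possible approach (a sketch of a tolerant parser, not the actual upstream fix): skip any tag that does not contain `=` instead of letting the unpack raise, so foreign tags set by other tooling are ignored rather than crashing the whole listing:

```python
def parse_tags_tolerant(lv_tags):
    """Parse 'k1=v1,k2=v2' into a dict, ignoring tags without '='.

    Hypothetical variant of ceph-volume's parse_tags: bare tags
    (e.g. a plain UUID set by other tooling) are skipped.
    """
    tag_mapping = {}
    if not lv_tags:
        return tag_mapping
    for tag_assignment in lv_tags.split(','):
        if '=' not in tag_assignment:
            continue  # foreign/bare tag -- not ours, ignore it
        key, value = tag_assignment.split('=', 1)
        tag_mapping[key] = value
    return tag_mapping

# Mixed input: one ceph tag plus the bare UUID from this report.
print(parse_tags_tolerant(
    'ceph.osd_id=9,d77bfa9f-4d8d-40df-852a-692a94929ed2'))
# -> {'ceph.osd_id': '9'}
```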