Bug #24790


Can not activate data partition on bluestore

Added by Sébastien Han almost 6 years ago. Updated almost 6 years ago.

Status: Closed
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source: Community (user)
Tags: -
Backport: -
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

OSD was configured using:

ceph-volume lvm prepare --no-systemd --bluestore --data /dev/sda

The activation was successful and I could run it multiple times without issues:

[root@ceph-osd0 /]# ceph-volume lvm activate --no-systemd /dev/sda
Running command: /bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-c8643732-44a6-4db4-9cde-b9b9a4487e1b/osd-block-1ac9a173-4a83-4dc4-b1a1-4343f72236c3 --path /var/lib/ceph/osd/ceph-0
Running command: /bin/ln -snf /dev/ceph-c8643732-44a6-4db4-9cde-b9b9a4487e1b/osd-block-1ac9a173-4a83-4dc4-b1a1-4343f72236c3 /var/lib/ceph/osd/ceph-0/block
Running command: /bin/chown -R ceph:ceph /dev/mapper/ceph--c8643732--44a6--4db4--9cde--b9b9a4487e1b-osd--block--1ac9a173--4a83--4dc4--b1a1--4343f72236c3
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
--> ceph-volume lvm activate successful for osd ID: 0

Then I created a new OSD on the same box and I got:

[2018-07-06 12:11:37,484][ceph_volume.main][INFO ] Running command: ceph-volume lvm activate --no-systemd /dev/sda
[2018-07-06 12:11:37,485][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -o lv_tags,lv_path,lv_name,vg_name,lv_uuid
[2018-07-06 12:11:37,712][ceph_volume.process][INFO ] stdout ";"/dev/VolGroup00/LogVol00";"LogVol00";"VolGroup00";"gqFct1-q2eW-5I7a-IQl2-g2Vm-Nsiq-B2CqHo
[2018-07-06 12:11:37,713][ceph_volume.process][INFO ] stdout ";"/dev/VolGroup00/LogVol01";"LogVol01";"VolGroup00";"T8lHub-uZnx-CX9A-354Z-Jd3P-fTPq-dve374
[2018-07-06 12:11:37,713][ceph_volume.process][INFO ] stdout ceph.block_device=/dev/ceph-b7c0fcd0-eede-4169-a874-43c730c7c9a9/osd-block-2751c5d7-20a7-4d58-b3bc-e1609694c260,ceph.block_uuid=7JpHAu-nlpf-Yr8V-3Yiz-s5zz-5OdB-fcI7wW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=0a9374ec-2998-47df-9a7f-d030e5b9c261,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.encrypted=0,ceph.osd_fsid=2751c5d7-20a7-4d58-b3bc-e1609694c260,ceph.osd_id=1,ceph.type=block,ceph.vdo=0";"/dev/ceph-b7c0fcd0-eede-4169-a874-43c730c7c9a9/osd-block-2751c5d7-20a7-4d58-b3bc-e1609694c260";"osd-block-2751c5d7-20a7-4d58-b3bc-e1609694c260";"ceph-b7c0fcd0-eede-4169-a874-43c730c7c9a9";"7JpHAu-nlpf-Yr8V-3Yiz-s5zz-5OdB-fcI7wW
[2018-07-06 12:11:37,713][ceph_volume.process][INFO ] stdout ceph.block_device=/dev/ceph-c8643732-44a6-4db4-9cde-b9b9a4487e1b/osd-block-1ac9a173-4a83-4dc4-b1a1-4343f72236c3,ceph.block_uuid=KkDdPs-A5YR-3qeQ-CfuF-5jNE-FBTj-hcfRyH,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=0a9374ec-2998-47df-9a7f-d030e5b9c261,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.encrypted=0,ceph.osd_fsid=1ac9a173-4a83-4dc4-b1a1-4343f72236c3,ceph.osd_id=0,ceph.type=block,ceph.vdo=0";"/dev/ceph-c8643732-44a6-4db4-9cde-b9b9a4487e1b/osd-block-1ac9a173-4a83-4dc4-b1a1-4343f72236c3";"osd-block-1ac9a173-4a83-4dc4-b1a1-4343f72236c3";"ceph-c8643732-44a6-4db4-9cde-b9b9a4487e1b";"KkDdPs-A5YR-3qeQ-CfuF-5jNE-FBTj-hcfRyH
[2018-07-06 12:11:37,713][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python2.7/site-packages/ceph_volume/main.py", line 153, in main
    terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python2.7/site-packages/ceph_volume/terminal.py", line 182, in dispatch
    instance.main()
  File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/main.py", line 38, in main
    terminal.dispatch(self.mapper, self.argv)
  File "/usr/lib/python2.7/site-packages/ceph_volume/terminal.py", line 182, in dispatch
    instance.main()
  File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/activate.py", line 318, in main
    self.activate(args)
  File "/usr/lib/python2.7/site-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/activate.py", line 242, in activate
    activate_bluestore(lvs, no_systemd=args.no_systemd)
  File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/activate.py", line 117, in activate_bluestore
    osd_lv = lvs.get(lv_tags={'ceph.type': 'block'})
  File "/usr/lib/python2.7/site-packages/ceph_volume/api/lvm.py", line 632, in get
    raise MultipleLVsError(lv_name, lv_path)
MultipleLVsError: Got more than 1 result looking for None with path: None
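
For context, the failing call filters LVs only by the tag ceph.type=block; with two bluestore OSDs on the host, both block LVs carry that tag, and since no lv_name or lv_path was given the error reports "None with path: None". Below is a minimal Python sketch of that matching behaviour, simplified from what the traceback shows; it is an illustrative stand-in, not the actual ceph_volume code:

# Simplified stand-in for the tag-based LV lookup from the traceback.
class MultipleLVsError(Exception):
    def __init__(self, lv_name, lv_path):
        super(MultipleLVsError, self).__init__(
            'Got more than 1 result looking for %s with path: %s' % (lv_name, lv_path))

def get_lv(lvs, lv_name=None, lv_path=None, lv_tags=None):
    """Return the single LV whose tags contain every requested key/value."""
    matches = [lv for lv in lvs
               if all(lv['tags'].get(k) == v for k, v in (lv_tags or {}).items())]
    if len(matches) > 1:
        # With two bluestore OSDs, {'ceph.type': 'block'} matches both LVs.
        raise MultipleLVsError(lv_name, lv_path)
    return matches[0] if matches else None

# Tag values abbreviated from the lvs dump above.
lvs = [
    {'name': 'osd-block-1ac9a173', 'tags': {
        'ceph.type': 'block', 'ceph.osd_id': '0',
        'ceph.osd_fsid': '1ac9a173-4a83-4dc4-b1a1-4343f72236c3'}},
    {'name': 'osd-block-2751c5d7', 'tags': {
        'ceph.type': 'block', 'ceph.osd_id': '1',
        'ceph.osd_fsid': '2751c5d7-20a7-4d58-b3bc-e1609694c260'}},
]

try:
    get_lv(lvs, lv_tags={'ceph.type': 'block'})
except MultipleLVsError as e:
    print(e)  # Got more than 1 result looking for None with path: None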

#1

Updated by Alfredo Deza almost 6 years ago

  • Status changed from New to 4

I think this is related to issue #24791.

You aren't supposed to be allowed to pass in '/dev/sda'. What happens here is that, because the string is accepted, no further filtering is done to narrow the lookup down to a single OSD, so multiple LVs are matched (see the sketch at the end of this comment). It should work if you do this:

ceph-volume lvm activate 0 2751c5d7-20a7-4d58-b3bc-e1609694c260

Or if you do:

ceph-volume lvm activate --all

Can you confirm this is the case? If so, I would like to close this in favor of #24791, which would fix this as well.
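
For illustration, this is why the suggested invocations work, continuing the hypothetical sketch from the description (get_lv and lvs are the illustrative stand-ins defined there, not the real ceph_volume API): supplying the OSD id and fsid puts them into the tag filter, so exactly one LV matches.

# Narrowing the filter with the osd_id/osd_fsid from 'lvm activate 0 <fsid>'
# leaves a single match, so no MultipleLVsError is raised.
lv = get_lv(lvs, lv_tags={
    'ceph.type': 'block',
    'ceph.osd_id': '0',
    'ceph.osd_fsid': '1ac9a173-4a83-4dc4-b1a1-4343f72236c3',
})
print(lv['name'])  # osd-block-1ac9a173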

#2

Updated by Sébastien Han almost 6 years ago

Correct: ceph-volume lvm activate 0 1ac9a173-4a83-4dc4-b1a1-4343f72236c3 --no-systemd

Works.

Same for ceph-volume lvm activate --all --no-systemd.

We can close this in favour of #24791.

Thanks

#3

Updated by Alfredo Deza almost 6 years ago

  • Status changed from 4 to Closed
#4

Updated by Sebastian Wagner over 3 years ago

  • Related to Bug #44682: weird osd create failure added
#5

Updated by Jan Fajerski over 3 years ago

  • Related to deleted (Bug #44682: weird osd create failure)