
Bug #43209

When presented a 'dm' c-v should not be recognised if it's a ceph member

Added by Sébastien Han 12 months ago. Updated 12 months ago.

Status: New
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature:

Description

Example:

2019-12-09 14:05:17.462753 I | exec: Running command: stdbuf -oL ceph-volume lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/dm-1 --report
2019-12-09 14:05:21.724570 I | 
2019-12-09 14:05:21.724593 I | Total OSDs: 1
2019-12-09 14:05:21.724595 I | 
2019-12-09 14:05:21.724598 I |   Type            Path                                                    LV Size         % of device
2019-12-09 14:05:21.724600 I | ----------------------------------------------------------------------------------------------------
2019-12-09 14:05:21.724602 I |   [data]          /dev/dm-1                                               28.00 GB        100%
2019-12-09 14:05:21.729508 I | exec: Running command: stdbuf -oL ceph-volume lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/dm-1
2019-12-09 14:05:26.008826 I | Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-6fadc4a9-fddc-4efe-85d5-d718fa78e598 /dev/dm-1
2019-12-09 14:05:26.253308 I |  stderr: Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will manage logical volume symlinks in device directory.
2019-12-09 14:05:26.253527 I |  stderr: Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will obtain device list by scanning device directory.
2019-12-09 14:05:26.266202 I |  stderr: Device /dev/dm-1 excluded by a filter.
2019-12-09 14:05:26.294780 I | Traceback (most recent call last):
2019-12-09 14:05:26.294804 I |   File "/usr/sbin/ceph-volume", line 9, in <module>
2019-12-09 14:05:26.294808 I |     load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
2019-12-09 14:05:26.294811 I |   File "/usr/lib/python2.7/site-packages/ceph_volume/main.py", line 38, in __init__
2019-12-09 14:05:26.294814 I |     self.main(self.argv)
2019-12-09 14:05:26.294817 I |   File "/usr/lib/python2.7/site-packages/ceph_volume/decorators.py", line 59, in newfunc
2019-12-09 14:05:26.294820 I |     return f(*a, **kw)
2019-12-09 14:05:26.294823 I |   File "/usr/lib/python2.7/site-packages/ceph_volume/main.py", line 149, in main
2019-12-09 14:05:26.294827 I |     terminal.dispatch(self.mapper, subcommand_args)
2019-12-09 14:05:26.294830 I |   File "/usr/lib/python2.7/site-packages/ceph_volume/terminal.py", line 194, in dispatch
2019-12-09 14:05:26.294833 I |     instance.main()
2019-12-09 14:05:26.294835 I |   File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/main.py", line 40, in main
2019-12-09 14:05:26.294837 I |     terminal.dispatch(self.mapper, self.argv)
2019-12-09 14:05:26.294841 I |   File "/usr/lib/python2.7/site-packages/ceph_volume/terminal.py", line 194, in dispatch
2019-12-09 14:05:26.294882 I |     instance.main()
2019-12-09 14:05:26.294899 I |   File "/usr/lib/python2.7/site-packages/ceph_volume/decorators.py", line 16, in is_root
2019-12-09 14:05:26.294934 I |     return func(*a, **kw)
2019-12-09 14:05:26.294938 I |   File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/batch.py", line 325, in main
2019-12-09 14:05:26.294942 I |     self.execute()
2019-12-09 14:05:26.294958 I |   File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/batch.py", line 288, in execute
2019-12-09 14:05:26.295026 I |     self.strategy.execute()
2019-12-09 14:05:26.295039 I |   File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/strategies/bluestore.py", line 101, in execute
2019-12-09 14:05:26.295041 I |     vg = lvm.create_vg(osd['data']['path'])
2019-12-09 14:05:26.295044 I |   File "/usr/lib/python2.7/site-packages/ceph_volume/api/lvm.py", line 440, in create_vg
2019-12-09 14:05:26.295130 I |     name] + devices
2019-12-09 14:05:26.295146 I |   File "/usr/lib/python2.7/site-packages/ceph_volume/process.py", line 153, in run
2019-12-09 14:05:26.295148 I |     raise RuntimeError(msg)
2019-12-09 14:05:26.295150 I | RuntimeError: command returned non-zero exit status: 5
failed to configure devices. failed to configure devices with ceph-volume. failed to initialize devices. failed ceph-volume. Failed to complete '': exit status 1.

ceph-volume (c-v) is not able to recognise the volume:

[root@rook-ceph-osd-0-69f66df89-9grqn /]# ceph-volume lvm list /dev/dm-1
No valid Ceph devices found

And this device is indeed owned by ceph:


[root@rook-ceph-osd-0-69f66df89-9grqn /]# lsblk /dev/dm-2 --bytes --pairs --output NAME,SIZE,TYPE,PKNAME
NAME="ceph--89fa04fa--b93a--4874--9364--c95be3ec01c6-osd--data--70847bdb--2ec1--4874--98ba--d87d4860a70d" SIZE="31138512896" TYPE="lvm" PKNAME="" 
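For reference, the long NAME that lsblk prints is the device-mapper encoding of the VG/LV pair: a literal '-' inside a VG or LV name is escaped as '--', and the single remaining '-' separates the VG part from the LV part. A small sketch of that decoding (the function name is ours, not part of ceph-volume):

```python
def mapper_name_to_vg_lv(name):
    """Split a /dev/mapper device name into (vg, lv).

    In mapper names a literal '-' inside a VG or LV name is escaped
    as '--'; the single remaining '-' separates the VG from the LV.
    """
    sentinel = "\0"                      # cannot appear in an LVM name
    protected = name.replace("--", sentinel)
    vg, _, lv = protected.partition("-")
    return vg.replace(sentinel, "-"), lv.replace(sentinel, "-")

# The lsblk NAME above decodes to:
#   vg = ceph-89fa04fa-b93a-4874-9364-c95be3ec01c6
#   lv = osd-data-70847bdb-2ec1-4874-98ba-d87d4860a70d
```

The "ceph-<uuid>" VG prefix matches the vgcreate call visible in the log, which is what makes the device identifiable as a ceph member in the first place.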

History

#1 Updated by Jan Fajerski 12 months ago

Is this bug for the list sub command?

On the creation code paths these devices are explicitly ignored. It was coded like this from the start; I'm not exactly sure why. Maybe Alfredo has more insights?

Also see https://tracker.ceph.com/issues/42502

#2 Updated by Sébastien Han 12 months ago

No, it's not coming from c-v inventory or list. It's coming from a Rook inventory (that I'm fixing at the moment).

But the inventory sees it as available, which is concerning:

[root@rook-ceph-osd-0-69f66df89-9grqn /]# ceph-volume inventory 

Device Path               Size         rotates available Model name
/dev/dm-0                 29.00 GB     True    True      
/dev/dm-1                 29.00 GB     True    True      
/dev/dm-2                 29.00 GB     True    True      
/dev/vda                  18.63 GB     True    False     
/dev/vdb                  30.00 GB     True    False     
/dev/vdc                  30.00 GB     True    False     
/dev/vdd                  30.00 GB     True    False

#3 Updated by Jan Fajerski 12 months ago

Sébastien Han wrote:

No, it's not coming from c-v inventory or list. It's coming from a Rook inventory (that I'm fixing at the moment).

But the inventory sees it as available, which is concerning:

Agreed. I assume this is not a master build? Maybe https://tracker.ceph.com/issues/42777 is related?

#4 Updated by Sébastien Han 12 months ago

Indeed it's ceph version 14.2.4 (75f4de193b3ea58512f204623e6c5a16e6c1e1ba) nautilus (stable).
I can try on master soon.

#5 Updated by Alfredo Deza 12 months ago

I don't think it should recognize it, and this ticket doesn't explain why it should. The batch and lvm commands require a device or an LV, because the code is relying on tooling and processes to determine metadata everywhere. I don't know where or how accepting device mapper paths will break the current implementation but it will certainly break assumptions made everywhere.

My recommendation would be to not use a device mapper directly.
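One way a caller such as Rook could follow that recommendation is to translate a bare /dev/dm-N node into its /dev/mapper/<name> equivalent (via the kernel's /sys/block/dm-N/dm/name attribute) before handing it to ceph-volume. A minimal sketch, assuming that standard sysfs layout; resolve_dm_path and its sysfs parameter are our own names, the parameter added only so the lookup can be pointed at a test tree:

```python
import os
import re


def resolve_dm_path(path, sysfs="/sys"):
    """If *path* is a bare /dev/dm-N node, return the /dev/mapper/<name>
    path for the same device; otherwise return *path* unchanged.

    Reads the kernel's /sys/block/dm-N/dm/name attribute. The `sysfs`
    argument exists only so tests can point at a fake tree.
    """
    m = re.fullmatch(r"/dev/(dm-\d+)", path)
    if not m:
        return path
    name_file = os.path.join(sysfs, "block", m.group(1), "dm", "name")
    with open(name_file) as f:
        return "/dev/mapper/" + f.read().strip()
```

With a translation like this in front, ceph-volume only ever sees device or LV paths it already knows how to reason about, which sidesteps the assumptions Alfredo mentions.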

#6 Updated by Sébastien Han 12 months ago

  • Subject changed from when presented a 'dm' c-v should recognise it to when presented a 'dm' c-v should not recognise it if a ceph member already

#7 Updated by Sébastien Han 12 months ago

Alfredo Deza wrote:

I don't think it should recognize it, and this ticket doesn't explain why it should. The batch and lvm commands require a device or an LV, because the code is relying on tooling and processes to determine metadata everywhere. I don't know where or how accepting device mapper paths will break the current implementation but it will certainly break assumptions made everywhere.

My recommendation would be to not use a device mapper directly.

Sorry, I just realized that I made a typo in my bug title. Fixed.

#8 Updated by Sébastien Han 12 months ago

  • Subject changed from when presented a 'dm' c-v should not recognise it if a ceph member already to When presented a 'dm' c-v should not be recognised if it's a ceph member
