Bug #48784

Ceph-volume lvm batch fails with AttributeError: module 'ceph_volume.api.lvm' has no attribute 'is_lv'

Added by Michal Nasiadka 16 days ago. Updated 1 day ago.

Status:
Fix Under Review
Priority:
Normal
Assignee:
-
Category:
ceph cli
Target version:
% Done:

0%

Source:
Community (user)
Tags:
Backport:
pacific, octopus, nautilus
Regression:
No
Severity:
2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature:

Description

I'm creating an OSD using orchestrator:
ceph orch daemon add osd host:/dev/ceph-osd01/ceph-osd01-lv
(I also tried ceph orch daemon add osd host:ceph-osd1-lv with the same effect)

Output:
Error EINVAL: Traceback (most recent call last):
File "/usr/share/ceph/mgr/mgr_module.py", line 1177, in _handle_command
return self.handle_command(inbuf, cmd)
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 141, in handle_command
return dispatch[cmd['prefix']].call(self, cmd, inbuf)
File "/usr/share/ceph/mgr/mgr_module.py", line 318, in call
return self.func(mgr, **kwargs)
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 103, in <lambda>
wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs)
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 92, in wrapper
return func(*args, **kwargs)
File "/usr/share/ceph/mgr/orchestrator/module.py", line 755, in _daemon_add_osd
raise_if_exception(completion)
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 643, in raise_if_exception
raise e
RuntimeError: cephadm exited with an error code: 1, stderr:/bin/docker:stderr --> AttributeError: module 'ceph_volume.api.lvm' has no attribute 'is_lv'
Traceback (most recent call last):
File "<stdin>", line 6113, in <module>
File "<stdin>", line 1300, in _infer_fsid
File "<stdin>", line 1383, in _infer_image
File "<stdin>", line 3613, in command_ceph_volume
File "<stdin>", line 1062, in call_throws
RuntimeError: Failed command: /bin/docker run --rm --ipc=host --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk -e CONTAINER_IMAGE=docker.io/ceph/ceph:v15.2.8 -e NODE_NAME=ccp0 -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -v /var/run/ceph/4aaf9419-b281-4672-9f6d-2f549ac7c774:/var/run/ceph:z -v /var/log/ceph/4aaf9419-b281-4672-9f6d-2f549ac7c774:/var/log/ceph:z -v /var/lib/ceph/4aaf9419-b281-4672-9f6d-2f549ac7c774/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /tmp/ceph-tmpel7ydyyn:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpyjaly_u4:/var/lib/ceph/bootstrap-osd/ceph.keyring:z docker.io/ceph/ceph:v15.2.8 lvm batch --no-auto /dev/ceph-osd01 --yes --no-systemd

[root@cct1 ~]# ceph orch daemon add osd ccp0:/dev/ceph-osd01^C
[root@cct1 ~]# ceph orch daemon add osd ccp0:ceph-osd01-lv
Error EINVAL: Traceback (most recent call last):
File "/usr/share/ceph/mgr/mgr_module.py", line 1177, in _handle_command
return self.handle_command(inbuf, cmd)
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 141, in handle_command
return dispatch[cmd['prefix']].call(self, cmd, inbuf)
File "/usr/share/ceph/mgr/mgr_module.py", line 318, in call
return self.func(mgr, **kwargs)
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 103, in <lambda>
wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs)
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 92, in wrapper
return func(*args, **kwargs)
File "/usr/share/ceph/mgr/orchestrator/module.py", line 755, in _daemon_add_osd
raise_if_exception(completion)
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 643, in raise_if_exception
raise e
RuntimeError: cephadm exited with an error code: 1, stderr:/bin/docker:stderr --> AttributeError: module 'ceph_volume.api.lvm' has no attribute 'is_lv'
Traceback (most recent call last):
File "<stdin>", line 6113, in <module>
File "<stdin>", line 1300, in _infer_fsid
File "<stdin>", line 1383, in _infer_image
File "<stdin>", line 3613, in command_ceph_volume
File "<stdin>", line 1062, in call_throws
RuntimeError: Failed command: /bin/docker run --rm --ipc=host --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk -e CONTAINER_IMAGE=docker.io/ceph/ceph:v15.2.8 -e NODE_NAME=ccp0 -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -v /var/run/ceph/4aaf9419-b281-4672-9f6d-2f549ac7c774:/var/run/ceph:z -v /var/log/ceph/4aaf9419-b281-4672-9f6d-2f549ac7c774:/var/log/ceph:z -v /var/lib/ceph/4aaf9419-b281-4672-9f6d-2f549ac7c774/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /tmp/ceph-tmp1tthbb8m:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmp75katdgz:/var/lib/ceph/bootstrap-osd/ceph.keyring:z docker.io/ceph/ceph:v15.2.8 lvm batch --no-auto ceph-osd01-lv --yes --no-systemd
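The failure mode is a plain Python AttributeError: the cephadm side invokes ceph-volume code that references lvm.is_lv, but the ceph-volume module shipped in the v15.2.8 image does not define that attribute. A minimal sketch of the mismatch, and of a version-tolerant guard a caller could use (the module object and helper below are stand-ins for illustration, not the real ceph_volume API):

```python
import types

# Stand-in for an older ceph_volume.api.lvm that predates is_lv (hypothetical).
old_lvm = types.ModuleType("lvm")

# Looking up a missing module attribute raises AttributeError,
# matching the "module ... has no attribute 'is_lv'" line in the log.
try:
    old_lvm.is_lv("/dev/ceph-osd01/ceph-osd01-lv")
except AttributeError as exc:
    print(exc)

# A defensive caller can guard the lookup instead of assuming the
# attribute exists across ceph-volume releases (illustrative pattern only).
def looks_like_lv(lvm_module, path):
    if hasattr(lvm_module, "is_lv"):
        return lvm_module.is_lv(path)
    return None  # unknown on releases without is_lv
```

This only illustrates why the mixed-version combination fails; the actual fix is on the ceph-volume/cephadm side, tracked by the pull request referenced below.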

History

#1 Updated by Jan Fajerski 1 day ago

  • Status changed from New to Fix Under Review
  • Backport set to pacific, octopus, nautilus
  • Pull request ID set to 38869
