Bug #63918

17.2.7 ceph-volume errors out if no valid s

Added by Jon Sherwood 5 months ago. Updated 5 months ago.

Status: Pending Backport
Priority: Normal
Category: common
Target version:
% Done: 0%
Source: Community (user)
Tags: ceph-volume backport_processed
Backport: quincy, reef
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

It seems ceph-volume errors out if no valid block devices are found on a host. This did not happen with 17.2.6. Perhaps initializing the variable would be helpful here?

admin@cephnode:~$ sudo cephadm ceph-volume inventory
Inferring fsid <fsid>
Using ceph image with id '921993c4dfd2' and tag 'v17' created on 2023-11-22 16:03:22 +0000 UTC
quay.io/ceph/ceph@sha256:dad2876c2916b732d060b71320f97111bc961108f9c249f4daa9540957a2b6a2
Non-zero exit code 1 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:dad2876c2916b732d060b71320f97111bc961108f9c249f4daa9540957a2b6a2 -e NODE_NAME=cephnode -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/<fsid>:/var/run/ceph:z -v /var/log/ceph/<fsid>:/var/log/ceph:z -v /var/lib/ceph/<fsid>/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpne3y5g5u:/etc/ceph/ceph.conf:z quay.io/ceph/ceph@sha256:dad2876c2916b732d060b71320f97111bc961108f9c249f4daa9540957a2b6a2 inventory
/usr/bin/docker: stderr Traceback (most recent call last):
/usr/bin/docker: stderr File "/usr/sbin/ceph-volume", line 11, in <module>
/usr/bin/docker: stderr load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
/usr/bin/docker: stderr File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 41, in init
/usr/bin/docker: stderr self.main(self.argv)
/usr/bin/docker: stderr File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
/usr/bin/docker: stderr return f(*a, **kw)
/usr/bin/docker: stderr File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 153, in main
/usr/bin/docker: stderr terminal.dispatch(self.mapper, subcommand_args)
/usr/bin/docker: stderr File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
/usr/bin/docker: stderr instance.main()
/usr/bin/docker: stderr File "/usr/lib/python3.6/site-packages/ceph_volume/inventory/main.py", line 60, in main
/usr/bin/docker: stderr list_all=self.args.list_all))
/usr/bin/docker: stderr File "/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 44, in init
/usr/bin/docker: stderr sys_info.devices = disk.get_devices()
/usr/bin/docker: stderr File "/usr/lib/python3.6/site-packages/ceph_volume/util/disk.py", line 889, in get_devices
/usr/bin/docker: stderr if device_slaves:
/usr/bin/docker: stderr UnboundLocalError: local variable 'device_slaves' referenced before assignment
Traceback (most recent call last):
File "/usr/sbin/cephadm", line 9653, in <module>
main()
File "/usr/sbin/cephadm", line 9641, in main
r = ctx.func(ctx)
File "/usr/sbin/cephadm", line 2153, in _infer_config
return func(ctx)
File "/usr/sbin/cephadm", line 2098, in _infer_fsid
return func(ctx)
File "/usr/sbin/cephadm", line 2181, in _infer_image
return func(ctx)
File "/usr/sbin/cephadm", line 2056, in _validate_fsid
return func(ctx)
File "/usr/sbin/cephadm", line 6254, in command_ceph_volume
out, err, code = call_throws(ctx, c.run_cmd(), verbosity=CallVerbosity.QUIET_UNLESS_ERROR)
File "/usr/sbin/cephadm", line 1853, in call_throws
raise RuntimeError('Failed command: %s' % ' '.join(command))
RuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:dad2876c2916b732d060b71320f97111bc961108f9c249f4daa9540957a2b6a2 -e NODE_NAME=cephnode -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/<fsid>:/var/run/ceph:z -v /var/log/ceph/<fsid>:/var/log/ceph:z -v /var/lib/ceph/<fsid>/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpne3y5g5u:/etc/ceph/ceph.conf:z quay.io/ceph/ceph@sha256:dad2876c2916b732d060b71320f97111bc961108f9c249f4daa9540957a2b6a2 inventory
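
For illustration, here is a minimal sketch of the failure mode and of the initialization fix suggested above. The structure is condensed from the get_devices() frame in the traceback; the is_mapper flag and the surrounding logic are hypothetical stand-ins, not the exact upstream code in ceph_volume/util/disk.py.

import os

def get_devices_buggy(sysdir, is_mapper):
    metadata = {}
    # 'device_slaves' is only assigned on one branch...
    if is_mapper:
        device_slaves = os.listdir(os.path.join(sysdir, 'slaves'))
    # ...but it is read unconditionally, so any device that skips the branch
    # raises UnboundLocalError, matching the traceback at disk.py line 889.
    if device_slaves:
        metadata['device_nodes'] = ','.join(device_slaves)
    return metadata

def get_devices_fixed(sysdir, is_mapper):
    metadata = {}
    # Initializing the variable first makes the later read safe on every path.
    device_slaves = []
    if is_mapper:
        device_slaves = os.listdir(os.path.join(sysdir, 'slaves'))
    if device_slaves:
        metadata['device_nodes'] = ','.join(device_slaves)
    return metadata

With this change, hosts where no device takes the assigning branch simply report no device nodes instead of crashing the whole inventory run.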


Related issues (2 open, 1 closed)

Related to ceph-volume - Feature #62362: support lv devices in inventory (Resolved, Guillaume Abrioux)

Copied to Ceph - Backport #63919: quincy: 17.2.7 ceph-volume errors out if no valid s (New, Guillaume Abrioux)
Copied to Ceph - Backport #63920: reef: 17.2.7 ceph-volume errors out if no valid s (New, Guillaume Abrioux)
