Bug #63918 (open): 17.2.7 ceph-volume errors out if no valid s

Added by Jon Sherwood 4 months ago. Updated 4 months ago.

Status: Pending Backport
Priority: Normal
Category: common
Target version:
% Done: 0%
Source: Community (user)
Tags: ceph-volume backport_processed
Backport: quincy reef
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID: 53327
Crash signature (v1):
Crash signature (v2):

Description

It seems ceph-volume errors out if no valid block devices are found on a host. This did not happen with 17.2.6. Perhaps initializing the variable would help here?

admin@cephnode:~$ sudo cephadm ceph-volume inventory
Inferring fsid <fsid>
Using ceph image with id '921993c4dfd2' and tag 'v17' created on 2023-11-22 16:03:22 +0000 UTC
quay.io/ceph/ceph@sha256:dad2876c2916b732d060b71320f97111bc961108f9c249f4daa9540957a2b6a2
Non-zero exit code 1 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:dad2876c2916b732d060b71320f97111bc961108f9c249f4daa9540957a2b6a2 -e NODE_NAME=cephnode -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/<fsid>:/var/run/ceph:z -v /var/log/ceph/<fsid>:/var/log/ceph:z -v /var/lib/ceph/<fsid>/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpne3y5g5u:/etc/ceph/ceph.conf:z quay.io/ceph/ceph@sha256:dad2876c2916b732d060b71320f97111bc961108f9c249f4daa9540957a2b6a2 inventory
/usr/bin/docker: stderr Traceback (most recent call last):
/usr/bin/docker: stderr File "/usr/sbin/ceph-volume", line 11, in <module>
/usr/bin/docker: stderr load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
/usr/bin/docker: stderr File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 41, in __init__
/usr/bin/docker: stderr self.main(self.argv)
/usr/bin/docker: stderr File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
/usr/bin/docker: stderr return f(*a, **kw)
/usr/bin/docker: stderr File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 153, in main
/usr/bin/docker: stderr terminal.dispatch(self.mapper, subcommand_args)
/usr/bin/docker: stderr File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
/usr/bin/docker: stderr instance.main()
/usr/bin/docker: stderr File "/usr/lib/python3.6/site-packages/ceph_volume/inventory/main.py", line 60, in main
/usr/bin/docker: stderr list_all=self.args.list_all))
/usr/bin/docker: stderr File "/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 44, in __init__
/usr/bin/docker: stderr sys_info.devices = disk.get_devices()
/usr/bin/docker: stderr File "/usr/lib/python3.6/site-packages/ceph_volume/util/disk.py", line 889, in get_devices
/usr/bin/docker: stderr if device_slaves:
/usr/bin/docker: stderr UnboundLocalError: local variable 'device_slaves' referenced before assignment
Traceback (most recent call last):
File "/usr/sbin/cephadm", line 9653, in <module>
main()
File "/usr/sbin/cephadm", line 9641, in main
r = ctx.func(ctx)
File "/usr/sbin/cephadm", line 2153, in _infer_config
return func(ctx)
File "/usr/sbin/cephadm", line 2098, in _infer_fsid
return func(ctx)
File "/usr/sbin/cephadm", line 2181, in _infer_image
return func(ctx)
File "/usr/sbin/cephadm", line 2056, in _validate_fsid
return func(ctx)
File "/usr/sbin/cephadm", line 6254, in command_ceph_volume
out, err, code = call_throws(ctx, c.run_cmd(), verbosity=CallVerbosity.QUIET_UNLESS_ERROR)
File "/usr/sbin/cephadm", line 1853, in call_throws
raise RuntimeError('Failed command: %s' % ' '.join(command))
RuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:dad2876c2916b732d060b71320f97111bc961108f9c249f4daa9540957a2b6a2 -e NODE_NAME=cephnode -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/<fsid>:/var/run/ceph:z -v /var/log/ceph/<fsid>:/var/log/ceph:z -v /var/lib/ceph/<fsid>/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpne3y5g5u:/etc/ceph/ceph.conf:z quay.io/ceph/ceph@sha256:dad2876c2916b732d060b71320f97111bc961108f9c249f4daa9540957a2b6a2 inventory
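
The UnboundLocalError in the container traceback comes from ceph_volume/util/disk.py:get_devices(), where device_slaves is only assigned inside a conditional branch and is then referenced unconditionally, so the name is unbound whenever that branch is skipped. Below is a minimal, self-contained Python sketch of that failure pattern and of the initialization fix suggested in the description; everything except the variable name 'device_slaves' is illustrative and is not the actual ceph-volume source.

# Hedged sketch only: mirrors the pattern behind the UnboundLocalError above.
# The function and path handling here are illustrative assumptions, not the
# real ceph-volume implementation.
import os

def get_device_slaves(sys_block_path):
    slaves_dir = os.path.join(sys_block_path, 'slaves')

    # Buggy shape: assigning 'device_slaves' only inside the branch makes it
    # a local name for the whole function, so referencing it when the branch
    # was skipped raises UnboundLocalError:
    #
    #     if os.path.isdir(slaves_dir):
    #         device_slaves = os.listdir(slaves_dir)
    #     if device_slaves:   # <-- UnboundLocalError when slaves_dir is absent
    #         ...
    #
    # Fix suggested in the description: initialize the variable up front.
    device_slaves = []
    if os.path.isdir(slaves_dir):
        device_slaves = os.listdir(slaves_dir)
    return device_slaves

if __name__ == '__main__':
    # A device without a 'slaves' directory now just yields an empty list
    # instead of crashing the whole inventory run.
    print(get_device_slaves('/sys/block/sda'))

The actual fix is the commit and pull request referenced in the comments below.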


Related issues 3 (2 open, 1 closed)

Related to ceph-volume - Feature #62362: support lv devices in inventory (Resolved, Guillaume Abrioux)

Copied to Ceph - Backport #63919: quincy: 17.2.7 ceph-volume errors out if no valid s (New, Guillaume Abrioux)
Copied to Ceph - Backport #63920: reef: 17.2.7 ceph-volume errors out if no valid s (New, Guillaume Abrioux)
#1 - Updated by Jon Sherwood 4 months ago

lsblk output from host that was scanned:

admin@cephnode:~$ sudo lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda      8:0    1 447.1G  0 disk 
├─sda1   8:1    1   512M  0 part 
├─sda2   8:2    1  64.8M  0 part 
├─sda3   8:3    1     2G  0 part 
└─sda4   8:4    1 444.6G  0 part 
sdb      8:16   1 447.1G  0 disk 
├─sdb1   8:17   1   512M  0 part /boot/grub
│                                /boot/efi
├─sdb2   8:18   1  64.8M  0 part 
├─sdb3   8:19   1     2G  0 part 
└─sdb4   8:20   1 444.6G  0 part 
#2 - Updated by Jon Sherwood 4 months ago

Also possibly important: this host uses ZFS pools (zpools) for the boot drives:

admin@cephnode:~$ sudo zpool list -v
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
bpool      1.88G   425M  1.46G        -         -     4%    22%  1.00x    ONLINE  -
  mirror-0  1.88G   425M  1.46G        -         -     4%  22.1%      -    ONLINE
    sda3       -      -      -        -         -      -      -      -    ONLINE
    sdb3       -      -      -        -         -      -      -      -    ONLINE
rpool       444G  21.8G   423G        -         -    26%     4%  1.00x    ONLINE  -
  mirror-0   444G  21.8G   423G        -         -    26%  4.91%      -    ONLINE
    sda4       -      -      -        -         -      -      -      -    ONLINE
    sdb4       -      -      -        -         -      -      -      -    ONLINE
#4 - Updated by Jon Sherwood 4 months ago

Jon Sherwood wrote:
Fixed with https://github.com/ceph/ceph/commit/0e95b27402e46c34586f460d2140af48d03fa305
But it's only available in 19.0?? Can we get it backported to 17.2.X?

#5 - Updated by Casey Bodley 4 months ago

#6 - Updated by Casey Bodley 4 months ago

  • Status changed from New to Pending Backport
  • Assignee set to Guillaume Abrioux
  • Tags set to ceph-volume
  • Backport set to quincy reef
  • Pull request ID set to 53327

https://tracker.ceph.com/issues/62362 was backported to quincy/reef but needs a follow-up fix from https://github.com/ceph/ceph/pull/53327

#7 - Updated by Backport Bot 4 months ago

  • Copied to Backport #63919: quincy: 17.2.7 ceph-volume errors out if no valid s added

#8 - Updated by Backport Bot 4 months ago

  • Copied to Backport #63920: reef: 17.2.7 ceph-volume errors out if no valid s added

#9 - Updated by Backport Bot 4 months ago

  • Tags changed from ceph-volume to ceph-volume backport_processed