Bug #48797

Closed

lvm batch calculates wrong extents

Added by Thomas Brandstetter over 3 years ago. Updated 5 months ago.

Status:
Duplicate
Priority:
Normal
Assignee:
-
Target version:
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
1 - critical
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

With version 15.2.8, rook-ceph-osd-prepare cannot create the configured OSD disks on the affected node because ceph-volume miscalculates the free LVM extents. This works fine under version 15.2.7.
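The log below shows the mismatch directly: after vgcreate, the vgs output reports vg_free_count=238460 with vg_extent_size=4194304 (4 MiB), yet ceph-volume logs "size was passed: 931.51 GB -> 238467" and asks lvcreate for 238467 extents. A minimal sketch reproduces the requested figure from the human-readable size string; the assumption (mine, not stated in the log) is that "GB" here is binary (GiB) and the conversion rounds up, since that is what matches the logged extent count:

```python
import math

GIB = 1024 ** 3
EXTENT_BYTES = 4 * 1024 ** 2   # 4 MiB, from vg_extent_size=4194304 in the vgs output

# ceph-volume derives the LV size from the rounded human-readable device size
# ("931.51 GB") instead of from the VG's actual free extent count.
requested_extents = math.ceil(931.51 * GIB / EXTENT_BYTES)

vg_free_extents = 238460       # vg_free_count reported by vgs after vgcreate;
                               # PV label/metadata overhead accounts for the shortfall

print(requested_extents)                     # 238467 -- the -l value passed to lvcreate
print(requested_extents - vg_free_extents)   # 7 extents more than the VG has free
```

Because the request exceeds the free extent count, lvcreate fails with "insufficient free space (238460 extents): 238467 required" and ceph-volume rolls the OSD back.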

Log:

2020-12-23 13:05:42.102031 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/sdb --report
2020-12-23 13:05:44.216589 D | exec: --> DEPRECATION NOTICE
2020-12-23 13:05:44.220847 D | exec: --> You are using the legacy automatic disk sorting behavior
2020-12-23 13:05:44.220916 D | exec: --> The Pacific release will change the default to --no-auto
2020-12-23 13:05:44.220934 D | exec: --> passed data devices: 1 physical, 0 LVM
2020-12-23 13:05:44.220946 D | exec: --> relative data size: 1.0
2020-12-23 13:05:44.220998 D | exec:
2020-12-23 13:05:44.221014 D | exec: Total OSDs: 1
2020-12-23 13:05:44.221025 D | exec:
2020-12-23 13:05:44.221044 D | exec:   Type            Path                                                    LV Size         % of device
2020-12-23 13:05:44.221057 D | exec: ----------------------------------------------------------------------------------------------------
2020-12-23 13:05:44.221069 D | exec:   data            /dev/sdb                                                931.51 GB       100.00%
2020-12-23 13:05:44.346780 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/sdb
2020-12-23 13:05:50.508209 D | exec: --> DEPRECATION NOTICE
2020-12-23 13:05:50.508319 D | exec: --> You are using the legacy automatic disk sorting behavior
2020-12-23 13:05:50.508335 D | exec: --> The Pacific release will change the default to --no-auto
2020-12-23 13:05:50.508352 D | exec: --> passed data devices: 1 physical, 0 LVM
2020-12-23 13:05:50.508365 D | exec: --> relative data size: 1.0
2020-12-23 13:05:50.508377 D | exec: Running command: /usr/bin/ceph-authtool --gen-print-key
2020-12-23 13:05:50.508401 D | exec: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new d85f60f4-7590-4938-b466-cbfcc80437b6
2020-12-23 13:05:50.508416 D | exec: Running command: /usr/sbin/vgcreate --force --yes ceph-6a7578eb-fe6c-46b1-97e7-d841a6019817 /dev/sdb
2020-12-23 13:05:50.508435 D | exec:  stderr: Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will manage logical volume symlinks in device directory.
2020-12-23 13:05:50.508447 D | exec:  stderr: Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will obtain device list by scanning device directory.
2020-12-23 13:05:50.508458 D | exec:  stdout: Physical volume "/dev/sdb" successfully created.
2020-12-23 13:05:50.508474 D | exec:  stdout: Volume group "ceph-6a7578eb-fe6c-46b1-97e7-d841a6019817" successfully created
2020-12-23 13:05:50.508488 D | exec: Running command: /usr/sbin/lvcreate --yes -l 238467 -n osd-block-d85f60f4-7590-4938-b466-cbfcc80437b6 ceph-6a7578eb-fe6c-46b1-97e7-d841a6019817
2020-12-23 13:05:50.508501 D | exec:  stderr: Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will manage logical volume symlinks in device directory.
2020-12-23 13:05:50.508511 D | exec:  stderr: Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will obtain device list by scanning device directory.
2020-12-23 13:05:50.508528 D | exec:  stderr: Volume group "ceph-6a7578eb-fe6c-46b1-97e7-d841a6019817" has insufficient free space (238460 extents): 238467 required.
2020-12-23 13:05:50.508540 D | exec: --> Was unable to complete a new OSD, will rollback changes
2020-12-23 13:05:50.508552 D | exec: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it
2020-12-23 13:05:50.508563 D | exec:  stderr: purged osd.0
2020-12-23 13:05:50.518811 D | exec: Traceback (most recent call last):
2020-12-23 13:05:50.524201 D | exec:   File "/usr/sbin/ceph-volume", line 11, in <module>
2020-12-23 13:05:50.524312 D | exec:     load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
2020-12-23 13:05:50.524333 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 40, in __init__
2020-12-23 13:05:50.524385 D | exec:     self.main(self.argv)
2020-12-23 13:05:50.524400 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
2020-12-23 13:05:50.524412 D | exec:     return f(*a, **kw)
2020-12-23 13:05:50.524424 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 152, in main
2020-12-23 13:05:50.524436 D | exec:     terminal.dispatch(self.mapper, subcommand_args)
2020-12-23 13:05:50.524475 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
2020-12-23 13:05:50.524491 D | exec:     instance.main()
2020-12-23 13:05:50.524503 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/main.py", line 42, in main
2020-12-23 13:05:50.524515 D | exec:     terminal.dispatch(self.mapper, self.argv)
2020-12-23 13:05:50.524526 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
2020-12-23 13:05:50.524707 D | exec:     instance.main()
2020-12-23 13:05:50.525012 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
2020-12-23 13:05:50.533149 D | exec:     return func(*a, **kw)
2020-12-23 13:05:50.533438 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 415, in main
2020-12-23 13:05:50.538457 D | exec:     self._execute(plan)
2020-12-23 13:05:50.538700 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 431, in _execute
2020-12-23 13:05:50.538727 D | exec:     p.safe_prepare(argparse.Namespace(**args))
2020-12-23 13:05:50.542084 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare
2020-12-23 13:05:50.542839 D | exec:     self.prepare()
2020-12-23 13:05:50.543894 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
2020-12-23 13:05:50.549727 D | exec:     return func(*a, **kw)
2020-12-23 13:05:50.550450 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 363, in prepare
2020-12-23 13:05:50.550804 D | exec:     block_lv = self.prepare_data_device('block', osd_fsid)
2020-12-23 13:05:50.551087 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 221, in prepare_data_device
2020-12-23 13:05:50.551385 D | exec:     **kwargs)
2020-12-23 13:05:50.551587 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/api/lvm.py", line 949, in create_lv
2020-12-23 13:05:50.551741 D | exec:     process.run(command)
2020-12-23 13:05:50.551876 D | exec:   File "/usr/lib/python3.6/site-packages/ceph_volume/process.py", line 153, in run
2020-12-23 13:05:50.552009 D | exec:     raise RuntimeError(msg)
2020-12-23 13:05:50.552131 D | exec: RuntimeError: command returned non-zero exit status: 5
2020-12-23 13:05:50.658609 E | cephosd: [2020-12-23 13:05:43,321][ceph_volume.main][INFO  ] Running command: ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/sdb --report
[2020-12-23 13:05:43,326][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2020-12-23 13:05:43,385][ceph_volume.process][INFO  ] stdout /dev/loop0 /dev/loop0 loop
[2020-12-23 13:05:43,385][ceph_volume.process][INFO  ] stdout /dev/loop1 /dev/loop1 loop
[2020-12-23 13:05:43,385][ceph_volume.process][INFO  ] stdout /dev/loop2 /dev/loop2 loop
[2020-12-23 13:05:43,385][ceph_volume.process][INFO  ] stdout /dev/loop3 /dev/loop3 loop
[2020-12-23 13:05:43,386][ceph_volume.process][INFO  ] stdout /dev/loop4 /dev/loop4 loop
[2020-12-23 13:05:43,386][ceph_volume.process][INFO  ] stdout /dev/loop5 /dev/loop5 loop
[2020-12-23 13:05:43,386][ceph_volume.process][INFO  ] stdout /dev/loop6 /dev/loop6 loop
[2020-12-23 13:05:43,386][ceph_volume.process][INFO  ] stdout /dev/loop7 /dev/loop7 loop
[2020-12-23 13:05:43,386][ceph_volume.process][INFO  ] stdout /dev/sda   /dev/sda   disk
[2020-12-23 13:05:43,386][ceph_volume.process][INFO  ] stdout /dev/sda1  /dev/sda1  part
[2020-12-23 13:05:43,386][ceph_volume.process][INFO  ] stdout /dev/sda2  /dev/sda2  part
[2020-12-23 13:05:43,386][ceph_volume.process][INFO  ] stdout /dev/sdb   /dev/sdb   disk
[2020-12-23 13:05:43,398][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_path=/dev/sdb -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-12-23 13:05:43,602][ceph_volume.process][INFO  ] stderr Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will manage logical volume symlinks in device directory.
[2020-12-23 13:05:43,603][ceph_volume.process][INFO  ] stderr Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will obtain device list by scanning device directory.
[2020-12-23 13:05:43,605][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdb
[2020-12-23 13:05:43,633][ceph_volume.process][INFO  ] stdout NAME="sdb" KNAME="sdb" MAJ:MIN="8:16" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="ASM105x         " SIZE="931.5G" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="" PARTLABEL="" 
[2020-12-23 13:05:43,634][ceph_volume.process][INFO  ] Running command: /usr/sbin/blkid -p /dev/sdb
[2020-12-23 13:05:43,674][ceph_volume.process][INFO  ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sdb
[2020-12-23 13:05:43,888][ceph_volume.process][INFO  ] stderr Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will manage logical volume symlinks in device directory.
[2020-12-23 13:05:43,888][ceph_volume.process][INFO  ] stderr Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will obtain device list by scanning device directory.
[2020-12-23 13:05:43,889][ceph_volume.process][INFO  ] stderr Failed to find physical volume "/dev/sdb".
[2020-12-23 13:05:43,890][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/sdb
[2020-12-23 13:05:44,005][ceph_volume.process][INFO  ] stderr unable to read label for /dev/sdb: (2) No such file or directory
[2020-12-23 13:05:44,007][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/sdb
[2020-12-23 13:05:44,166][ceph_volume.process][INFO  ] stderr unable to read label for /dev/sdb: (2) No such file or directory
[2020-12-23 13:05:44,168][ceph_volume.process][INFO  ] Running command: /usr/sbin/udevadm info --query=property /dev/sdb
[2020-12-23 13:05:44,205][ceph_volume.process][INFO  ] stdout DEVLINKS=/dev/disk/by-id/scsi-35000000000000001 /dev/disk/by-id/wwn-0x5000000000000001 /dev/disk/by-id/scsi-SASMedia_ASM105x_4179649D1391 /dev/disk/by-path/platform-fd500000.pcie-pci-0000:01:00.0-usb-0:2:1.0-scsi-0:0:0:0
[2020-12-23 13:05:44,207][ceph_volume.process][INFO  ] stdout DEVNAME=/dev/sdb
[2020-12-23 13:05:44,207][ceph_volume.process][INFO  ] stdout DEVPATH=/devices/platform/scb/fd500000.pcie/pci0000:00/0000:00:00.0/0000:01:00.0/usb2/2-2/2-2:1.0/host1/target1:0:0/1:0:0:0/block/sdb
[2020-12-23 13:05:44,207][ceph_volume.process][INFO  ] stdout DEVTYPE=disk
[2020-12-23 13:05:44,207][ceph_volume.process][INFO  ] stdout DM_MULTIPATH_DEVICE_PATH=0
[2020-12-23 13:05:44,208][ceph_volume.process][INFO  ] stdout ID_BUS=scsi
[2020-12-23 13:05:44,208][ceph_volume.process][INFO  ] stdout ID_MODEL=ASM105x
[2020-12-23 13:05:44,208][ceph_volume.process][INFO  ] stdout ID_MODEL_ENC=ASM105x\x20\x20\x20\x20\x20\x20\x20\x20\x20
[2020-12-23 13:05:44,208][ceph_volume.process][INFO  ] stdout ID_PATH=platform-fd500000.pcie-pci-0000:01:00.0-usb-0:2:1.0-scsi-0:0:0:0
[2020-12-23 13:05:44,208][ceph_volume.process][INFO  ] stdout ID_PATH_TAG=platform-fd500000_pcie-pci-0000_01_00_0-usb-0_2_1_0-scsi-0_0_0_0
[2020-12-23 13:05:44,208][ceph_volume.process][INFO  ] stdout ID_REVISION=0
[2020-12-23 13:05:44,208][ceph_volume.process][INFO  ] stdout ID_SCSI=1
[2020-12-23 13:05:44,209][ceph_volume.process][INFO  ] stdout ID_SCSI_INQUIRY=1
[2020-12-23 13:05:44,209][ceph_volume.process][INFO  ] stdout ID_SERIAL=35000000000000001
[2020-12-23 13:05:44,209][ceph_volume.process][INFO  ] stdout ID_SERIAL_SHORT=5000000000000001
[2020-12-23 13:05:44,209][ceph_volume.process][INFO  ] stdout ID_TYPE=disk
[2020-12-23 13:05:44,209][ceph_volume.process][INFO  ] stdout ID_VENDOR=ASMedia
[2020-12-23 13:05:44,209][ceph_volume.process][INFO  ] stdout ID_VENDOR_ENC=ASMedia\x20
[2020-12-23 13:05:44,209][ceph_volume.process][INFO  ] stdout ID_WWN=0x5000000000000001
[2020-12-23 13:05:44,209][ceph_volume.process][INFO  ] stdout ID_WWN_WITH_EXTENSION=0x5000000000000001
[2020-12-23 13:05:44,210][ceph_volume.process][INFO  ] stdout MAJOR=8
[2020-12-23 13:05:44,210][ceph_volume.process][INFO  ] stdout MINOR=16
[2020-12-23 13:05:44,210][ceph_volume.process][INFO  ] stdout MPATH_SBIN_PATH=/sbin
[2020-12-23 13:05:44,210][ceph_volume.process][INFO  ] stdout SCSI_IDENT_LUN_NAA_REG=5000000000000001
[2020-12-23 13:05:44,210][ceph_volume.process][INFO  ] stdout SCSI_IDENT_SERIAL=4179649D1391
[2020-12-23 13:05:44,211][ceph_volume.process][INFO  ] stdout SCSI_MODEL=ASM105x
[2020-12-23 13:05:44,211][ceph_volume.process][INFO  ] stdout SCSI_MODEL_ENC=ASM105x\x20\x20\x20\x20\x20\x20\x20\x20\x20
[2020-12-23 13:05:44,211][ceph_volume.process][INFO  ] stdout SCSI_REVISION=0
[2020-12-23 13:05:44,211][ceph_volume.process][INFO  ] stdout SCSI_TPGS=0
[2020-12-23 13:05:44,211][ceph_volume.process][INFO  ] stdout SCSI_TYPE=disk
[2020-12-23 13:05:44,211][ceph_volume.process][INFO  ] stdout SCSI_VENDOR=ASMedia
[2020-12-23 13:05:44,211][ceph_volume.process][INFO  ] stdout SCSI_VENDOR_ENC=ASMedia\x20
[2020-12-23 13:05:44,212][ceph_volume.process][INFO  ] stdout SUBSYSTEM=block
[2020-12-23 13:05:44,212][ceph_volume.process][INFO  ] stdout TAGS=:systemd:
[2020-12-23 13:05:44,212][ceph_volume.process][INFO  ] stdout USEC_INITIALIZED=3826485
[2020-12-23 13:05:44,213][ceph_volume.devices.lvm.batch][WARNING] DEPRECATION NOTICE
[2020-12-23 13:05:44,214][ceph_volume.devices.lvm.batch][WARNING] You are using the legacy automatic disk sorting behavior
[2020-12-23 13:05:44,214][ceph_volume.devices.lvm.batch][WARNING] The Pacific release will change the default to --no-auto
[2020-12-23 13:05:44,214][ceph_volume.devices.lvm.batch][DEBUG ] passed data devices: 1 physical, 0 LVM
[2020-12-23 13:05:44,215][ceph_volume.devices.lvm.batch][DEBUG ] relative data size: 1.0
[2020-12-23 13:05:45,599][ceph_volume.main][INFO  ] Running command: ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 /dev/sdb
[2020-12-23 13:05:45,605][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
[2020-12-23 13:05:45,669][ceph_volume.process][INFO  ] stdout /dev/loop0 /dev/loop0 loop
[2020-12-23 13:05:45,670][ceph_volume.process][INFO  ] stdout /dev/loop1 /dev/loop1 loop
[2020-12-23 13:05:45,670][ceph_volume.process][INFO  ] stdout /dev/loop2 /dev/loop2 loop
[2020-12-23 13:05:45,670][ceph_volume.process][INFO  ] stdout /dev/loop3 /dev/loop3 loop
[2020-12-23 13:05:45,670][ceph_volume.process][INFO  ] stdout /dev/loop4 /dev/loop4 loop
[2020-12-23 13:05:45,670][ceph_volume.process][INFO  ] stdout /dev/loop5 /dev/loop5 loop
[2020-12-23 13:05:45,671][ceph_volume.process][INFO  ] stdout /dev/loop6 /dev/loop6 loop
[2020-12-23 13:05:45,671][ceph_volume.process][INFO  ] stdout /dev/loop7 /dev/loop7 loop
[2020-12-23 13:05:45,671][ceph_volume.process][INFO  ] stdout /dev/sda   /dev/sda   disk
[2020-12-23 13:05:45,671][ceph_volume.process][INFO  ] stdout /dev/sda1  /dev/sda1  part
[2020-12-23 13:05:45,671][ceph_volume.process][INFO  ] stdout /dev/sda2  /dev/sda2  part
[2020-12-23 13:05:45,671][ceph_volume.process][INFO  ] stdout /dev/sdb   /dev/sdb   disk
[2020-12-23 13:05:45,687][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_path=/dev/sdb -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-12-23 13:05:45,923][ceph_volume.process][INFO  ] stderr Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will manage logical volume symlinks in device directory.
[2020-12-23 13:05:45,924][ceph_volume.process][INFO  ] stderr Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will obtain device list by scanning device directory.
[2020-12-23 13:05:45,925][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdb
[2020-12-23 13:05:45,949][ceph_volume.process][INFO  ] stdout NAME="sdb" KNAME="sdb" MAJ:MIN="8:16" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="ASM105x         " SIZE="931.5G" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="" PARTLABEL="" 
[2020-12-23 13:05:45,951][ceph_volume.process][INFO  ] Running command: /usr/sbin/blkid -p /dev/sdb
[2020-12-23 13:05:45,990][ceph_volume.process][INFO  ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sdb
[2020-12-23 13:05:46,194][ceph_volume.process][INFO  ] stderr Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will manage logical volume symlinks in device directory.
[2020-12-23 13:05:46,195][ceph_volume.process][INFO  ] stderr Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will obtain device list by scanning device directory.
[2020-12-23 13:05:46,195][ceph_volume.process][INFO  ] stderr Failed to find physical volume "/dev/sdb".
[2020-12-23 13:05:46,197][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/sdb
[2020-12-23 13:05:46,295][ceph_volume.process][INFO  ] stderr unable to read label for /dev/sdb: (2) No such file or directory
[2020-12-23 13:05:46,297][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-bluestore-tool show-label --dev /dev/sdb
[2020-12-23 13:05:46,390][ceph_volume.process][INFO  ] stderr unable to read label for /dev/sdb: (2) No such file or directory
[2020-12-23 13:05:46,392][ceph_volume.process][INFO  ] Running command: /usr/sbin/udevadm info --query=property /dev/sdb
[2020-12-23 13:05:46,421][ceph_volume.process][INFO  ] stdout DEVLINKS=/dev/disk/by-id/scsi-SASMedia_ASM105x_4179649D1391 /dev/disk/by-id/wwn-0x5000000000000001 /dev/disk/by-id/scsi-35000000000000001 /dev/disk/by-path/platform-fd500000.pcie-pci-0000:01:00.0-usb-0:2:1.0-scsi-0:0:0:0
[2020-12-23 13:05:46,421][ceph_volume.process][INFO  ] stdout DEVNAME=/dev/sdb
[2020-12-23 13:05:46,422][ceph_volume.process][INFO  ] stdout DEVPATH=/devices/platform/scb/fd500000.pcie/pci0000:00/0000:00:00.0/0000:01:00.0/usb2/2-2/2-2:1.0/host1/target1:0:0/1:0:0:0/block/sdb
[2020-12-23 13:05:46,422][ceph_volume.process][INFO  ] stdout DEVTYPE=disk
[2020-12-23 13:05:46,422][ceph_volume.process][INFO  ] stdout DM_MULTIPATH_DEVICE_PATH=0
[2020-12-23 13:05:46,422][ceph_volume.process][INFO  ] stdout ID_BUS=scsi
[2020-12-23 13:05:46,422][ceph_volume.process][INFO  ] stdout ID_MODEL=ASM105x
[2020-12-23 13:05:46,423][ceph_volume.process][INFO  ] stdout ID_MODEL_ENC=ASM105x\x20\x20\x20\x20\x20\x20\x20\x20\x20
[2020-12-23 13:05:46,423][ceph_volume.process][INFO  ] stdout ID_PATH=platform-fd500000.pcie-pci-0000:01:00.0-usb-0:2:1.0-scsi-0:0:0:0
[2020-12-23 13:05:46,423][ceph_volume.process][INFO  ] stdout ID_PATH_TAG=platform-fd500000_pcie-pci-0000_01_00_0-usb-0_2_1_0-scsi-0_0_0_0
[2020-12-23 13:05:46,423][ceph_volume.process][INFO  ] stdout ID_REVISION=0
[2020-12-23 13:05:46,423][ceph_volume.process][INFO  ] stdout ID_SCSI=1
[2020-12-23 13:05:46,424][ceph_volume.process][INFO  ] stdout ID_SCSI_INQUIRY=1
[2020-12-23 13:05:46,424][ceph_volume.process][INFO  ] stdout ID_SERIAL=35000000000000001
[2020-12-23 13:05:46,424][ceph_volume.process][INFO  ] stdout ID_SERIAL_SHORT=5000000000000001
[2020-12-23 13:05:46,424][ceph_volume.process][INFO  ] stdout ID_TYPE=disk
[2020-12-23 13:05:46,424][ceph_volume.process][INFO  ] stdout ID_VENDOR=ASMedia
[2020-12-23 13:05:46,425][ceph_volume.process][INFO  ] stdout ID_VENDOR_ENC=ASMedia\x20
[2020-12-23 13:05:46,425][ceph_volume.process][INFO  ] stdout ID_WWN=0x5000000000000001
[2020-12-23 13:05:46,425][ceph_volume.process][INFO  ] stdout ID_WWN_WITH_EXTENSION=0x5000000000000001
[2020-12-23 13:05:46,425][ceph_volume.process][INFO  ] stdout MAJOR=8
[2020-12-23 13:05:46,425][ceph_volume.process][INFO  ] stdout MINOR=16
[2020-12-23 13:05:46,426][ceph_volume.process][INFO  ] stdout MPATH_SBIN_PATH=/sbin
[2020-12-23 13:05:46,426][ceph_volume.process][INFO  ] stdout SCSI_IDENT_LUN_NAA_REG=5000000000000001
[2020-12-23 13:05:46,426][ceph_volume.process][INFO  ] stdout SCSI_IDENT_SERIAL=4179649D1391
[2020-12-23 13:05:46,426][ceph_volume.process][INFO  ] stdout SCSI_MODEL=ASM105x
[2020-12-23 13:05:46,426][ceph_volume.process][INFO  ] stdout SCSI_MODEL_ENC=ASM105x\x20\x20\x20\x20\x20\x20\x20\x20\x20
[2020-12-23 13:05:46,426][ceph_volume.process][INFO  ] stdout SCSI_REVISION=0
[2020-12-23 13:05:46,426][ceph_volume.process][INFO  ] stdout SCSI_TPGS=0
[2020-12-23 13:05:46,427][ceph_volume.process][INFO  ] stdout SCSI_TYPE=disk
[2020-12-23 13:05:46,427][ceph_volume.process][INFO  ] stdout SCSI_VENDOR=ASMedia
[2020-12-23 13:05:46,427][ceph_volume.process][INFO  ] stdout SCSI_VENDOR_ENC=ASMedia\x20
[2020-12-23 13:05:46,427][ceph_volume.process][INFO  ] stdout SUBSYSTEM=block
[2020-12-23 13:05:46,427][ceph_volume.process][INFO  ] stdout TAGS=:systemd:
[2020-12-23 13:05:46,427][ceph_volume.process][INFO  ] stdout USEC_INITIALIZED=3826485
[2020-12-23 13:05:46,429][ceph_volume.devices.lvm.batch][WARNING] DEPRECATION NOTICE
[2020-12-23 13:05:46,429][ceph_volume.devices.lvm.batch][WARNING] You are using the legacy automatic disk sorting behavior
[2020-12-23 13:05:46,429][ceph_volume.devices.lvm.batch][WARNING] The Pacific release will change the default to --no-auto
[2020-12-23 13:05:46,430][ceph_volume.devices.lvm.batch][DEBUG ] passed data devices: 1 physical, 0 LVM
[2020-12-23 13:05:46,430][ceph_volume.devices.lvm.batch][DEBUG ] relative data size: 1.0
[2020-12-23 13:05:46,431][ceph_volume.api.lvm][WARNING] device is not part of ceph: None
[2020-12-23 13:05:46,433][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph-authtool --gen-print-key
[2020-12-23 13:05:46,497][ceph_volume.process][INFO  ] stdout AQCqQONfChhhHRAAfQIvy+hSD5+6lJvgI62CXg==
[2020-12-23 13:05:46,499][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new d85f60f4-7590-4938-b466-cbfcc80437b6
[2020-12-23 13:05:48,209][ceph_volume.process][INFO  ] stdout 0
[2020-12-23 13:05:48,210][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdb
[2020-12-23 13:05:48,237][ceph_volume.process][INFO  ] stdout NAME="sdb" KNAME="sdb" MAJ:MIN="8:16" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="ASM105x         " SIZE="931.5G" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="" PARTLABEL="" 
[2020-12-23 13:05:48,239][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdb
[2020-12-23 13:05:48,266][ceph_volume.process][INFO  ] stdout NAME="sdb" KNAME="sdb" MAJ:MIN="8:16" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="ASM105x         " SIZE="931.5G" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="" PARTLABEL="" 
[2020-12-23 13:05:48,268][ceph_volume.devices.lvm.prepare][DEBUG ] data device size: 931.51 GB
[2020-12-23 13:05:48,269][ceph_volume.process][INFO  ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sdb
[2020-12-23 13:05:48,474][ceph_volume.process][INFO  ] stderr Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will manage logical volume symlinks in device directory.
[2020-12-23 13:05:48,475][ceph_volume.process][INFO  ] stderr Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will obtain device list by scanning device directory.
[2020-12-23 13:05:48,475][ceph_volume.process][INFO  ] stderr Failed to find physical volume "/dev/sdb".
[2020-12-23 13:05:48,477][ceph_volume.process][INFO  ] Running command: /usr/sbin/vgcreate --force --yes ceph-6a7578eb-fe6c-46b1-97e7-d841a6019817 /dev/sdb
[2020-12-23 13:05:48,499][ceph_volume.process][INFO  ] stderr Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will manage logical volume symlinks in device directory.
[2020-12-23 13:05:48,500][ceph_volume.process][INFO  ] stderr Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will obtain device list by scanning device directory.
[2020-12-23 13:05:48,697][ceph_volume.process][INFO  ] stdout Physical volume "/dev/sdb" successfully created.
[2020-12-23 13:05:48,717][ceph_volume.process][INFO  ] stdout Volume group "ceph-6a7578eb-fe6c-46b1-97e7-d841a6019817" successfully created
[2020-12-23 13:05:48,765][ceph_volume.process][INFO  ] Running command: /usr/sbin/vgs --noheadings --readonly --units=b --nosuffix --separator=";" -S vg_name=ceph-6a7578eb-fe6c-46b1-97e7-d841a6019817 -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size
[2020-12-23 13:05:48,970][ceph_volume.process][INFO  ] stdout ceph-6a7578eb-fe6c-46b1-97e7-d841a6019817";"1";"0";"wz--n-";"238460";"238460";"4194304
[2020-12-23 13:05:48,971][ceph_volume.process][INFO  ] stderr Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will manage logical volume symlinks in device directory.
[2020-12-23 13:05:48,972][ceph_volume.process][INFO  ] stderr Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will obtain device list by scanning device directory.
[2020-12-23 13:05:48,973][ceph_volume.api.lvm][DEBUG ] size was passed: 931.51 GB -> 238467
[2020-12-23 13:05:48,974][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvcreate --yes -l 238467 -n osd-block-d85f60f4-7590-4938-b466-cbfcc80437b6 ceph-6a7578eb-fe6c-46b1-97e7-d841a6019817
[2020-12-23 13:05:48,997][ceph_volume.process][INFO  ] stderr Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will manage logical volume symlinks in device directory.
[2020-12-23 13:05:48,998][ceph_volume.process][INFO  ] stderr Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will obtain device list by scanning device directory.
[2020-12-23 13:05:49,139][ceph_volume.process][INFO  ] stderr Volume group "ceph-6a7578eb-fe6c-46b1-97e7-d841a6019817" has insufficient free space (238460 extents): 238467 required.
[2020-12-23 13:05:49,187][ceph_volume.devices.lvm.prepare][ERROR ] lvm prepare was unable to complete
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare
    self.prepare()
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 363, in prepare
    block_lv = self.prepare_data_device('block', osd_fsid)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 221, in prepare_data_device
    **kwargs)
  File "/usr/lib/python3.6/site-packages/ceph_volume/api/lvm.py", line 949, in create_lv
    process.run(command)
  File "/usr/lib/python3.6/site-packages/ceph_volume/process.py", line 153, in run
    raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 5
[2020-12-23 13:05:49,192][ceph_volume.devices.lvm.prepare][INFO  ] will rollback OSD ID creation
[2020-12-23 13:05:49,193][ceph_volume.process][INFO  ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it
[2020-12-23 13:05:50,447][ceph_volume.process][INFO  ] stderr purged osd.0
[2020-12-23 13:05:50,503][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 152, in main
    terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/main.py", line 42, in main
    terminal.dispatch(self.mapper, self.argv)
  File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 415, in main
    self._execute(plan)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 431, in _execute
    p.safe_prepare(argparse.Namespace(**args))
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare
    self.prepare()
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 363, in prepare
    block_lv = self.prepare_data_device('block', osd_fsid)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 221, in prepare_data_device
    **kwargs)
  File "/usr/lib/python3.6/site-packages/ceph_volume/api/lvm.py", line 949, in create_lv
    process.run(command)
  File "/usr/lib/python3.6/site-packages/ceph_volume/process.py", line 153, in run
    raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 5
failed to configure devices: failed to initialize devices: failed ceph-volume: exit status 1

#1

Updated by Thomas Brandstetter over 3 years ago

Seems like a duplicate of https://tracker.ceph.com/issues/47758

By the way, I tried this in a rook-ceph cluster deployed with Helm 3 (latest release).

#2

Updated by Konstantin Shalygin 5 months ago

  • Status changed from New to Duplicate

Closed as a duplicate of #47758 (now resolved).
