Bug #50377

ceph-volume osd create lv failure

Added by 伟 宋 about 3 years ago. Updated almost 3 years ago.

Status: New
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

2021-04-15 03:56:23.391697 D | exec:
2021-04-15 03:56:23.391703 D | exec: Total OSDs: 3
2021-04-15 03:56:23.391707 D | exec:
2021-04-15 03:56:23.391714 D | exec: Type Path LV Size % of device
2021-04-15 03:56:23.391718 D | exec: ----------------------------------------------------------------------------------------------------
2021-04-15 03:56:23.391721 D | exec: data /dev/sdc 1.75 TB 100.00%
2021-04-15 03:56:23.391726 D | exec: block_db /dev/sdb 32.00 GB 0.00%
2021-04-15 03:56:23.391732 D | exec: ----------------------------------------------------------------------------------------------------
2021-04-15 03:56:23.391736 D | exec: data /dev/sdf 1.75 TB 100.00%
2021-04-15 03:56:23.391739 D | exec: block_db /dev/sdb 32.00 GB 0.00%
2021-04-15 03:56:23.391742 D | exec: ----------------------------------------------------------------------------------------------------
2021-04-15 03:56:23.391750 D | exec: data /dev/sdd 1.75 TB 100.00%
2021-04-15 03:56:23.391753 D | exec: block_db /dev/sdb 32.00 GB 0.00%
2021-04-15 03:56:23.407468 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 --crush-device-class ssd --block-db-size 34359738368 /dev/sdc /dev/sdf /dev/sdd --db-devices /dev/sdb --report --format json
2021-04-15 03:56:26.762936 D | cephosd: ceph-volume reports: [{"block_db": "/dev/sdb", "encryption": "None", "data": "/dev/sdc", "data_size": "1.75 TB", "block_db_size": "32.00 GB"}, {"block_db": "/dev/sdb", "encryption": "None", "data": "/dev/sdf", "data_size": "1.75 TB", "block_db_size": "32.00 GB"}, {"block_db": "/dev/sdb", "encryption": "None", "data": "/dev/sdd", "data_size": "1.75 TB", "block_db_size": "32.00 GB"}]
2021-04-15 03:56:26.763112 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm batch --prepare --bluestore --yes --osds-per-device 1 --crush-device-class ssd --block-db-size 34359738368 /dev/sdc /dev/sdf /dev/sdd --db-devices /dev/sdb
2021-04-15 03:56:30.087157 D | exec: --> passed data devices: 3 physical, 0 LVM
2021-04-15 03:56:30.087385 D | exec: --> relative data size: 1.0
2021-04-15 03:56:30.087980 D | exec: --> passed block_db devices: 1 physical, 0 LVM
2021-04-15 03:56:30.089324 D | exec: Running command: /usr/bin/ceph-authtool --gen-print-key
2021-04-15 03:56:30.210751 D | exec: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new fc035151-98ec-43a2-9dea-9e25184d2aa0
2021-04-15 03:56:31.212137 D | exec: Running command: /usr/sbin/vgcreate --force --yes ceph-78171a91-95a6-4e03-a12a-d9b4f46a98ea /dev/sdc
2021-04-15 03:56:31.283013 D | exec: stderr: Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will manage logical volume symlinks in device directory.
2021-04-15 03:56:31.283999 D | exec: stderr: Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will obtain device list by scanning device directory.
2021-04-15 03:56:31.285178 D | exec: stderr: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
2021-04-15 03:56:31.358620 D | exec: stdout: Physical volume "/dev/sdc" successfully created.
2021-04-15 03:56:31.386433 D | exec: stdout: Volume group "ceph-78171a91-95a6-4e03-a12a-d9b4f46a98ea" successfully created
2021-04-15 03:56:31.604681 D | exec: Running command: /usr/sbin/lvcreate --yes -l 457855 -n osd-block-fc035151-98ec-43a2-9dea-9e25184d2aa0 ceph-78171a91-95a6-4e03-a12a-d9b4f46a98ea
2021-04-15 03:56:31.676700 D | exec: stderr: Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will manage logical volume symlinks in device directory.
2021-04-15 03:56:31.677576 D | exec: stderr: Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will obtain device list by scanning device directory.
2021-04-15 03:56:31.679757 D | exec: stderr: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
2021-04-15 03:56:31.698186 D | exec: stderr: Volume group "ceph-78171a91-95a6-4e03-a12a-d9b4f46a98ea" has insufficient free space (457854 extents): 457855 required.
2021-04-15 03:56:31.768772 D | exec: --> Was unable to complete a new OSD, will rollback changes
2021-04-15 03:56:31.769261 D | exec: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it
2021-04-15 03:56:32.429305 D | exec: stderr: purged osd.0
2021-04-15 03:56:32.459448 D | exec: Traceback (most recent call last):

root@storage01:~# pvdisplay /dev/sdc
--- Physical volume ---
PV Name /dev/sdc
VG Name ceph-78171a91-95a6-4e03-a12a-d9b4f46a98ea
PV Size <1.75 TiB / not usable <4.34 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 457854
Free PE 457854
Allocated PE 0
PV UUID 11zucc-rm5G-WVbG-Gywx-2aeA-pftN-mlqeus

root@storage01:~# pvdisplay /dev/sdd
--- Physical volume ---
PV Name /dev/sdd
VG Name ceph-block-cca4276e-2252-45da-8c77-f4083daf0bb3
PV Size <2.73 TiB / not usable 4.46 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 715396
Free PE 0
Allocated PE 715396
PV UUID mTRNtw-ZQB6-QGgd-sjxH-gApt-KU9n-wX1bTK

root@storage01:~# pvdisplay /dev/sdc
--- Physical volume ---
PV Name /dev/sdc
VG Name ceph-block-b6fe5afa-6c6f-42f6-bf26-624619fc1461
PV Size <1.75 TiB / not usable <4.34 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 457854
Free PE 0
Allocated PE 457854
PV UUID EP8Gy3-zSd0-l6zZ-NpnK-UnBF-58FE-uaKJBw

root@storage01:~# pvdisplay /dev/sdb
--- Physical volume ---
PV Name /dev/sdb
VG Name ceph-block-dbs-81bd8f56-3c47-4c0a-9378-c17a0306ffbb
PV Size 894.25 GiB / not usable <3.34 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 228928
Free PE 118592
Allocated PE 110336
PV UUID PvZmj4-SCQK-n5lK-vEu9-tQQS-An2w-SyGJ1s

LVM reports the PV's unusable tail (<4.34 MiB) as larger than one 4 MiB extent, so the VG ends up with 457854 extents, but ceph-volume calculates the required extent count from the raw device size, which yields 457855.
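A minimal sketch of the mismatch in Python, using only numbers that appear in this report (the raw byte count from fdisk, the 4 MiB PE size and 457854 Total PE from pvdisplay); size_to_extents is an illustrative stand-in, not ceph-volume's API:

    EXTENT_SIZE = 4 * 1024 * 1024  # pvdisplay: PE Size 4.00 MiB

    def size_to_extents(size_bytes):
        # Size-based calculation: divide the raw byte count by the extent
        # size and truncate; the PV's unusable tail is never considered.
        return int(size_bytes / EXTENT_SIZE)

    raw_device_bytes = 1920383410176   # fdisk: Disk /dev/sdg
    requested = size_to_extents(raw_device_bytes)   # 457855
    available = 457854                 # pvdisplay: Total PE / Free PE

    # lvcreate -l 457855 then fails: the VG is one extent short.
    assert requested == 457855
    assert requested - available == 1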

#1

Updated by 伟 宋 about 3 years ago

Disk /dev/sdg: 1.8 TiB, 1920383410176 bytes, 3750748848 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

    >>> int(1920383410176.0 / (4096 * 1024))  # raw device bytes / 4 MiB extent size
    457855

The extent calculation in ceph-volume (this excerpt appears to be from create_lv() in ceph_volume/api/lvm.py) derives the extent count from the requested size alone:

    if size:
        # bytes_to_extents() works from the byte total alone; nothing caps
        # the result at the VG's actual free extent count.
        extents = vg.bytes_to_extents(size)
        logger.debug('size was passed: {} -> {}'.format(size, extents))
    elif slots and not extents:
        extents = vg.slots_to_extents(slots)
        logger.debug('slots was passed: {} -> {}'.format(slots, extents))

    if extents:
        command = [
            'lvcreate',
            '--yes',
            '-l',
            '{}'.format(extents),
            '-n', name, vg.vg_name
        ]
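
One possible guard, sketched here as an assumption and not the actual upstream fix: clamp the size-derived extent count to the VG's free extents before building the lvcreate command. vg.free below stands in for however the VolumeGroup object exposes its free extent count (what pvdisplay shows as Free PE):

    if size:
        extents = vg.bytes_to_extents(size)
        logger.debug('size was passed: {} -> {}'.format(size, extents))
        # Hypothetical clamp; vg.free is assumed to hold the VG's free
        # extent count. Without it, a count derived from the raw device
        # size can exceed what LVM actually left usable.
        if extents > vg.free:
            logger.debug('clamping {} extents to {} free extents'.format(
                extents, vg.free))
            extents = vg.free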

#2

Updated by 伟 宋 about 3 years ago

2021-04-15 03:56:31.698186 D | exec: stderr: Volume group "ceph-78171a91-95a6-4e03-a12a-d9b4f46a98ea" has insufficient free space (457854 extents): 457855 required.

The VG and PV report Total PE 457854.

But the value calculated from the raw device capacity is 457855.

#3

Updated by Loïc Dachary about 3 years ago

  • Target version changed from v14.2.20 to v14.2.21
#4

Updated by Loïc Dachary almost 3 years ago

  • Target version deleted (v14.2.21)