Bug #47758 (closed): fail to create OSDs because the requested extent is too large

Added by Kiefer Chang over 3 years ago. Updated about 3 years ago.

Status:
Resolved
Priority:
Normal
Assignee:
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
pacific,octopus,nautilus
Regression:
No
Severity:
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

This was observed in this e2e test failure: https://tracker.ceph.com/issues/47742

Some context about this e2e test:
- We need additional disks to test the cephadm OSD creation feature, but there are no spare disks on the smithi nodes, so:
- 3 sparse files are created and exported as iSCSI target LUNs (LIO).
- After logging into the target, 3 disks appear on the host.
- The Dashboard then asks cephadm to create 3 new OSDs with these disks.
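The sparse-file step above can be sketched as follows. This is a minimal illustration, not the test's actual code: the directory and file names are made up, and the LIO export / iSCSI login steps are environment-specific and only noted in comments.

```python
import os

# Illustrative scratch location, not a path from the real e2e test.
LUN_DIR = "/tmp/e2e-luns"
SIZE = 15 * 1024**3  # 15 GiB, matching the 15G LUN size in the log

os.makedirs(LUN_DIR, exist_ok=True)
for i in (1, 2, 3):
    path = os.path.join(LUN_DIR, f"lun{i}.img")
    # Create a sparse file: the logical size is 15 GiB, but no data
    # blocks are allocated until something writes to it.
    with open(path, "wb") as f:
        f.truncate(SIZE)

# In the real test, these files are then exported as iSCSI LUNs via LIO
# and the host logs in (e.g. with iscsiadm), after which /dev/sdX devices
# appear; those steps are omitted here as they depend on the environment.
for i in (1, 2, 3):
    st = os.stat(os.path.join(LUN_DIR, f"lun{i}.img"))
    print(st.st_size)  # logical size; allocated blocks stay near zero
```

Because the files are sparse, three 15 GiB "disks" can be backed by a node with far less than 45 GiB of free space.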

[2020-10-06 06:55:10,428][ceph_volume.main][INFO  ] Running command: ceph-volume  lvm batch --no-auto /dev/sdf --yes --no-systemd
...

[2020-10-06 06:55:10,958][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdf
[2020-10-06 06:55:10,961][ceph_volume.process][INFO  ] stdout NAME="sdf" KNAME="sdf" MAJ:MIN="8:80" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="lun1            " SIZE="15G" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="" PARTLABEL="" 
[2020-10-06 06:55:10,961][ceph_volume.devices.lvm.prepare][DEBUG ] data device size: 15.00 GB
[2020-10-06 06:55:10,961][ceph_volume.process][INFO  ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sdf
[2020-10-06 06:55:10,973][ceph_volume.process][INFO  ] stderr Failed to find physical volume "/dev/sdf".
[2020-10-06 06:55:10,974][ceph_volume.process][INFO  ] Running command: /usr/sbin/vgcreate --force --yes ceph-755d5f48-33f7-4bc2-b909-d8e2bea211d7 /dev/sdf
[2020-10-06 06:55:10,986][ceph_volume.process][INFO  ] stdout Physical volume "/dev/sdf" successfully created.
[2020-10-06 06:55:10,999][ceph_volume.process][INFO  ] stdout Volume group "ceph-755d5f48-33f7-4bc2-b909-d8e2bea211d7" successfully created
[2020-10-06 06:55:11,007][ceph_volume.process][INFO  ] Running command: /usr/sbin/vgs --noheadings --readonly --units=b --nosuffix --separator=";" -S vg_name=ceph-755d5f48-33f7-4bc2-b909-d8e2bea211d7 -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size
[2020-10-06 06:55:11,026][ceph_volume.process][INFO  ] stdout ceph-755d5f48-33f7-4bc2-b909-d8e2bea211d7";"1";"0";"wz--n-";"3838";"3838";"4194304
[2020-10-06 06:55:11,026][ceph_volume.api.lvm][DEBUG ] size was passed: 15.00 GB -> 3839
[2020-10-06 06:55:11,027][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvcreate --yes -l 3839 -n osd-block-21867cd0-f8ad-4f5e-ad41-18510386c0c5 ceph-755d5f48-33f7-4bc2-b909-d8e2bea211d7
[2020-10-06 06:55:11,062][ceph_volume.process][INFO  ] stderr Volume group "ceph-755d5f48-33f7-4bc2-b909-d8e2bea211d7" has insufficient free space (3838 extents): 3839 required.
[2020-10-06 06:55:11,066][ceph_volume.devices.lvm.prepare][ERROR ] lvm prepare was unable to complete
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 250, in safe_prepare
    self.prepare()
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 361, in prepare
    block_lv = self.prepare_data_device('block', osd_fsid)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 219, in prepare_data_device
    **kwargs)
  File "/usr/lib/python3.6/site-packages/ceph_volume/api/lvm.py", line 949, in create_lv
    process.run(command)
  File "/usr/lib/python3.6/site-packages/ceph_volume/process.py", line 153, in run
    raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 5

The lvcreate command asks for one more extent than the volume group has free (3839 requested vs. 3838 available in this example).
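A minimal sketch of the arithmetic behind the failure (not ceph-volume's actual code; the 4 MiB extent size and the free count of 3838 are taken from the `vgs` output in the log): deriving the LV extent count by rounding the raw device size up to whole extents ignores the space LVM reserves for its own metadata, so the request can exceed what the volume group reports as free. The log shows 3839 requested, so ceph-volume's exact rounding differs slightly from this sketch, but the overshoot is the same.

```python
EXTENT_SIZE = 4 * 1024 * 1024      # vg_extent_size from the vgs output: 4 MiB

def extents_from_size(size_bytes):
    """Ceiling-divide a byte size into extents (the overshooting approach)."""
    return -(-size_bytes // EXTENT_SIZE)

disk_size_bytes = 15 * 1024**3     # the 15G LUN reported by lsblk
vg_free_extents = 3838             # vg_free_count reported by vgs

requested = extents_from_size(disk_size_bytes)
print(requested, vg_free_extents)
# requested exceeds the free count, so lvcreate exits with status 5
```

Sizing the LV from the free extent count that LVM itself reports (e.g. via `vg_free_count`, or using `lvcreate -l 100%FREE`) avoids this overshoot, since the request can never exceed what the VG actually has.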


Related issues (5: 0 open, 5 closed)

- Related to Dashboard - Bug #47742: cephadm/test_dashboard_e2e.sh: OSDs are not created (Resolved, Kiefer Chang)
- Has duplicate ceph-volume - Bug #48383: OSD creation fails because volume group has insufficient free space to place a logical volume (Duplicate, Juan Miguel Olmo Martínez)
- Copied to ceph-volume - Backport #49140: nautilus: fail to create OSDs because the requested extent is too large (Resolved, Jan Fajerski)
- Copied to ceph-volume - Backport #49141: octopus: fail to create OSDs because the requested extent is too large (Resolved, Jan Fajerski)
- Copied to ceph-volume - Backport #49142: pacific: fail to create OSDs because the requested extent is too large (Resolved, Jan Fajerski)
