Bug #48383


OSD creation fails because volume group has insufficient free space to place a logical volume

Added by Juan Miguel Olmo Martínez over 3 years ago. Updated over 3 years ago.

Status: Duplicate
Priority: Normal
Target version: -
% Done: 0%
Source: Q/A
Tags:
Backport:
Regression: No
Severity: 2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Error when trying to create an OSD using the Ceph orchestrator. After several tests we were able to determine that the problem occurs because ceph-volume tries to create a logical volume about 1 MiB larger than the volume group in which it must be placed — one physical extent more than the VG has free.
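The arithmetic behind the failure can be sketched as follows. This is a hypothetical illustration, not ceph-volume's actual code; the device size is an assumed value for a nominal 240 GB SSD that matches the "<223.57 GiB" shown by pvscan later in this report.

```python
# Sketch of the off-by-one: sizing an LV from raw device capacity
# instead of the VG's reported free extents. Hypothetical code, not
# ceph-volume's actual implementation.

EXTENT_SIZE = 4 * 1024 * 1024  # LVM's default physical extent size: 4 MiB

# Assumed raw capacity of a nominal 240 GB SSD (~223.57 GiB, matching
# the pvscan output in this report).
device_bytes = 240_057_409_536

# Naive sizing: divide the raw device capacity by the extent size.
requested_extents = device_bytes // EXTENT_SIZE  # 57234

# LVM reserves roughly 1 MiB per PV for metadata, so the VG actually
# has one extent less available than the raw capacity suggests.
vg_free_extents = 57233  # the count reported by the lvcreate error below

print(requested_extents)                    # 57234
print(requested_extents > vg_free_extents)  # True -> lvcreate fails
```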

Zapping the device "/dev/sdc"
--------------------------------

> [ubuntu@clara010 ~]$ 
> [ubuntu@clara010 ~]$ sudo /home/ubuntu/cephtest/cephadm --image registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a994402-2fac-11eb-95d0-002590fc2776 -- ceph orch device zap clara010 /dev/sdc --force
> /bin/podman:stderr --> Zapping: /dev/sdc
> /bin/podman:stderr Running command: /usr/bin/dd if=/dev/zero of=/dev/sdc bs=1M count=10 conv=fsync
> /bin/podman:stderr  stderr: 10+0 records in
> /bin/podman:stderr 10+0 records out
> /bin/podman:stderr 10485760 bytes (10 MB, 10 MiB) copied, 0.0356284 s, 294 MB/s
> /bin/podman:stderr --> Zapping successful for: <Raw Device: /dev/sdc>

Run pvscan to make sure the physical volume doesn't exist
------------------------------------------------------

> [ubuntu@clara010 ~]$ sudo lvm
> lvm> pvscan
>   PV /dev/sdb   VG ceph-2d288c9e-e39b-4bf3-8d8c-95137e24870a   lvm2 [<223.57 GiB / <223.57 GiB free]
>   Total: 1 [<223.57 GiB] / in use: 1 [<223.57 GiB] / in no VG: 0 [0   ]
> lvm> exit
>   Exiting.

Add OSD daemon using specific device from host
-----------------------------------------------


> [ubuntu@clara010 ~]$ sudo /home/ubuntu/cephtest/cephadm --image registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a994402-2fac-11eb-95d0-002590fc2776 -- ceph orch daemon add osd clara010:/dev/sdc
> Error EINVAL: Traceback (most recent call last):
>   File "/usr/share/ceph/mgr/mgr_module.py", line 1195, in _handle_command
>     return self.handle_command(inbuf, cmd)
>   File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 141, in handle_command
>     return dispatch[cmd['prefix']].call(self, cmd, inbuf)
>   File "/usr/share/ceph/mgr/mgr_module.py", line 332, in call
>     return self.func(mgr, **kwargs)
>   File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 103, in <lambda>
>     wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs)
>   File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 92, in wrapper
>     return func(*args, **kwargs)
>   File "/usr/share/ceph/mgr/orchestrator/module.py", line 753, in _daemon_add_osd
>     raise_if_exception(completion)
>   File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 643, in raise_if_exception
>     raise e
> RuntimeError: cephadm exited with an error code: 1, stderr:/bin/podman:stderr --> passed data devices: 1 physical, 0 LVM
> /bin/podman:stderr --> relative data size: 1.0
> /bin/podman:stderr Running command: /usr/bin/ceph-authtool --gen-print-key
> /bin/podman:stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 124ea5a7-cfe6-4db6-bf84-c0b32fea5588
> /bin/podman:stderr Running command: /usr/sbin/vgcreate --force --yes ceph-ebbcc39c-ce15-4e0f-95cc-2308ec8fca42 /dev/sdc
> /bin/podman:stderr  stdout: Physical volume "/dev/sdc" successfully created.
> /bin/podman:stderr  stdout: Volume group "ceph-ebbcc39c-ce15-4e0f-95cc-2308ec8fca42" successfully created
> /bin/podman:stderr Running command: /usr/sbin/lvcreate --yes -l 57234 -n osd-block-124ea5a7-cfe6-4db6-bf84-c0b32fea5588 ceph-ebbcc39c-ce15-4e0f-95cc-2308ec8fca42
> /bin/podman:stderr  stderr: Volume group "ceph-ebbcc39c-ce15-4e0f-95cc-2308ec8fca42" has insufficient free space (57233 extents): 57234 required.
> /bin/podman:stderr --> Was unable to complete a new OSD, will rollback changes
> /bin/podman:stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.5 --yes-i-really-mean-it
> /bin/podman:stderr  stderr: purged osd.5
> /bin/podman:stderr -->  RuntimeError: command returned non-zero exit status: 5
> Traceback (most recent call last):
>   File "<stdin>", line 6113, in <module>
>   File "<stdin>", line 1300, in _infer_fsid
>   File "<stdin>", line 1383, in _infer_image
>   File "<stdin>", line 3613, in command_ceph_volume
>   File "<stdin>", line 1062, in call_throws
> RuntimeError: Failed command: /bin/podman run --rm --ipc=host --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk -e CONTAINER_IMAGE=registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest -e NODE_NAME=clara010 -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -v /var/run/ceph/2a994402-2fac-11eb-95d0-002590fc2776:/var/run/ceph:z -v /var/log/ceph/2a994402-2fac-11eb-95d0-002590fc2776:/var/log/ceph:z -v /var/lib/ceph/2a994402-2fac-11eb-95d0-002590fc2776/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /tmp/ceph-tmptu90hf27:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpscafm087:/var/lib/ceph/bootstrap-osd/ceph.keyring:z registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest lvm batch --no-auto /dev/sdc --yes --no-systemd

The previous OSD creation command created PV "/dev/sdc" in VG "ceph-ebbcc39c-ce15-4e0f-95cc-2308ec8fca42"
----------------------------------------------------------------------------------------------

> [ubuntu@clara010 ~]$ sudo lvm
> lvm> pvscan 
>   PV /dev/sdc   VG ceph-ebbcc39c-ce15-4e0f-95cc-2308ec8fca42   lvm2 [<223.57 GiB / <223.57 GiB free]
>   PV /dev/sdb   VG ceph-2d288c9e-e39b-4bf3-8d8c-95137e24870a   lvm2 [<223.57 GiB / <223.57 GiB free]
>   Total: 2 [447.13 GiB] / in use: 2 [447.13 GiB] / in no VG: 0 [0   ]
> lvm> exit
>   Exiting.
>

Manually running the same lvcreate command to create the logical volume resulted in the same error
-------------------------------------------------------------------------

> [ubuntu@clara010 ~]$ sudo /usr/sbin/lvcreate --yes -l 57234 -n osd-block-124ea5a7-cfe6-4db6-bf84-c0b32fea5588 ceph-ebbcc39c-ce15-4e0f-95cc-2308ec8fca42
>   Volume group "ceph-ebbcc39c-ce15-4e0f-95cc-2308ec8fca42" has insufficient free space (57233 extents): 57234 required.
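A defensive fix is to size the LV from the free-extent count that LVM itself reports, instead of from the raw device capacity. The sketch below uses hypothetical helper names and is not the actual ceph-volume patch; it queries `vgs -o vg_free_count` and clamps the request. Alternatively, `lvcreate -l 100%FREE` sidesteps the extent arithmetic entirely.

```python
import subprocess

def vg_free_extents(vg_name: str) -> int:
    """Ask LVM how many extents the VG actually has free.

    Equivalent to: vgs --noheadings -o vg_free_count <vg_name>
    """
    out = subprocess.check_output(
        ["vgs", "--noheadings", "-o", "vg_free_count", vg_name],
        text=True,
    )
    return int(out.strip())

def safe_lv_extents(requested: int, free: int) -> int:
    # Never request more extents than the VG reports free; clamping
    # turns the hard lvcreate failure into a slightly smaller LV.
    return min(requested, free)

# With the numbers from this report: 57234 requested, 57233 free.
print(safe_lv_extents(57234, 57233))  # 57233 -> lvcreate would succeed
```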

Executed the add osd command again, which worked successfully
-----------------------------------------------------------

> [ubuntu@clara010 ~]$ sudo /home/ubuntu/cephtest/cephadm --image registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a994402-2fac-11eb-95d0-002590fc2776 -- ceph orch daemon add osd clara010:/dev/sdc
> Created osd(s) 5 on host 'clara010'

Related issues: 1 (0 open, 1 closed)

Is duplicate of ceph-volume - Bug #47758: fail to create OSDs because the requested extent is too large (Resolved, Jan Fajerski)

Actions #1

Updated by Juan Miguel Olmo Martínez over 3 years ago

  • Pull request ID set to 38335
Actions #2

Updated by Jan Fajerski over 3 years ago

  • Has duplicate Bug #47758: fail to create OSDs because the requested extent is too large added
Actions #3

Updated by Jan Fajerski over 3 years ago

  • Status changed from New to Duplicate
  • Pull request ID deleted (38335)
Actions #4

Updated by Jan Fajerski over 3 years ago

  • Has duplicate deleted (Bug #47758: fail to create OSDs because the requested extent is too large)
Actions #5

Updated by Jan Fajerski over 3 years ago

  • Is duplicate of Bug #47758: fail to create OSDs because the requested extent is too large added