Hi @Jan,
I was able to reproduce the issue on node 'clara013'; please find the details below.
Zap was successful:
2020-12-10 11:12:53,673.673 INFO:teuthology.orchestra.run.clara013.stderr:--> Zapping: /dev/sdd
2020-12-10 11:12:53,674.674 INFO:teuthology.orchestra.run.clara013.stderr:--> --destroy was not specified, but zapping a whole device will remove the partition table
2020-12-10 11:12:53,676.676 INFO:teuthology.orchestra.run.clara013.stderr:Running command: /usr/bin/dd if=/dev/zero of=/dev/sdd bs=1M count=10 conv=fsync
2020-12-10 11:12:53,677.677 INFO:teuthology.orchestra.run.clara013.stderr: stderr: 10+0 records in
2020-12-10 11:12:53,678.678 INFO:teuthology.orchestra.run.clara013.stderr:10+0 records out
2020-12-10 11:12:53,679.679 INFO:teuthology.orchestra.run.clara013.stderr:10485760 bytes (10 MB, 10 MiB) copied, 0.0364648 s, 288 MB/s
2020-12-10 11:12:53,680.680 INFO:teuthology.orchestra.run.clara013.stderr:--> Zapping successful for: <Raw Device: /dev/sdd>
Adding the OSD failed:
2020-12-10 11:14:46,166.166 INFO:teuthology.orchestra.run.clara013:> sudo /home/ubuntu/cephtest/cephadm --image registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 92f67ff0-3b01-11eb-95d0-002590fc2776 -- ceph orch daemon add osd clara013:/dev/sdd
2020-12-10 11:14:51,803.803 INFO:teuthology.orchestra.run.clara013.stderr:Error EINVAL: Traceback (most recent call last):
2020-12-10 11:14:51,804.804 INFO:teuthology.orchestra.run.clara013.stderr: File "/usr/share/ceph/mgr/mgr_module.py", line 1195, in _handle_command
2020-12-10 11:14:51,805.805 INFO:teuthology.orchestra.run.clara013.stderr: return self.handle_command(inbuf, cmd)
2020-12-10 11:14:51,806.806 INFO:teuthology.orchestra.run.clara013.stderr: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 141, in handle_command
2020-12-10 11:14:51,808.808 INFO:teuthology.orchestra.run.clara013.stderr: return dispatch[cmd['prefix']].call(self, cmd, inbuf)
2020-12-10 11:14:51,809.809 INFO:teuthology.orchestra.run.clara013.stderr: File "/usr/share/ceph/mgr/mgr_module.py", line 332, in call
2020-12-10 11:14:51,810.810 INFO:teuthology.orchestra.run.clara013.stderr: return self.func(mgr, **kwargs)
2020-12-10 11:14:51,811.811 INFO:teuthology.orchestra.run.clara013.stderr: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 103, in <lambda>
2020-12-10 11:14:51,812.812 INFO:teuthology.orchestra.run.clara013.stderr: wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs)
2020-12-10 11:14:51,813.813 INFO:teuthology.orchestra.run.clara013.stderr: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 92, in wrapper
2020-12-10 11:14:51,814.814 INFO:teuthology.orchestra.run.clara013.stderr: return func(*args, **kwargs)
2020-12-10 11:14:51,815.815 INFO:teuthology.orchestra.run.clara013.stderr: File "/usr/share/ceph/mgr/orchestrator/module.py", line 753, in _daemon_add_osd
2020-12-10 11:14:51,816.816 INFO:teuthology.orchestra.run.clara013.stderr: raise_if_exception(completion)
2020-12-10 11:14:51,817.817 INFO:teuthology.orchestra.run.clara013.stderr: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 643, in raise_if_exception
2020-12-10 11:14:51,818.818 INFO:teuthology.orchestra.run.clara013.stderr: raise e
2020-12-10 11:14:51,820.820 INFO:teuthology.orchestra.run.clara013.stderr:RuntimeError: cephadm exited with an error code: 1, stderr:/bin/podman:stderr --> passed data devices: 1 physical, 0 LVM
2020-12-10 11:14:51,821.821 INFO:teuthology.orchestra.run.clara013.stderr:/bin/podman:stderr --> relative data size: 1.0
2020-12-10 11:14:51,822.822 INFO:teuthology.orchestra.run.clara013.stderr:/bin/podman:stderr Running command: /usr/bin/ceph-authtool --gen-print-key
2020-12-10 11:14:51,823.823 INFO:teuthology.orchestra.run.clara013.stderr:/bin/podman:stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 3d259541-1df5-4033-b1cb-2daf4e4b725b
2020-12-10 11:14:51,824.824 INFO:teuthology.orchestra.run.clara013.stderr:/bin/podman:stderr Running command: /usr/sbin/vgcreate --force --yes ceph-42377c1f-b0f3-4cbd-a928-672fe70d6c54 /dev/sdd
2020-12-10 11:14:51,825.825 INFO:teuthology.orchestra.run.clara013.stderr:/bin/podman:stderr stdout: Physical volume "/dev/sdd" successfully created.
2020-12-10 11:14:51,826.826 INFO:teuthology.orchestra.run.clara013.stderr:/bin/podman:stderr stdout: Volume group "ceph-42377c1f-b0f3-4cbd-a928-672fe70d6c54" successfully created
2020-12-10 11:14:51,827.827 INFO:teuthology.orchestra.run.clara013.stderr:/bin/podman:stderr Running command: /usr/sbin/lvcreate --yes -l 57234 -n osd-block-3d259541-1df5-4033-b1cb-2daf4e4b725b ceph-42377c1f-b0f3-4cbd-a928-672fe70d6c54
2020-12-10 11:14:51,828.828 INFO:teuthology.orchestra.run.clara013.stderr:/bin/podman:stderr stderr: Volume group "ceph-42377c1f-b0f3-4cbd-a928-672fe70d6c54" has insufficient free space (57233 extents): 57234 required.
2020-12-10 11:14:51,829.829 INFO:teuthology.orchestra.run.clara013.stderr:/bin/podman:stderr --> Was unable to complete a new OSD, will rollback changes
2020-12-10 11:14:51,830.830 INFO:teuthology.orchestra.run.clara013.stderr:/bin/podman:stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it
2020-12-10 11:14:51,831.831 INFO:teuthology.orchestra.run.clara013.stderr:/bin/podman:stderr stderr: purged osd.0
2020-12-10 11:14:51,832.832 INFO:teuthology.orchestra.run.clara013.stderr:/bin/podman:stderr --> RuntimeError: command returned non-zero exit status: 5
2020-12-10 11:14:51,833.833 INFO:teuthology.orchestra.run.clara013.stderr:Traceback (most recent call last):
2020-12-10 11:14:51,834.834 INFO:teuthology.orchestra.run.clara013.stderr: File "<stdin>", line 6113, in <module>
2020-12-10 11:14:51,836.836 INFO:teuthology.orchestra.run.clara013.stderr: File "<stdin>", line 1300, in _infer_fsid
2020-12-10 11:14:51,837.837 INFO:teuthology.orchestra.run.clara013.stderr: File "<stdin>", line 1383, in _infer_image
2020-12-10 11:14:51,838.838 INFO:teuthology.orchestra.run.clara013.stderr: File "<stdin>", line 3613, in command_ceph_volume
2020-12-10 11:14:51,839.839 INFO:teuthology.orchestra.run.clara013.stderr: File "<stdin>", line 1062, in call_throws
2020-12-10 11:14:51,840.840 INFO:teuthology.orchestra.run.clara013.stderr:RuntimeError: Failed command: /bin/podman run --rm --ipc=host --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk -e CONTAINER_IMAGE=registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest -e NODE_NAME=clara013 -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -v /var/run/ceph/92f67ff0-3b01-11eb-95d0-002590fc2776:/var/run/ceph:z -v /var/log/ceph/92f67ff0-3b01-11eb-95d0-002590fc2776:/var/log/ceph:z -v /var/lib/ceph/92f67ff0-3b01-11eb-95d0-002590fc2776/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /tmp/ceph-tmpxfxmv55z:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpi4gkzx6y:/var/lib/ceph/bootstrap-osd/ceph.keyring:z registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest lvm batch --no-auto /dev/sdd --yes --no-systemd
[ubuntu@clara013 ~]$ sudo pvs -o all
Fmt PV UUID DevSize PV Maj Min PMdaFree PMdaSize PExtVsn 1st PE PSize PFree Used Attr Allocatable Exported Missing PE Alloc PV Tags #PMda #PMdaUse BA Start BA Size PInUse Duplicate
lvm2 j6UFyk-82Te-jRta-X7eD-MNff-GiWG-AHfgnU 223.57g /dev/sdd 8 48 508.00k 1020.00k 2 1.00m <223.57g <223.57g 0 a-- allocatable 57233 0 1 1 0 0 used
[ubuntu@clara013 ~]$ sudo vgs -o all
Fmt VG UUID VG Attr VPerms Extendable Exported Partial AllocPol Clustered Shared VSize VFree SYS ID System ID LockType VLockArgs Ext #Ext Free MaxLV MaxPV #PV #PV Missing #LV #SN Seq VG Tags VProfile #VMda #VMdaUse VMdaFree VMdaSize #VMdaCps
lvm2 yVeND2-dfQD-3ClC-Heoj-EVG2-15oo-U2bXx4 ceph-42377c1f-b0f3-4cbd-a928-672fe70d6c54 wz--n- writeable extendable normal <223.57g <223.57g 4.00m 57233 57233 0 0 1 0 0 0 1 1 1 508.00k 1020.00k unmanaged
[ubuntu@clara013 ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223.6G 0 disk
└─sda1 8:1 0 223.6G 0 part /
sdb 8:16 0 223.6G 0 disk
sdc 8:32 0 223.6G 0 disk
sdd 8:48 0 223.6G 0 disk
[ubuntu@clara013 ~]$ sudo lvm pvscan
PV /dev/sdd VG ceph-42377c1f-b0f3-4cbd-a928-672fe70d6c54 lvm2 [<223.57 GiB / <223.57 GiB free]
Total: 1 [<223.57 GiB] / in use: 1 [<223.57 GiB] / in no VG: 0 [0 ]
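For what it's worth, the lvcreate failure looks like an off-by-one in extent sizing: 57234 extents were requested from a VG that only has 57233 free. The numbers are consistent with the size being computed from the raw device, while LVM reserves the 1 MiB first-PE/metadata area (visible in the pvs output) before carving out 4 MiB extents. A minimal sketch of arithmetic that would produce exactly this mismatch (the device byte size below is a hypothetical value I picked to match the ~223.57 GiB shown above, not something read from clara013):

```python
# Illustrative sketch only: sizing from the raw device vs. what the VG
# actually offers. All constants are taken from the pvs/vgs output above
# except dev_bytes, which is a hypothetical raw size for a 223.57 GiB disk.
EXTENT = 4 * 1024 ** 2        # 4 MiB extent size, per the vgs output
FIRST_PE = 1 * 1024 ** 2      # 1 MiB first-PE offset, per the pvs output
dev_bytes = 240_057_409_536   # hypothetical raw size of /dev/sdd

requested = dev_bytes // EXTENT                # sizing from the raw device
available = (dev_bytes - FIRST_PE) // EXTENT   # extents the VG can actually offer

print(requested, available)   # 57234 57233 -- the exact mismatch in the log
```

If that is what is happening, the fix would be for the sizing code to query the VG's free extents (or pass a relative size) rather than deriving the extent count from the raw device size.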