Bug #47502
ceph-volume lvm batch race condition
Status: Resolved
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source:
Tags:
Backport: octopus,nautilus
Regression: No
Severity: 2 - major
Reviewed:
Description
When using ceph-volume to create the OSDs, some SSDs are failing to get the correct rotational state.
After some debugging with @Guillaume Abrioux we are fairly sure it is a race condition.
I added some prints in that function and ran the report a few times.
https://github.com/ceph/ceph/blob/master/src/ceph-volume/ceph_volume/util/device.py#L295-L302
rotational state of the devices:
9496 sys_api: 0
104 sys_api: 1
9580 disk_api: 0
20 disk_api: 1
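For reference, the logic in the linked rotational property is roughly the following. This is a minimal sketch, not a verbatim copy of the upstream code; the sys_api / disk_api attributes come from the linked file, but the 'ROTA' key and the default value are assumptions that should be checked against the version in use:

    # Approximate sketch of the rotational property referenced above
    # (ceph_volume/util/device.py); names and defaults may not match
    # the upstream code exactly.
    @property
    def rotational(self):
        # sys_api is populated from sysfs, disk_api from the earlier
        # lsblk/blkid scan done at the start of the run.
        rotational = self.sys_api.get('rotational')
        if rotational is None:
            # fall back to the lsblk-based value; default assumed to be
            # '1' (spinning) when neither source reports anything
            rotational = self.disk_api.get('ROTA', '1')
        return rotational == '1'

The debug prints above simply dumped these two values on every call, which is how the disagreement between sys_api and disk_api showed up.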
After some more digging, I can see with udevadm monitor that udev events are triggered at every run of ceph-volume.
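As a way to watch for this, a hypothetical standalone probe like the one below (not part of ceph-volume; device name, interval and duration are placeholders) can poll the sysfs rotational flag for one of the devices while the batch report runs in another shell. The kernel-reported value may well stay constant, in which case the race is more likely in how ceph-volume gathers its lsblk/sysfs data around those udev events:

    #!/usr/bin/env python3
    # Hypothetical probe: poll /sys/block/<dev>/queue/rotational while
    # `ceph-volume lvm batch ... --report` runs elsewhere, and print
    # whenever the reported value changes.
    import time

    def poll_rotational(dev='sdf', interval=0.05, duration=10):
        path = '/sys/block/{}/queue/rotational'.format(dev)
        last = None
        deadline = time.time() + duration
        while time.time() < deadline:
            with open(path) as f:
                value = f.read().strip()
            if value != last:
                print('{:.3f} {} rotational={}'.format(time.time(), dev, value))
                last = value
            time.sleep(interval)

    if __name__ == '__main__':
        poll_rotational()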
Corresponding issue in ceph-ansible:
https://github.com/ceph/ceph-ansible/issues/5727
# ceph-volume --cluster ceph lvm batch --bluestore --yes --dmcrypt /dev/sdf /dev/sdo /dev/sdw /dev/sdd /dev/sdm /dev/sdu /dev/sdb /dev/sdk /dev/sds /dev/sdi /dev/sdq /dev/sdg /dev/sdx /dev/sde /dev/sdn /dev/sdv /dev/sdc /dev/sdl /dev/sdt /dev/sda /dev/sdj /dev/sdr /dev/sdh /dev/sdp --report

Total OSDs: 24

  Type            Path                                                    LV Size         % of device
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdf                                                3.49 TB         100.0%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdo                                                3.49 TB         100.0%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdw                                                3.49 TB         100.0%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdd                                                3.49 TB         100.0%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdm                                                3.49 TB         100.0%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdu                                                3.49 TB         100.0%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdb                                                3.49 TB         100.0%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdk                                                3.49 TB         100.0%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sds                                                3.49 TB         100.0%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdi                                                3.49 TB         100.0%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdq                                                3.49 TB         100.0%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdg                                                3.49 TB         100.0%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdx                                                3.49 TB         100.0%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sde                                                3.49 TB         100.0%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdn                                                3.49 TB         100.0%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdv                                                3.49 TB         100.0%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdc                                                3.49 TB         100.0%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdl                                                3.49 TB         100.0%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdt                                                3.49 TB         100.0%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sda                                                3.49 TB         100.0%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdj                                                3.49 TB         100.0%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdr                                                3.49 TB         100.0%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdh                                                3.49 TB         100.0%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdp                                                3.49 TB         100.0%
# ceph-volume --cluster ceph lvm batch --bluestore --yes --dmcrypt /dev/sdf /dev/sdo /dev/sdw /dev/sdd /dev/sdm /dev/sdu /dev/sdb /dev/sdk /dev/sds /dev/sdi /dev/sdq /dev/sdg /dev/sdx /dev/sde /dev/sdn /dev/sdv /dev/sdc /dev/sdl /dev/sdt /dev/sda /dev/sdj /dev/sdr /dev/sdh /dev/sdp --report

Total OSDs: 1

Solid State VG:
  Targets:   block.db                  Total size: 80.30 TB
  Total LVs: 1                         Size per LV: 80.30 TB
  Devices:   /dev/sdo, /dev/sdw, /dev/sdd, /dev/sdm, /dev/sdu, /dev/sdb, /dev/sdk, /dev/sds, /dev/sdi, /dev/sdq, /dev/sdg, /dev/sdx, /dev/sde, /dev/sdn, /dev/sdv, /dev/sdc, /dev/sdl, /dev/sdt, /dev/sda, /dev/sdj, /dev/sdr, /dev/sdh, /dev/sdp

  Type            Path                                                    LV Size         % of device
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdf                                                3.49 TB         100.0%
  [block.db]      vg: vg/lv                                               80.30 TB        100%