Bug #53589
ceph-volume unstable provisioning behaviour
Status:
Closed
% Done:
0%
Source:
Community (user)
Tags:
ceph-volume
Backport:
Regression:
No
Severity:
4 - irritation
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
When applying a spec.yaml via "ceph orch apply", ceph-volume sometimes fails to deploy onto one or two devices on the target host and/or fails to calculate the block.db device size correctly.
Across 9 hosts of identical hardware, 5 hosts get a 100% deployment with the correct block.db device size; on the remaining hosts, either one or two devices are not brought up as working OSDs, or some or all block.db partitions are sized incorrectly.
Each of these hosts has 50 x HDD and two large SSDs for block.db/wal.
Nothing obvious in the logs, although of concern is that the paths logged in the logfiles are incorrect and do not include the cluster UUID in the path name.
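For reference, a drive-group spec of roughly this shape would target such a layout (a sketch only; the service_id, host_pattern, and filter values are assumptions, not the reporter's actual file):

```yaml
service_type: osd
service_id: hdd_with_ssd_db   # hypothetical name
placement:
  host_pattern: '*'           # assumption: applied to all 9 hosts
spec:
  data_devices:
    rotational: 1             # the 50 HDDs
  db_devices:
    rotational: 0             # the two large SSDs for block.db/wal
```

A spec like this is applied with "ceph orch apply -i spec.yaml"; ceph-volume then splits each SSD into block.db slots for the HDD-backed OSDs, which is the sizing step that misbehaves here.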
Updated by Guillaume Abrioux about 2 years ago
I think it relates to https://github.com/ceph/ceph/pull/44104
Updated by Guillaume Abrioux about 2 years ago
- Status changed from New to Resolved