Bug #52730
ceph-volume mis-calculates db/wal slot size for clusters that have multiple PVs in a VG
Status:
Closed
% Done:
0%
Source:
Tags:
Backport:
pacific,octopus
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
Prior to v15.2.8, ceph-volume created a single VG containing multiple db/wal PVs. Since v15.2.8, this has been corrected so that each fast drive gets its own VG. However, the current slot-size calculation in ceph-volume assumes that each VG contains a single PV, so it miscalculates the required db/wal slot size on clusters where that isn't true (i.e. on VGs created before v15.2.8). This can cause Ceph to "think" there isn't enough space on the db/wal drives to deploy new OSDs when there actually is.
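The miscalculation can be illustrated with a minimal sketch. All function names and the extent-based arithmetic below are hypothetical, chosen only to show the shape of the bug, and are not ceph-volume's actual code:

```python
# Hypothetical sketch: slot sizing for db/wal devices in a VG.
# Sizes are expressed in LVM extents for simplicity.

def slot_size_buggy(extent_size_mb, pv_extent_counts, slots):
    """Buggy variant: assumes the VG has exactly one PV, so it
    sizes slots from the first PV's extents only."""
    return pv_extent_counts[0] * extent_size_mb // slots

def slot_size_fixed(extent_size_mb, pv_extent_counts, slots):
    """Fixed variant: sizes slots from the total extents across
    all PVs in the VG."""
    return sum(pv_extent_counts) * extent_size_mb // slots

# A pre-v15.2.8 VG spanning two PVs of 1000 extents each (4 MB extents),
# split into 4 db slots:
buggy = slot_size_buggy(4, [1000, 1000], 4)   # sees only half the VG
fixed = slot_size_fixed(4, [1000, 1000], 4)   # sees the whole VG
print(buggy, fixed)
```

With the buggy variant each slot appears half as large as it should be, which matches the reported symptom: ceph-volume concludes the fast device lacks space for new OSDs when it does not.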