Bug #52730
ceph-volume mis-calculates db/wal slot size for clusters that have multiple PVs in a VG
% Done: 0%
Backport: pacific,octopus
Regression: No
Severity: 3 - minor
Description
Prior to v15.2.8, ceph-volume created a single VG containing multiple db/wal PVs. After v15.2.8 this was corrected so that each fast device gets its own VG. However, the current logic for calculating db/wal slot sizes in ceph-volume assumes that each VG contains a single PV, and it miscalculates the required slot sizes on clusters where that isn't the case. This can cause Ceph to conclude that there isn't enough space on the db/wal devices to deploy new OSDs when there actually is.
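
For illustration only, a minimal sketch of the kind of arithmetic involved; this is not the actual ceph-volume implementation, and the function names, device sizes, and slot count below are hypothetical. It shows how sizing a multi-PV VG from a single PV's capacity underestimates the per-slot size:

    # Hypothetical illustration of the slot-size miscalculation (not ceph-volume code).
    # Sizes are in GiB for readability.

    def correct_slot_size(pv_sizes_gib, slots):
        """Slot size when the whole VG capacity (all PVs) is considered."""
        return sum(pv_sizes_gib) / slots

    def buggy_slot_size(pv_sizes_gib, slots):
        """Slot size under a one-PV-per-VG assumption: only one PV is counted."""
        return pv_sizes_gib[0] / slots

    pv_sizes = [1024, 1024]   # one VG built from two 1 TiB fast PVs (pre-v15.2.8 layout)
    slots = 4                 # hypothetical number of db slots requested on this VG

    print(correct_slot_size(pv_sizes, slots))   # 512.0 GiB per slot
    print(buggy_slot_size(pv_sizes, slots))     # 256.0 GiB per slot

With the underestimated 256 GiB figure, a db LV that the VG could actually accommodate may be rejected, matching the "not enough space" symptom described above.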
History
#1 Updated by Cory Snyder over 2 years ago
- Target version set to v17.0.0
- Backport set to pacific,octopus
- Affected Versions v15.2.15 added
- Affected Versions deleted (v15.2.8, v15.2.9, v16.0.0, v16.0.1, v16.1.0, v16.1.1, v16.2.0, v16.2.1, v16.2.2, v16.2.3, v16.2.4, v16.2.5, v16.2.7)
#2 Updated by Cory Snyder over 2 years ago
- Pull request ID set to 43300
#3 Updated by Sebastian Wagner over 2 years ago
- Project changed from Ceph to ceph-volume
#4 Updated by Guillaume Abrioux over 2 years ago
- Status changed from New to Pending Backport
#5 Updated by Guillaume Abrioux over 2 years ago
- Copied to Backport #53277: octopus: ceph-volume mis-calculates db/wal slot size for clusters that have multiple PVs in a VG added
#6 Updated by Guillaume Abrioux over 2 years ago
- Copied to Backport #53278: pacific: ceph-volume mis-calculates db/wal slot size for clusters that have multiple PVs in a VG added
#7 Updated by Loïc Dachary over 2 years ago
- Status changed from Pending Backport to Resolved
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved" or "Rejected".