Bug #58857
Status: Closed
ceph-volume: fast device alloc size is not computed properly when there are multiple PVs per VG
% Done: 0%
Source:
Tags: backport_processed
Backport: reef,quincy,pacific
Regression: Yes
Severity: 2 - major
Reviewed:
Description
Clusters deployed prior to v15.2.8 were grouping multiple fast device PVs in a single VG. There is a regression in ceph-volume that causes it to incorrectly compute slot sizes for these clusters.
The regression was introduced here: https://github.com/ceph/ceph/pull/46666
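The arithmetic at issue can be illustrated with a minimal sketch. This is not ceph-volume's actual code; the function names and numbers below are hypothetical, but they show how dividing a VG's capacity by the per-device slot count alone over-allocates when the VG is backed by more than one PV:

```python
# Illustrative sketch only (hypothetical names, not ceph-volume's real code):
# computing a fast-device (DB/WAL) slot size for a VG that spans several PVs.

def slot_size_buggy(vg_extents: int, slots_per_device: int) -> int:
    # Divides the whole VG by the per-device slot count, ignoring that
    # the VG may aggregate multiple PVs -- each slot comes out too large.
    return vg_extents // slots_per_device

def slot_size_fixed(vg_extents: int, slots_per_device: int, pv_count: int) -> int:
    # Scales the divisor by the number of PVs in the VG, so the total
    # slot count reflects the aggregate capacity being shared.
    return vg_extents // (slots_per_device * pv_count)

# Example: a VG of 1000 extents backed by 2 fast-device PVs,
# with 2 slots intended per device (4 slots total).
vg_extents, slots, pvs = 1000, 2, 2
print(slot_size_buggy(vg_extents, slots))        # 500 extents: twice the intended size
print(slot_size_fixed(vg_extents, slots, pvs))   # 250 extents: the intended share
```

On clusters deployed with one PV per VG the two formulas agree (`pv_count == 1`), which is why the regression only surfaces on the older pre-v15.2.8 layout described above.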
Updated by Guillaume Abrioux about 1 year ago
- Status changed from New to Fix Under Review
- Pull request ID set to 50279
Updated by Guillaume Abrioux about 1 year ago
- Backport changed from quincy,pacific to reef,quincy,pacific
Updated by Guillaume Abrioux about 1 year ago
- Status changed from Fix Under Review to Pending Backport
Updated by Backport Bot about 1 year ago
- Copied to Backport #59310: reef: ceph-volume: fast device alloc size is not computed properly when there are multiple PVs per VG added
Updated by Backport Bot about 1 year ago
- Copied to Backport #59311: pacific: ceph-volume: fast device alloc size is not computed properly when there are multiple PVs per VG added
Updated by Backport Bot about 1 year ago
- Copied to Backport #59312: quincy: ceph-volume: fast device alloc size is not computed properly when there are multiple PVs per VG added
Updated by Guillaume Abrioux about 1 year ago
- Assignee changed from Cory Snyder to Guillaume Abrioux
Updated by Guillaume Abrioux about 1 year ago
- Project changed from Orchestrator to ceph-volume
Updated by Guillaume Abrioux about 1 year ago
- Status changed from Pending Backport to Resolved