Bug #47290 (closed)
osdmaps aren't being cleaned up automatically on healthy cluster
Status:
Resolved
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
Source:
Community (user)
Tags:
Backport:
nautilus,octopus
Regression:
Yes
Severity:
2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Monitor
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
In https://github.com/ceph/ceph/pull/19076/commits/e62269c8929e414284ad0773c4a3c82e43735e4e we made a mistake: down OSDs should not always hold back osdmap trimming. In e62269c892 we lower the upper bound of the range of osdmaps to be trimmed whenever the given OSD is down, even if it is out. We should lower it only if the OSD in question is still in; otherwise a down+out OSD prevents the cluster from ever trimming its old osdmaps.
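The intended behavior can be sketched as follows. This is not the actual OSDMonitor code; the function signature, the `OSDState` struct, and the field names are all illustrative assumptions. It only shows the condition the description calls for: a down OSD holds back the trim bound only while it is still in, so that the maps it missed are retained for it to catch up, while a down+out OSD no longer blocks trimming.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

using epoch_t = uint32_t;

// Hypothetical, simplified model of one OSD's state in the osdmap.
struct OSDState {
  bool up;          // is the OSD up?
  bool in;          // is the OSD in (i.e., not marked out)?
  epoch_t down_at;  // epoch at which the OSD was last marked down
};

// Sketch of "how far may we trim old osdmaps". `floor` starts at the
// cluster-wide minimum last-epoch-clean. A down OSD that is still IN
// must be able to catch up when it returns, so every map since it went
// down is kept, lowering the trim bound. The regression lowered the
// bound for down+out OSDs too, so trimming could stall indefinitely.
epoch_t get_trim_to(epoch_t min_last_epoch_clean,
                    const std::vector<OSDState>& osds) {
  epoch_t floor = min_last_epoch_clean;
  for (const auto& osd : osds) {
    // Fixed behavior: only down *and in* OSDs hold back trimming.
    if (!osd.up && osd.in && osd.down_at < floor) {
      floor = osd.down_at;
    }
  }
  return floor;
}
```

With this condition, marking a dead OSD out (as an operator normally does) lets the monitor resume trimming, which matches the "healthy cluster" symptom in the bug title.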
Updated by Kefu Chai over 3 years ago
- Copied from Bug #37875: osdmaps aren't being cleaned up automatically on healthy cluster added
Updated by Kefu Chai over 3 years ago
- Status changed from New to Fix Under Review
- Regression changed from No to Yes
- Severity changed from 3 - minor to 2 - major
- Pull request ID set to 36977
- Component(RADOS) Monitor added
Updated by Neha Ojha over 3 years ago
- Status changed from Fix Under Review to Pending Backport
Updated by Neha Ojha over 3 years ago
- Copied to Backport #47296: nautilus: osdmaps aren't being cleaned up automatically on healthy cluster added
Updated by Neha Ojha over 3 years ago
- Backport changed from octopus, nautilus to nautilus,octopus
Updated by Neha Ojha over 3 years ago
- Copied to Backport #47297: octopus: osdmaps aren't being cleaned up automatically on healthy cluster added
Updated by Nathan Cutler over 3 years ago
- Status changed from Pending Backport to Resolved
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved" or "Rejected".