Bug #20751: osd_state not updated properly during osd-reuse-id.sh
Status: Closed
% Done: 0%
Regression: No
Severity: 3 - minor
Description
When running osd-reuse-id.sh via teuthology, I reliably fail an assert that all OSDs support the stateful mon subscriptions. That happens because when 'osd new' runs, the state somehow ends up with the UP bit set when it shouldn't be.
/a/sage-2017-07-23_01:26:46-rados:standalone-wip-standalone-distro-basic-smithi/1432683
for the latest instance (although there are no logs there)
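The failure mode can be sketched as a bitmask bug: if 'osd new' only ORs in an EXISTS bit, any stale UP bit left over from the id's previous life survives into the reused osd's state. This is an illustrative model only; the flag values and the `osd_new` helper below are hypothetical, loosely modeled on Ceph's osd_state flags, not Ceph's actual implementation.

```python
# Illustrative model of the bug (flag values are made up for the sketch):
OSD_EXISTS = 1 << 0
OSD_UP = 1 << 1

def osd_new(osd_state, osd_id):
    """Hypothetical 'osd new' handler. Buggy version: it only ORs in
    EXISTS, so a stale UP bit from a previously destroyed osd with the
    same id is carried over instead of being cleared."""
    osd_state[osd_id] = osd_state.get(osd_id, 0) | OSD_EXISTS
    return osd_state[osd_id]

# Stale state from a destroyed osd whose UP bit was never cleared:
state = {0: OSD_UP}              # !EXISTS but UP -- the inconsistent case
new_state = osd_new(state, 0)
assert new_state & OSD_UP        # UP is set even though the osd never booted
```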
Updated by Sage Weil over 6 years ago
- Status changed from In Progress to Fix Under Review
Updated by Sage Weil over 6 years ago
- Status changed from Fix Under Review to Resolved
Updated by Sage Weil over 6 years ago
- Status changed from Resolved to In Progress
Hmm, we should also ensure that UP is cleared when doing the destroy, since existing clusters may have OSDs that are !EXISTS but still have UP set.
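The defensive change suggested above can be sketched as clearing the UP bit during destroy, so a later 'osd new' starts from a clean state even on clusters that already contain !EXISTS-but-UP entries. As before, the flag values and the `osd_destroy` helper are hypothetical illustrations, not the actual Ceph code.

```python
# Illustrative sketch of the defensive destroy (flag values are made up):
OSD_EXISTS = 1 << 0
OSD_UP = 1 << 1

def osd_destroy(osd_state, osd_id):
    """Hypothetical destroy handler with the defensive fix: clear both
    EXISTS and UP so no stale bit survives into a reused id."""
    osd_state[osd_id] = osd_state.get(osd_id, 0) & ~(OSD_EXISTS | OSD_UP)
    return osd_state[osd_id]

state = {3: OSD_EXISTS | OSD_UP}
assert osd_destroy(state, 3) == 0   # no stale bits survive the destroy
```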
Updated by Sage Weil over 6 years ago
- Has duplicate Bug #20679: ceph-disk prepare --osd-id 123 silently uses another id if 123 is not destroyed added
Updated by Sage Weil over 6 years ago
- Status changed from In Progress to Fix Under Review
follow-up defensive change: https://github.com/ceph/ceph/pull/16534
Updated by Sage Weil over 6 years ago
- Status changed from Fix Under Review to Resolved
Updated by Greg Farnum over 6 years ago
- Has duplicate Bug #20933: All mon nodes down when i use ceph-disk prepare a new osd. added