Bug #57852
openosd: unhealthy osd cannot be marked down in time
Description
Before an unhealthy osd is marked down by the mon, other osds may choose it as a heartbeat peer and then report an incorrect failure time (first_tx) to the mon.
To reproduce:
Shut down the cluster_network and public_network of an osd node several times.
Updated by Radoslaw Zarzynski over 1 year ago
- Status changed from New to Need More Info
Could you please clarify a bit? Do you mean there are some extra, unnecessary (from the POV of judging whether an OSD is down or not) messages that just update the mark-down timestamp?
Updated by wencong wan over 1 year ago
Radoslaw Zarzynski wrote:
Could you please clarify a bit? Do you mean there are some extra, unnecessary (from the POV of judging whether an OSD is down or not) messages that just update the mark-down timestamp?
Whether an OSD is down or not is determined by the mon. The mon will mark an osd down if either of the following conditions is met:
1. The mon does not receive the osd_beacon message of an osd for more than 900s (mon_osd_report_timeout).
2. The mon receives failure_report messages from 2 (mon_osd_min_down_reporters) osds on different hosts (mon_osd_reporter_subtree_level), and the fault has lasted for a period of time (now - fi.get_failed_since() > grace).
get_failed_since returns the max failed time among all reporters. If some osds choose the unhealthy osd as a heartbeat peer, they will never receive a heartbeat reply from it, so these osds report the time they first sent a heartbeat (first_tx) as the failure time of the unhealthy osd. As a result, the condition "now - fi.get_failed_since() > grace" cannot be met.
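The interaction described above can be sketched with a minimal model. This is not Ceph code: the reporter names, timestamps, and the grace value of 20s are illustrative assumptions; in Ceph the max-of-reporters behavior lives in the monitor's failure-info handling.

```python
# Minimal model of the mon's failure-report check described above.
# NOT Ceph code: names, timestamps, and grace value are illustrative.

def get_failed_since(reports):
    """Mimic the behavior described above: the max failed_since
    reported by any reporter wins."""
    return max(reports.values())

def should_mark_down(reports, now, grace=20.0, min_reporters=2):
    # Need enough distinct reporters (mon_osd_min_down_reporters).
    if len(reports) < min_reporters:
        return False
    # The fault must have persisted longer than the grace period.
    return now - get_failed_since(reports) > grace

# osd.1 and osd.2 saw the OSD fail at t=100; at t=130 the 20s grace
# has been exceeded, so the mon would mark the OSD down...
reports = {"osd.1": 100.0, "osd.2": 100.0}
print(should_mark_down(reports, now=130.0))   # True

# ...but if osd.3 only recently chose the unhealthy OSD as a heartbeat
# peer, it never got a reply and reports its first send time (first_tx,
# t=125) as the failure time. The max is refreshed, and the condition
# "now - failed_since > grace" fails again.
reports["osd.3"] = 125.0
print(should_mark_down(reports, now=130.0))   # False
```

Each new peer that picks the unhealthy OSD resets the effective failed_since, so the grace window can keep being pushed into the future indefinitely.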
Updated by Radoslaw Zarzynski over 1 year ago
- Status changed from Need More Info to New
Thanks for the detailed explanation!
Updated by Radoslaw Zarzynski over 1 year ago
- Assignee set to Prashant D
Not something we introduced recently, but still worth taking a look at if nothing urgent is on the plate.
Updated by Radoslaw Zarzynski over 1 year ago
- Status changed from New to In Progress
Updated by Prashant D about 1 year ago
- Status changed from In Progress to Need More Info
I am working on a probable fix for this issue, but I have not been able to reproduce it on a vstart cluster by blocking traffic on the ports of a specific OSD. I am creating a test environment to reproduce this issue. Meanwhile, would it be possible to provide debug_mon=20 logs for the mon and debug_osd=25 logs for 2-3 OSDs which are reporting the unhealthy OSD as failed?