Bug #15912
closed
An OSD was seen getting ENOSPC even with osd_failsafe_full_ratio passed
Added by David Zafman almost 8 years ago.
Updated over 6 years ago.
Description
The value of osd_failsafe_full_ratio restricts new client ops only after the OSD reaches the 97% full condition by default. Could an OSD with a large journal have enough pending filestore data updates that the remaining 3% of space isn't enough to absorb them?
Should new client operations instead be restricted based on journal size? We could make osd_failsafe_full_ratio an override that defaults to 0 (meaning: use a computed value). A sketch of the arithmetic behind this concern follows below.
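As a rough illustration of the concern above (a hypothetical sketch, not Ceph source; the sizes and helper names are made up), assume the journal could be entirely unflushed at the moment the failsafe trips:

```python
# Hypothetical sketch (not Ceph code): why a fixed
# osd_failsafe_full_ratio of 0.97 may leave too little headroom
# when a large journal still holds unflushed filestore updates.

def failsafe_headroom_bytes(disk_size, failsafe_full_ratio=0.97):
    """Free space remaining when the failsafe threshold trips."""
    return disk_size * (1.0 - failsafe_full_ratio)

def can_absorb_journal(disk_size, journal_size, failsafe_full_ratio=0.97):
    """True if that headroom can absorb a completely full journal."""
    return failsafe_headroom_bytes(disk_size, failsafe_full_ratio) >= journal_size

TB, GB = 10**12, 10**9
# Example: 1 TB OSD with a 40 GB journal.
# 3% headroom = 30 GB < 40 GB of potentially pending writes -> ENOSPC risk.
print(can_absorb_journal(1 * TB, 40 * GB))  # False
```

One way to read "use computed value" is to choose the ratio so that the headroom always covers the journal, e.g. 1 - journal_size / disk_size, capped below the fixed default.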
- Assignee set to David Zafman
Potentially, backfill or recovery consumed the remaining space.
- Priority changed from Normal to Urgent
- Related to Bug #16878: filestore: utilization ratio calculation does not take journal size into account added
- Related to Bug #18687: bluestore: ENOSPC writing to XFS block file on smithi added
- Related to deleted (Bug #16878: filestore: utilization ratio calculation does not take journal size into account)
- Related to Bug #16878: filestore: utilization ratio calculation does not take journal size into account added
- Status changed from New to Resolved
- Status changed from Resolved to Pending Backport
- Backport set to kraken, jewel
- Related to Feature #15910: Increase the default value of mon_osd_min_in_ratio added
- Copied to Backport #19265: jewel: An OSD was seen getting ENOSPC even with osd_failsafe_full_ratio passed added
- Copied to Backport #19340: kraken: An OSD was seen getting ENOSPC even with osd_failsafe_full_ratio passed added
- Related to Bug #19682: Additional full fixes added
- Related to Bug #19698: cephtool/test.sh error on full tests added
- Related to deleted (Bug #19698: cephtool/test.sh error on full tests)
- Related to Bug #19733: clean up min/max span warning added
- Status changed from Pending Backport to Resolved