
Bug #45809

When an OSD is marked out, the `MAX AVAIL` doesn't change.

Added by chao wang about 1 month ago.

Status:
New
Priority:
Normal
Assignee:
-
Category:
ceph cli
Target version:
% Done:

0%

Source:
Community (user)
Tags:
ceph, df, MAX AVAIL
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature:

Description

Environment: Luminous 12.2.12

I have a question about the per-pool `MAX AVAIL` value reported by `ceph df`.

When I mark an OSD out, `MAX AVAIL` doesn't change. Only when I remove the OSD from the pool's CRUSH map does `MAX AVAIL` decrease.

For example, I have a pool with 10 OSDs. After 9 of them are marked out, `ceph df` still reports the `MAX AVAIL` of all 10 OSDs, not of the 1 remaining OSD. Only when I remove the out OSDs from the CRUSH map does `MAX AVAIL` change to reflect the remaining capacity.

But in my understanding, recovery begins when an OSD goes out, and from that point the out OSD no longer serves the pool. If `MAX AVAIL` doesn't change, it may report an incorrect available size to users.

The calculation logic is in `PGMap::get_rules_avail`. Perhaps after calling `get_rule_weight_osd_map`, we could recalculate the weight map to drop any OSD whose `osd_info.kb` is zero.

Alternatively, `get_rule_weight_osd_map` itself could be modified, but I don't know whether that would have other side effects.
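To illustrate the first suggestion, here is a minimal standalone sketch (not the actual Ceph code) of the proposed filtering step: given the CRUSH weight map that `get_rule_weight_osd_map` would return and the per-OSD reported capacity in kB, drop the OSDs that report zero capacity before the availability calculation runs. The function name `prune_zero_kb_osds` and the map types are assumptions for illustration only.

```cpp
#include <cstdint>
#include <map>

// Hypothetical helper: remove OSDs that report zero capacity (as an
// out OSD does once its stats are zeroed) from the weight map, so a
// MAX AVAIL calculation based on the result no longer counts them.
std::map<int, float> prune_zero_kb_osds(
    const std::map<int, float>& weights,      // osd id -> CRUSH weight
    const std::map<int, uint64_t>& osd_kb) {  // osd id -> reported kB
  std::map<int, float> pruned;
  for (const auto& [osd, w] : weights) {
    auto it = osd_kb.find(osd);
    // Keep only OSDs that are known and still report capacity.
    if (it != osd_kb.end() && it->second > 0)
      pruned.emplace(osd, w);
  }
  return pruned;
}
```

In the real code the surviving weights would then feed the existing minimum-projected-capacity loop in `PGMap::get_rules_avail`, so the out OSDs stop inflating `MAX AVAIL`.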
