Bug #45809

open

When an OSD is marked out, `MAX AVAIL` doesn't change.

Added by chao wang almost 4 years ago. Updated almost 3 years ago.

Status:
New
Priority:
Normal
Assignee:
-
Category:
-
Target version:
% Done:

0%

Source:
Community (user)
Tags:
ceph,df,MAX AVAIL,
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Environment: Luminous 12.2.12

I have a question about the pool's `MAX AVAIL` of `ceph df`.

When I mark an OSD out, the `MAX AVAIL` doesn't change. Only when I remove the OSD from the pool's CRUSH map does the `MAX AVAIL` decrease.

For example, I have a pool with 10 OSDs. When 9 of them are marked out, `ceph df` still reports `MAX AVAIL` as if all 10 OSDs were available, not just the remaining one. Only after I remove the out OSDs from the CRUSH map does `MAX AVAIL` change to the remaining size.

But in my understanding, recovery begins as soon as an OSD is marked out, and from that point the out OSD contributes nothing to the pool. If `MAX AVAIL` doesn't change, it may report a misleading available size to users.

The calculation logic is in `PGMap::get_rules_avail`. Maybe after calling `get_rule_weight_osd_map`, we could recalculate the weight map to remove any OSD whose `osd_info.kb` is zero.

Alternatively, `get_rule_weight_osd_map` itself could be modified, but I don't know whether that would have other side effects.
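To illustrate the first suggestion, here is a minimal sketch (not actual Ceph code) of the post-processing step: after `get_rule_weight_osd_map` fills the weight map, prune OSDs that report zero capacity and renormalize the remaining weights. The function name `prune_zero_kb_osds` and the simplified map types are assumptions for illustration only.

```cpp
#include <cassert>
#include <cstdint>
#include <map>

// Hypothetical helper sketching the suggestion: given `wm`
// (osd id -> crush weight, as filled by get_rule_weight_osd_map)
// and each OSD's reported capacity in kb, drop OSDs whose kb is
// zero (out OSDs report no space) and renormalize so the
// surviving weights still sum to 1.
static void prune_zero_kb_osds(std::map<int, double>& wm,
                               const std::map<int, uint64_t>& osd_kb)
{
  double remaining = 0.0;
  for (auto it = wm.begin(); it != wm.end(); ) {
    auto k = osd_kb.find(it->first);
    if (k == osd_kb.end() || k->second == 0) {
      it = wm.erase(it);       // out OSD: contributes no capacity
    } else {
      remaining += it->second;
      ++it;
    }
  }
  if (remaining > 0) {
    for (auto& [osd, w] : wm)
      w /= remaining;          // renormalize surviving weights
  }
}
```

With the example from above (10 OSDs, 9 out), only the one in-service OSD would survive, so the subsequent `MAX AVAIL` estimate would reflect just its capacity.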

Actions #1

Updated by Greg Farnum almost 3 years ago

  • Project changed from Ceph to RADOS
  • Category deleted (ceph cli)
