Bug #10257 (closed)

Ceph df doesn't report MAX AVAIL correctly when using rulesets and an OSD in a ruleset is down and out

Added by Xavier Trilla over 9 years ago. Updated about 9 years ago.

Status:
Resolved
Priority:
High
Category:
Monitor
Target version:
-
% Done:
0%
Source:
Community (user)
Tags:
Backport:
giant, firefly
Regression:
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

In our setup we have two rulesets, one for SSDs and one for HDDs. ceph df normally reports MAX AVAIL for each pool based on the OSDs in its ruleset, but when one of those OSDs is down and out it reports 0 instead of the real MAX AVAIL for the pools using that ruleset.
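
A minimal way to reproduce (a sketch, not from the original report; the OSD id and the service command are illustrative assumptions):

ceph osd pool get ssd-volumes-01 crush_ruleset   # confirm the pool uses ruleset 0 (pre-Luminous setting name)
sudo service ceph stop osd.0                     # stop one SSD OSD so it goes down (init-system specific)
ceph osd out 0                                   # mark the same OSD out
ceph df detail                                   # MAX AVAIL for pools on ruleset 0 now shows 0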

Command "ceph fs" output:

GLOBAL:
    SIZE       AVAIL      RAW USED   %RAW USED   OBJECTS
    28309G     22750G     5558G      19.63       757k
POOLS:
    NAME             ID    CATEGORY   USED    %USED   MAX AVAIL   OBJECTS   DIRTY   READ    WRITE
    ssd-volumes-01   118   -          5344M   0.02    0           1472      1472    4861k   5281k
    hdd-volumes-01   120   -          0       0       8200G       0         0       0       0

The ssd-volumes-01 pool uses ruleset 0 (it has one OSD down and out).
The hdd-volumes-01 pool uses ruleset 1 (all of its OSDs are up).
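
For context, the ruleset membership and OSD states behind this can be checked with standard commands (an illustrative sketch):

ceph osd crush rule dump    # lists each rule/ruleset and the CRUSH bucket it selects from
ceph osd tree               # shows the hierarchy with each OSD's up/down status and weight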


Related issues: 1 (0 open, 1 closed)

Related to Ceph - Bug #13840: Ceph Pools' MAX AVAIL is 0 if some OSDs' weight is 0 (Resolved, Loïc Dachary, 11/20/2015)
