Bug #36531

'MAX AVAIL' in 'ceph df' showing wrong information

Added by ceph ceph over 5 years ago. Updated almost 5 years ago.

Status: Closed
Priority: Normal
Assignee: -
Category: -
Target version:
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 5 - suggestion
Reviewed:
Affected Versions:
ceph-qa-suite: ceph-deploy
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

I have a Ceph cluster running with 18 OSDs and have created 3 pools with the replicated profile. However, the MAX AVAIL column in 'ceph df' is showing wrong information: the size reported is not my pool size, but almost 10% of the overall size of each pool.

# ceph df

GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED 
    15 TiB     15 TiB       18 GiB          0.12 
POOLS:
    NAME            ID     USED     %USED     MAX AVAIL     OBJECTS 
    480GB_HDD       4      36 B         0       141 GiB           4 
    1TB_HDD         7      36 B         0       1.2 TiB           4 
    SSD_HDD_HDD     8      36 B         0       295 GiB           4 

Actual sizes of the pools:
480GB_HDD = 1440GB
1TB_HDD = 12TB
SSD_HDD_HDD = 3TB
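
The reported figures bear out the "almost 10%" observation. A quick check in Python (assuming the pool sizes quoted above are decimal GB/TB while 'ceph df' reports binary GiB/TiB):

    # Compare each pool's reported MAX AVAIL against the capacity quoted above.
    GIB, TIB = 1024 ** 3, 1024 ** 4
    GB, TB = 1000 ** 3, 1000 ** 4

    pools = {
        # name: (MAX AVAIL from 'ceph df', expected pool capacity from the report)
        "480GB_HDD":   (141 * GIB, 1440 * GB),
        "1TB_HDD":     (1.2 * TIB,   12 * TB),
        "SSD_HDD_HDD": (295 * GIB,    3 * TB),
    }

    for name, (max_avail, capacity) in pools.items():
        print(f"{name:12s} MAX AVAIL = {max_avail / capacity:.1%} of quoted capacity")
    # Each pool comes out at roughly 10-11%, matching the complaint.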

#1

Updated by John Spray over 5 years ago

  • Description updated (diff)
#2

Updated by John Spray over 5 years ago

  • Severity changed from 2 - major to 5 - suggestion

Hard to say whether this is a bug or not without more information. If you had e.g. size=10 pools, this would be correct output. You might like to add the output of "ceph osd pool ls detail" and "ceph osd tree".

There are a couple of threads on the ceph-users list at the moment about understanding the output of df; you might find them instructive.
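
For context on the size=10 remark: the per-pool MAX AVAIL is roughly the raw space the pool's CRUSH rule can reach (limited by its fullest OSD) divided by the number of replicas, so the replica size reported by "ceph osd pool ls detail" is what determines whether ~10% is the expected output. A rough sketch of that relationship, using the 480GB_HDD capacity from the description and hypothetical replica sizes:

    GIB = 1024 ** 3

    def approx_max_avail(raw_avail_bytes, replica_size):
        # Rough model only: MAX AVAIL ~= raw space usable by the pool divided by
        # the number of copies of each object (ignores full ratios and imbalance).
        return raw_avail_bytes / replica_size

    raw_480 = 1440 * 1000 ** 3                    # 1440 GB quoted for 480GB_HDD
    print(approx_max_avail(raw_480, 3) / GIB)     # ~447 GiB if size=3
    print(approx_max_avail(raw_480, 10) / GIB)    # ~134 GiB if size=10, near the 141 GiB shown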

#3

Updated by Greg Farnum over 5 years ago

  • Project changed from Ceph to mgr
#4

Updated by Jan Fajerski almost 5 years ago

  • Status changed from New to Closed

No feedback.
