Bug #36531
Status: Closed
'MAX AVAIL' in 'ceph df' showing wrong information
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 5 - suggestion
Reviewed:
Affected Versions:
ceph-qa-suite: ceph-deploy
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
I have a Ceph cluster running with 18 OSDs, and I have created 3 pools with the replicated profile. But MAX AVAIL is showing wrong information, because the size reported is not my pool size. It shows only about 10% of the overall size of the pool.
# ceph df
GLOBAL:
    SIZE    AVAIL   RAW USED  %RAW USED
    15 TiB  15 TiB  18 GiB    0.12
POOLS:
    NAME         ID  USED  %USED  MAX AVAIL  OBJECTS
    480GB_HDD    4   36 B  0      141 GiB    4
    1TB_HDD      7   36 B  0      1.2 TiB    4
    SSD_HDD_HDD  8   36 B  0      295 GiB    4
Actual sizes of the pools:
480GB_HDD = 1440GB
1TB_HDD = 12TB
SSD_HDD_HDD = 3TB
Updated by John Spray over 5 years ago
- Severity changed from 2 - major to 5 - suggestion
Hard to say whether this is a bug or not without more information. If you had e.g. size=10 pools, this would be correct output. You might like to add the output of "ceph osd pool ls detail" and "ceph osd tree".
There are a couple of threads on the ceph-users mailing list at the moment about understanding the output of df; you might find them instructive.
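For context, MAX AVAIL in `ceph df` is not the pool's nominal raw capacity: it is an estimate of how much more client data can be written, derived from the free space on the OSDs the pool maps to, divided by the pool's replication size. The sketch below is a simplified illustration of that idea, not Ceph's actual implementation (the real calculation also weights OSDs by CRUSH weight and accounts for the full ratio); the function name and equal-weight assumption are ours.

```python
def replicated_max_avail(osd_free_bytes, pool_size):
    """Rough estimate of 'MAX AVAIL' for a replicated pool.

    Simplified model: new writes are projected to spread evenly across
    the OSDs that can serve the pool, so the pool is effectively full
    as soon as its least-free OSD fills up.  With equal CRUSH weights,
    usable raw space is bounded by the smallest free value times the
    OSD count, and the client-visible figure is that raw space divided
    by the number of replicas (pool_size).
    """
    usable_raw = min(osd_free_bytes) * len(osd_free_bytes)
    return usable_raw // pool_size

# Example: six OSDs with 400 GiB free each, in a size=3 pool ->
# 2400 GiB raw, so 800 GiB of client-visible space.
GiB = 1 << 30
print(replicated_max_avail([400 * GiB] * 6, 3) // GiB)
```

Under this model, a single nearly-full OSD drags MAX AVAIL down for every pool that maps to it, which is one common reason the figure looks much smaller than the pool's nominal capacity.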