Bug #47429: ceph df used % and max avail numbers very wrong

Added by Anonymous over 3 years ago. Updated over 3 years ago.

Status: Need More Info
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: -
Tags: -
Backport: -
Regression: No
Severity: 2 - major
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

This is about my hdd class; we have only 2 pools, both on HDDs.

CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
hdd    2.0 PiB  359 TiB  1.6 PiB  1.6 PiB   82.35

POOL         ID  STORED   OBJECTS  USED     %USED  MAX AVAIL
cephfs_data   1  519 TiB  881.86M  1.6 PiB  94.42  32 TiB
archive      13  15 TiB   24.33M   27 TiB   21.71  65 TiB

Now I don't understand why my cephfs is running full despite having 359 TiB available. cephfs_data has a target ratio of 0.9, but that only affects the autoscaler, no?
The archive pool's %USED number makes no sense either.
This is major because at 95% full the cephfs pool will stop I/O.
Version: 14.2.10
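
For reference, a rough check of how these numbers might relate. The formula below is my assumption of how ceph df derives a pool's %USED (it is not confirmed against the source), and the 3x replication on cephfs_data is inferred from 519 TiB STORED vs 1.6 PiB USED:

# Rough sanity check of the reported %USED figures, assuming
#   %USED ~= used / (used + max_avail * replication_factor)
# and that cephfs_data is 3x replicated.

def pct_used(used_tib, max_avail_tib, replicas):
    """Approximate %USED under the assumed formula (inputs in TiB)."""
    return 100 * used_tib / (used_tib + max_avail_tib * replicas)

# cephfs_data: 1.6 PiB used, 32 TiB max avail -> ~94.5, close to the reported 94.42
print(round(pct_used(1.6 * 1024, 32, 3), 2))

# archive: the same arithmetic gives ~12.2, nowhere near the reported
# 21.71 -- which is part of what looks wrong here.
print(round(pct_used(27, 65, 3), 2))

If that reading is right, MAX AVAIL is already net of replication and of the fullest OSD's headroom, which would explain why it is so much smaller than the raw 359 TiB AVAIL.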

#1

Updated by Neha Ojha over 3 years ago

CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
hdd    2.0 PiB  359 TiB  1.6 PiB  1.6 PiB   82.35

POOL         ID  STORED   OBJECTS  USED     %USED  MAX AVAIL
cephfs_data   1  519 TiB  881.86M  1.6 PiB  94.42  32 TiB
archive      13  15 TiB   24.33M   27 TiB   21.71  65 TiB
#2

Updated by Neha Ojha over 3 years ago

  • Status changed from New to Need More Info

Can you please provide the output of "ceph df detail -f json-pretty"?
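
For anyone pulling the relevant per-pool fields out of that output, a minimal sketch (the stats field names here, stored / bytes_used / percent_used / max_avail, assume the Nautilus 14.2.x ceph df JSON layout; verify against the actual output before relying on them):

import json
import subprocess

# Extract the per-pool numbers from `ceph df detail -f json-pretty`.
# The stats field names are assumed from the Nautilus (14.2.x) schema.
raw = subprocess.check_output(["ceph", "df", "detail", "-f", "json-pretty"])
df = json.loads(raw)

for pool in df["pools"]:
    stats = pool["stats"]
    print(pool["name"],
          "stored:", stats.get("stored"),
          "used:", stats.get("bytes_used"),
          "%used:", round(100 * stats.get("percent_used", 0.0), 2),
          "max_avail:", stats.get("max_avail"))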
