Bug #47429
ceph df USED % and MAX AVAIL numbers very wrong
Description
This is about my hdd class; we have only 2 pools, both on HDDs.
CLASS    SIZE       AVAIL      USED       RAW USED    %RAW USED
hdd      2.0 PiB    359 TiB    1.6 PiB    1.6 PiB     82.35

POOL            ID    STORED      OBJECTS     USED       %USED    MAX AVAIL
cephfs_data      1    519 TiB     881.86M     1.6 PiB    94.42    32 TiB
archive         13     15 TiB      24.33M      27 TiB    21.71    65 TiB
Now I don't understand why my cephfs pool is running full despite 359 TiB being available. cephfs_data has a target ratio of 0.9, but that only affects the autoscaler, no?
The archive pool's %USED number makes no sense either.
This is major because at 95% full the cephfs pool will stop I/O.
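A rough sanity check of the reported figures. This assumes (not confirmed by the ticket) that for a replicated pool, Ceph derives a pool's %USED approximately as STORED / (STORED + MAX AVAIL), since USED and MAX AVAIL both already account for replication:

```python
# Sanity-check the pool %USED figures from the ceph df output above.
# Assumption: %USED ~ STORED / (STORED + MAX AVAIL) for a replicated pool.

def pct_used(stored_tib: float, max_avail_tib: float) -> float:
    """Approximate pool %USED from STORED and MAX AVAIL (both in TiB)."""
    return 100.0 * stored_tib / (stored_tib + max_avail_tib)

print(f"cephfs_data: {pct_used(519, 32):.2f}%")  # ceph df reports 94.42
print(f"archive:     {pct_used(15, 65):.2f}%")   # ceph df reports 21.71
```

Under this approximation, cephfs_data comes out near the reported 94.42%, while archive comes out noticeably lower than the reported 21.71%, consistent with the complaint that the archive numbers look wrong.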
Version: v14.2.10
Updated by Neha Ojha over 3 years ago
CLASS    SIZE       AVAIL      USED       RAW USED    %RAW USED
hdd      2.0 PiB    359 TiB    1.6 PiB    1.6 PiB     82.35

POOL            ID    STORED      OBJECTS     USED       %USED    MAX AVAIL
cephfs_data      1    519 TiB     881.86M     1.6 PiB    94.42    32 TiB
archive         13     15 TiB      24.33M      27 TiB    21.71    65 TiB
Updated by Neha Ojha over 3 years ago
- Status changed from New to Need More Info
Can you please provide the output of "ceph df detail -f json-pretty"?