Bug #41929
Inconsistent reporting of STORED/USED in ceph df
Status: Closed
Description
ceph df reports inconsistent values in the STORED and USED columns (not %USED). Notice how the cephfs.storage.data pool in the listing below reports 67 TiB STORED and 221 TiB USED. That ratio of about 3.3 corresponds to the 3x replication factor plus roughly 0.3 of allocation overhead (it's a lot of small files, unfortunately).
The RGW data pool is an erasure-coded pool with k=6, m=4 and reports 426 TiB and 659 TiB, respectively. That ratio of about 1.55 is roughly in line with the minimum expected factor of (k+m)/k = 10/6 ≈ 1.67.
However, the RGW bucket index pool XXX.rgw.buckets.index as well as the CephFS metadata pool cephfs.storage.meta report approximately the same value for both STORED and USED despite having a replication factor of 5 (!). I get the same result even after changing the replication to 3x (and back to 5x).
POOLS:
    POOL                      ID    STORED     OBJECTS    USED      %USED    MAX AVAIL
    XXX.rgw.otp               49    0 B        0          0 B       0        2.8 PiB
    .rgw.root                 63    28 KiB     36         11 MiB    0        1.7 PiB
    XXX.rgw.control           80    0 B        8          0 B       0        1.7 PiB
    XXX.rgw.log               81    161 B      210        384 KiB   0        2.8 PiB
    XXX.rgw.buckets.non-ec    84    4.2 MiB    21         7.0 MiB   0        2.8 PiB
    XXX.rgw.buckets.data      100   426 TiB    112.34M    659 TiB   7.11     5.6 PiB
    XXX.rgw.buckets.index     101   285 MiB    140        285 MiB   0        1.7 PiB
    XXX.rgw.meta              102   21 KiB     66         20 MiB    0        1.7 PiB
    cephfs.storage.ganesha    107   0 B        1          0 B       0        2.8 PiB
    cephfs.storage.meta       110   179 GiB    40.11M     180 GiB   0        1.7 PiB
    cephfs.storage.data       112   67 TiB     164.70M    221 TiB   2.50     2.8 PiB
    cephfs.storage.data.ec    113   14 GiB     108.79k    73 GiB    0        5.6 PiB
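The overhead reasoning above can be expressed as a small check: for a replicated pool, USED should be at least replicas × STORED; for an erasure-coded pool, at least (k+m)/k × STORED. The following is a minimal standalone Python sketch (the helper names are mine, not part of Ceph; pool figures are copied from the listing above):

```python
def expected_factor(k=1, m=0, replicas=None):
    """Minimum USED/STORED factor: 'replicas' for a replicated pool,
    (k + m) / k for an erasure-coded one."""
    if replicas is not None:
        return float(replicas)
    return (k + m) / k

TIB = 2 ** 40
GIB = 2 ** 30
MIB = 2 ** 20

# (pool name, STORED bytes, USED bytes, minimum expected USED/STORED)
pools = [
    ("XXX.rgw.buckets.data",  426 * TIB, 659 * TIB, expected_factor(k=6, m=4)),
    ("XXX.rgw.buckets.index", 285 * MIB, 285 * MIB, expected_factor(replicas=5)),
    ("cephfs.storage.meta",   179 * GIB, 180 * GIB, expected_factor(replicas=5)),
    ("cephfs.storage.data",    67 * TIB, 221 * TIB, expected_factor(replicas=3)),
]

for name, stored, used, expect in pools:
    ratio = used / stored
    flag = "ok" if ratio >= expect else "SUSPICIOUS"
    print(f"{name:25s} ratio={ratio:5.2f} expected>={expect:.2f} {flag}")
```

Running this flags XXX.rgw.buckets.index and cephfs.storage.meta immediately: their USED/STORED ratio is about 1.0 where a factor of 5 is expected, which is exactly the inconsistency reported here.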