Bug #20870: OSD compression: incorrect display of the used disk space
Status:
Closed
% Done:
0%
Backport:
mimic, luminous
Regression:
No
Severity:
3 - minor
Description
Hi,
I tested bluestore OSD compression with:
/etc/ceph/ceph.conf
bluestore_compression_mode = aggressive
bluestore_compression_algorithm = lz4
and
ceph osd pool set rbd compression_algorithm snappy
ceph osd pool set rbd compression_mode aggressive
ceph osd pool set rbd compression_required_ratio 0.2
All pools are using a replication of 3.
I got the following output from ceph df:
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    46560G     44726G        1833G          3.94
POOLS:
    NAME                    ID     USED       %USED     MAX AVAIL     OBJECTS
    rbd                     0      675G        4.65        13859G      173627
    .rgw.root               1      1077           0        13859G           4
    default.rgw.control     2         0           0        13859G           8
    default.rgw.meta        3         0           0        13859G           0
    default.rgw.log         4         0           0        13859G         287
    cephfs_metadata         5      47592k         0        13859G          34
    cephfs_data             6      289M           0        13859G          80
Cluster was healthy. (no PG partially synced)
675G * 3 should give a RAW USED size of 2025G, not 1833G.
Same here (another example):
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    46560G     44058G        2501G          5.37
POOLS:
    NAME                    ID     USED       %USED     MAX AVAIL     OBJECTS
    rbd                     0      675G        4.75        13525G      173627
    .rgw.root               1      1077           0        13525G           4
    default.rgw.control     2         0           0        13525G           8
    default.rgw.meta        3         0           0        13525G           0
    default.rgw.log         4         0           0        13525G         287
    cephfs_metadata         5      44661k         0        13525G          79
    cephfs_data             6      329G        2.38        13525G       87369
Same here: (675G + 329G) * 3 should give 3012G of RAW used space, not 2501G.
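The arithmetic behind both examples can be sketched as follows (pool sizes taken from the two ceph df outputs above; the small pools are negligible at this scale):

```python
def expected_raw_gib(pool_used_gib, replication=3):
    """With N-way replication, RAW USED should be N times the
    sum of the per-pool USED values (ignoring overhead)."""
    return sum(pool_used_gib) * replication

# First output: only rbd (675G) is significant.
exp1 = expected_raw_gib([675])       # 2025
gap1 = exp1 - 1833                   # 192G unaccounted for

# Second output: rbd (675G) + cephfs_data (329G).
exp2 = expected_raw_gib([675, 329])  # 3012
gap2 = exp2 - 2501                   # 511G unaccounted for

print(exp1, gap1)  # 2025 192
print(exp2, gap2)  # 3012 511
```

The gap in each case is presumably the space saved by compression, which ceph df currently folds into RAW USED without reporting it separately.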
I used this behaviour to calculate the compression win I got with snappy, but I think the output of ceph df should reflect the real space used.
Or maybe a new column should be added with the real/compressed space usage?
(And maybe a second one with the compression win percentage :D )
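For what it's worth, per-OSD compression numbers do seem to be exposed via the admin socket (ceph daemon osd.N perf dump); the counter names below are from a Luminous-era BlueStore and may differ between releases, and the sample values are made up for illustration:

```python
import json

# Illustrative perf-dump fragment (values are invented, not from this cluster).
sample = json.loads("""
{
  "bluestore": {
    "bluestore_compressed": 1000000000,
    "bluestore_compressed_allocated": 1500000000,
    "bluestore_compressed_original": 3000000000
  }
}
""")

bs = sample["bluestore"]
original = bs["bluestore_compressed_original"]    # logical bytes before compression
allocated = bs["bluestore_compressed_allocated"]  # bytes actually allocated on disk

ratio = allocated / original
print("compression ratio: %.2f" % ratio)          # 0.50
print("space saved: %.1f%%" % (100 * (1 - ratio)))  # 50.0%
```

Summing these counters across OSDs would be one way to compute the "compression win" column suggested above.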
Many thanks!