Bug #9035
ceph cluster is using more space than actual data after replication
Description
Ceph cluster is using more space than estimated to store the data after replication.
Total cluster capacity is 5920GB and I have written 1500GB of data. With a replication count of 3, it should use 4500GB of space to store the 1500GB of data.
But it's showing 4988GB used.
What would this extra 488GB of data be?
Details:
The cluster has 8 SSDs, each with 800GB raw capacity.
After formatting with XFS, total capacity of the 8 OSDs = 740GB * 8 = 5920GB.
Replication count is 3.
Created 6 RBD images, each 250GB in size.
Total capacity of the 6 RBDs together = 250GB * 6 = 1500GB.
As the replication count is 3, total used space on the cluster should be 1500GB * 3 = 4500GB.
But I am seeing total capacity used as 4988GB.
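For clarity, the arithmetic above can be checked with a short sketch. All figures are the ones reported in this ticket; the source of the extra usage is exactly what the question asks about:

```python
# Capacity arithmetic from this report (all figures in GB).
raw_per_ssd_gb = 800
usable_per_osd_gb = 740   # per-OSD capacity after XFS formatting, as reported
num_osds = 8
replication = 3
rbd_size_gb = 250
num_rbds = 6

total_usable_gb = usable_per_osd_gb * num_osds      # 740 * 8 = 5920
data_written_gb = rbd_size_gb * num_rbds            # 250 * 6 = 1500
expected_used_gb = data_written_gb * replication    # 1500 * 3 = 4500

observed_used_gb = 4988                             # as reported by the cluster
unexplained_gb = observed_used_gb - expected_used_gb
print(unexplained_gb)  # 488
```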
Please let me know any specific logs required.
History
#1 Updated by Sage Weil over 9 years ago
- Status changed from New to Closed
- Assignee deleted (Sage Weil)
The "used" figure is simply the sum of the statfs(2) results from all the OSDs. You can see this by doing a df on the OSD volumes, or by looking at 'ceph pg dump osds -f json-pretty'.
The extra space is a combination of XFS overhead (inodes, metadata), leveldb, and cluster metadata stored in the current/meta directory on the OSDs.
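A minimal sketch of how that "used" number comes about: it is a sum of per-filesystem used space (what statfs(2)/statvfs reports) over the OSD data volumes. The mount paths below are hypothetical examples, not taken from this cluster:

```python
import os

def total_used_bytes(mount_points):
    """Sum filesystem 'used' space over the given mounts, the same way
    the cluster-wide 'used' figure sums statfs(2) results across OSDs."""
    used = 0
    for path in mount_points:
        st = os.statvfs(path)
        # Used space = total blocks minus free blocks, scaled to bytes.
        used += (st.f_blocks - st.f_bfree) * st.f_frsize
    return used

# Hypothetical OSD data mounts; on a real node they might look like
# /var/lib/ceph/osd/ceph-0, /var/lib/ceph/osd/ceph-1, ...
# print(total_used_bytes(["/var/lib/ceph/osd/ceph-0"]))
```

Because this counts everything on the filesystem, it includes XFS metadata, leveldb, and the current/meta directory contents alongside the replicated object data, which is where the extra ~488GB comes from.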