Bug #18647
ceph df output with erasure coded pools
Status: Closed
Description
I have 2 clusters with erasure coded pools. Since I upgraded to Jewel, the ceph df output shows erroneous data for the EC pools. The pools are setup with default EC settings of 3/2. The output is showing twice as much %used as it should as if it were calculating it for a replica pool with size 3. The used amount is correct for the pools.
Additionally, one of the clusters shows the correct 'MAX AVAIL' for both replicated and EC pools, while the other shows numbers that just don't make sense for either. I only currently have access to the cluster with a valid 'MAX AVAIL'; here is a copy of its output. It has 2 EC pools (rbd and cephfs_data), and the total %USED across the cluster's pools adds up to over 100%. The rest of the pools are replicated with size 3.
$ ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    44679G     17736G     26943G       60.30
POOLS:
    NAME                ID     USED       %USED     MAX AVAIL     OBJECTS
    rbd                 1      8796G      48.36     9394G         1067213
    rbd-cache           2      16283      0         4697G         78
    rbd-replica         4      731G       13.47     4697G         191877
    cephfs_metadata     6      31775k     0         4697G         10961
    cephfs_data         7      7617G      44.78     9394G         1985736
    cephfs_cache        8      1726M      0.04      4697G         502
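From the figures above, the reported %USED appears to equal USED / (USED + MAX AVAIL) for each pool, with no adjustment for EC overhead. A quick sanity check (pool figures copied from the output above, treated as gigabytes):

```python
# (USED, MAX AVAIL, reported %USED) copied from the ceph df output above.
pools = {
    "rbd":         (8796, 9394, 48.36),
    "rbd-replica": (731,  4697, 13.47),
    "cephfs_data": (7617, 9394, 44.78),
}
for name, (used, max_avail, reported) in pools.items():
    computed = round(100 * used / (used + max_avail), 2)
    print(f"{name}: computed {computed} vs reported {reported}")
```

All three computed values match the reported ones, which is consistent with %USED being derived from USED and MAX AVAIL alone rather than from the pool's actual raw footprint.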
Updated by David Turner about 7 years ago
This still seems to be an issue in Jewel. I am able to use the n and k settings of the erasure-code profile to determine the actual %USED for an EC pool, but that means hacking fixes into my scripts, which will break when this is actually fixed in Ceph.
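The workaround described above can be sketched as follows. This is a minimal illustration, not the commenter's actual script; `ec_raw_used` is a hypothetical helper, and it assumes the k (data chunks) and m (coding chunks) values are read from the pool's erasure-code profile:

```python
def ec_raw_used(logical_used, k, m):
    """Raw space consumed by an erasure-coded pool: each object is
    split into k data chunks plus m coding chunks, so the on-disk
    footprint is (k + m) / k times the logical data size.
    (Hypothetical helper; k and m are assumed to come from the
    pool's erasure-code profile.)"""
    return logical_used * (k + m) / k

# With k=3, m=2 the overhead factor is 5/3 (~1.67x), versus 3x for a
# size-3 replicated pool, so 300G of logical data occupies 500G raw.
print(ec_raw_used(300, 3, 2))  # 500.0
```

Scaling the pool's USED figure by this factor before comparing against raw capacity gives a usage percentage that reflects EC overhead instead of replicated overhead.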
Updated by Greg Farnum almost 7 years ago
- Project changed from Ceph to RADOS
- Category changed from ceph cli to Administration/Usability
- Priority changed from Normal to Urgent
- Component(RADOS) Monitor, ceph cli added
Let's verify this prior to Luminous and write a test for it!
Updated by David Turner almost 7 years ago
Is it possible to backport this into Jewel?
Updated by Nathan Cutler almost 7 years ago
First I would need to know the PR numbers or SHA1 hashes of the commits that fix the issue in master.