Bug #18647
ceph df output with erasure coded pools
Status: Closed
Description
I have two clusters with erasure coded pools. Since upgrading to Jewel, the ceph df output has shown erroneous data for the EC pools. The pools are set up with the default EC settings of 3/2. The %USED column shows roughly twice the correct value, as if it were calculated for a replicated pool of size 3. The USED amount for the pools is correct.
Additionally, one of the clusters shows the correct MAX AVAIL for both replicated and EC pools, while the other shows numbers that make no sense for either. I currently only have access to the cluster with a valid MAX AVAIL; here is a copy of its output. It has two EC pools (rbd and cephfs_data), and the %USED column for the cluster sums to over 100%. The rest of the pools are replicated with size 3.
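A rough sketch of the suspected miscalculation (not Ceph source code; the k=3/m=2 reading of "3/2", the capacity figure, and the definition of %USED as raw space consumed over raw capacity are all assumptions for illustration):

```python
# Sketch: %USED for an EC pool if the replica-style space multiplier is
# applied instead of the erasure-code overhead. All numbers illustrative.

def pct_used(stored_bytes, raw_capacity_bytes, overhead):
    """%USED as raw space consumed by the pool over raw cluster capacity."""
    return 100.0 * stored_bytes * overhead / raw_capacity_bytes

K, M = 3, 2                    # assumed reading of the "3/2" EC profile
ec_overhead = (K + M) / K      # 5/3: each stored byte costs ~1.67 raw bytes
replica_overhead = 3           # what a size-3 replicated pool would cost

stored = 8796                  # GiB stored (the rbd pool's USED above)
raw = 30000                    # hypothetical raw capacity, for illustration

correct = pct_used(stored, raw, ec_overhead)
buggy = pct_used(stored, raw, replica_overhead)
ratio = buggy / correct        # 3 / (5/3) = 1.8, i.e. roughly double
```

If ceph df uses the pool size (3) rather than the EC overhead (5/3), the reported %USED is 1.8x too high, consistent with "twice as much %used" in the report.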
$ ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    44679G     17736G     26943G       60.30
POOLS:
    NAME                ID     USED       %USED     MAX AVAIL     OBJECTS
    rbd                 1      8796G      48.36     9394G         1067213
    rbd-cache           2      16283      0         4697G         78
    rbd-replica         4      731G       13.47     4697G         191877
    cephfs_metadata     6      31775k     0         4697G         10961
    cephfs_data         7      7617G      44.78     9394G         1985736
    cephfs_cache        8      1726M      0.04      4697G         502
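To make the over-100% claim concrete, summing the %USED column from the output above (values copied verbatim) already exceeds 100, driven by the two EC pools whose percentages appear doubled:

```python
# Sum the %USED column from the ceph df output above.
pct_used = {
    "rbd": 48.36,             # EC pool
    "rbd-cache": 0.0,
    "rbd-replica": 13.47,
    "cephfs_metadata": 0.0,
    "cephfs_data": 44.78,     # EC pool
    "cephfs_cache": 0.04,
}
total = sum(pct_used.values())   # ~106.65, over 100% as reported
```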