Bug #18647

ceph df output with erasure coded pools

Added by David Turner over 7 years ago. Updated almost 7 years ago.

Status: Resolved
Priority: Urgent
Assignee: -
Category: Administration/Usability
Target version: -
% Done: 0%
Source:
Tags: erasure coded, ceph df, jewel
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS): Monitor, ceph cli
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

I have 2 clusters with erasure-coded pools. Since I upgraded to Jewel, the ceph df output shows erroneous data for the EC pools. The pools are set up with the default EC settings of 3/2. The output shows twice as much %USED as it should, as if it were being calculated for a replicated pool with size 3. The USED amount for the pools is correct.
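
For reference, a minimal sketch of the overhead arithmetic behind that claim (assuming the default Jewel profile of k=2 data chunks plus m=1 coding chunk, i.e. 3 chunks total; illustrative Python, not ceph code):

# Raw-space multipliers, assuming k=2 data + m=1 coding chunks
# (the default "3/2" erasure-code profile referenced above).
k, m = 2, 1
ec_multiplier = (k + m) / k   # 1.5x raw overhead for EC 2+1
replica_multiplier = 3        # raw overhead of a size-3 replicated pool

# If df charges an EC pool as though it were replicated with size 3,
# its %USED is inflated by exactly this factor:
print(replica_multiplier / ec_multiplier)  # 2.0 -> "twice as much %USED"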

Additionally, one of the clusters shows the correct 'MAX AVAIL' for both replicated and EC pools, while the other shows numbers that don't make sense for either. I currently only have access to the cluster with a valid 'MAX AVAIL'; here is a copy of its output. It has 2 EC pools (rbd and cephfs_data), and the sum of %USED across the pools is over 100%. The rest of the pools are replicated with size 3.

$ ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    44679G     17736G       26943G         60.30
POOLS:
    NAME                ID     USED       %USED     MAX AVAIL     OBJECTS
    rbd                 1       8796G     48.36         9394G     1067213
    rbd-cache           2       16283         0         4697G          78
    rbd-replica         4        731G     13.47         4697G      191877
    cephfs_metadata     6      31775k         0         4697G       10961
    cephfs_data         7       7617G     44.78         9394G     1985736
    cephfs_cache        8       1726M      0.04         4697G         502
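
As a quick consistency check, the per-pool numbers above line up with %USED being computed as USED / (USED + MAX AVAIL). The following sketch reproduces the reported percentages from the values pasted above (illustrative Python with values copied from the output, not ceph code):

# Check the pasted output against the hypothesis that per-pool %USED
# is computed as USED / (USED + MAX AVAIL), normalized here to GiB.
pools = {
    # name:          (USED_GiB,    reported_%USED, MAX_AVAIL_GiB)
    "rbd":           (8796,        48.36,          9394),
    "rbd-replica":   (731,         13.47,          4697),
    "cephfs_data":   (7617,        44.78,          9394),
    "cephfs_cache":  (1726 / 1024, 0.04,           4697),  # 1726M -> GiB
}

for name, (used, reported, avail) in pools.items():
    derived = 100.0 * used / (used + avail)
    print(f"{name:15} reported={reported:6.2f} derived={derived:6.2f}")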