Bug #21258: "ceph df"'s MAX AVAIL is not correct

Added by Chang Liu over 6 years ago. Updated over 6 years ago.

Status: Closed
Priority: Normal
% Done: 0%
Regression: No
Severity: 3 - minor

Description

GLOBAL:
    SIZE      AVAIL     RAW USED     %RAW USED 
    1559T     1034T         525T         33.68 
POOLS:
    NAME                                    ID     USED       %USED     MAX AVAIL     OBJECTS  
    rbd                                     0           0         0          193T            0 
    .rgw.root                               1        2937         0          193T            5 
    default.rgw.control                     2           0         0          193T            8 
    default.rgw.data.root                   3        4310         0          193T           14 
    default.rgw.gc                          4           0         0          193T           32 
    default.rgw.lc                          5           0         0          193T           32 
    default.rgw.log                         6           0         0          193T          128 
    default.rgw.users.uid                   7        1036         0          193T            6 
    default.rgw.users.keys                  8          31         0          193T            3 
    default.rgw.buckets.index               9           0         0          193T          112 
    default.rgw.buckets.data.deprecated     10       101G      0.05          193T        41431 
    default.rgw.buckets.non-ec              11          0         0          193T          385 
    default.rgw.buckets.data                12     14319G      3.12          434T     19418597
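
For reference on the arithmetic: with 1034T raw AVAIL and an assumed 3x replicated pool, a naive per-pool estimate would be roughly 1034T / 3 ≈ 345T, yet most pools report a MAX AVAIL of 193T; pool 12 (default.rgw.buckets.data) reports 434T, which suggests it maps through a different crush rule or replication/EC profile than the other pools.
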
#1

Updated by Josh Durgin over 6 years ago

  • Project changed from Ceph to RADOS

What are your crushmap and device sizes? It looks like you may have different roots, hence different space available in different pools.

#2

Updated by Josh Durgin over 6 years ago

  • Status changed from New to Fix Under Review
#3

Updated by Chang Liu over 6 years ago

Josh Durgin wrote:

What are your crushmap and device sizes? It looks like you may have different roots, hence different space available in different pools.

I don't think there are different roots. For a replicated pool, the available size should be "global available size" / 3 * "full_ratio".

        {   
            "rule_id": 0,
            "rule_name": "replicated_ruleset",
            "ruleset": 0,
            "type": 1,
            "min_size": 1,
            "max_size": 10,
            "steps": [
                {   
                    "op": "take",
                    "item": -1,
                    "item_name": "default" 
                },
                {   
                    "op": "chooseleaf_firstn",
                    "num": 0,
                    "type": "host" 
                },
                {   
                    "op": "emit" 
                }
            ]
        },
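
As a rough illustration of the naive formula above versus what ceph df reports (a minimal sketch in Python, not Ceph's actual code; the helper name projected_max_avail, the toy OSD numbers, and the 0.95 full_ratio default are assumptions): the naive estimate divides raw free space by the replica count, while the reported MAX AVAIL is also limited by the fullest OSD reachable through the pool's crush rule, so an imbalanced cluster can report well under raw avail / 3.

    # Illustrative sketch only -- not Ceph's implementation.
    # Compares the naive estimate from the comment above with a
    # fullest-OSD-limited projection of per-pool MAX AVAIL.

    raw_avail_tib = 1034   # AVAIL from the GLOBAL section in the description
    replicas = 3           # assumed pool size for the replicated pools
    full_ratio = 0.95      # assumed mon_osd_full_ratio default

    # Naive expectation from the comment: raw free / replicas * full_ratio.
    naive = raw_avail_tib / replicas * full_ratio
    print(f"naive estimate: {naive:.0f} TiB")  # ~327 TiB, not the 193T reported

    def projected_max_avail(osds, replicas, full_ratio):
        """osds: list of (size_tib, used_tib, crush_weight) tuples.
        Projects how much pool data can be written before the first OSD
        reaches full_ratio, assuming writes land proportionally to crush
        weight. Hypothetical helper, for illustration only."""
        total_weight = sum(w for _, _, w in osds)
        writable_raw = min(
            (size * full_ratio - used) * total_weight / w
            for size, used, w in osds if w > 0
        )
        return writable_raw / replicas

    # Toy 3-OSD cluster: one OSD noticeably fuller than the rest drags
    # the projection well below the naive raw-avail / replicas figure.
    osds = [(10.0, 3.0, 1.0), (10.0, 3.0, 1.0), (10.0, 6.0, 1.0)]
    print(f"projected: {projected_max_avail(osds, replicas, full_ratio):.1f} TiB")
    # toy naive: 18 TiB free / 3 * 0.95 = 5.7; projected: 3.5 -- the
    # fullest OSD limits the pool first.

On this reading, 193T versus the naive ~327T would point at device utilization or weight imbalance under the rule above rather than a simple accounting error.
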
#4

Updated by Chang Liu over 6 years ago

  • Status changed from Fix Under Review to Closed