Bug #21925


cluster capacity is much smaller than it should be

Added by Jing Li over 6 years ago. Updated over 6 years ago.

Status:
Need More Info
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Hi all,

I built a Ceph cluster on a single host, with 1 mon and 3 OSDs. One OSD was created on a file path, and two OSDs were created on two separate HDD partitions (1.8T and 1.9T).

Cluster health status is good, but rados df shows the total capacity is 30G.

# rados df
POOL_NAME USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD   WR_OPS WR   
backups      0       0      0      0                  0       0        0      0    0      0    0 
vms          0       0      0      0                  0       0        0      0    0      0    0 
volumes    363      11      0     11                  0       0        0   2411 303k     14 8192 

total_objects    11
total_used       3234M
total_avail      27485M
total_space      30720M
# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED 
    30720M     27485M        3234M         10.53 
POOLS:
    NAME        ID     USED     %USED     MAX AVAIL     OBJECTS 
    volumes     4       363         0        16878M          11 
    vms         5         0         0        16878M           0 
    backups     6         0         0        16878M           0
# ceph osd status
+----+------------+-------+-------+--------+---------+--------+---------+
| id |    host    |  used | avail | wr ops | wr data | rd ops | rd data |
+----+------------+-------+-------+--------+---------+--------+---------+
| 0  | controller | 1135M | 9602M |    0   |     0   |    0   |     0   |
| 1  |            | 1120M | 9616M |    0   |     0   |    0   |     0   |
| 2  |            | 1136M | 9601M |    0   |     0   |    0   |     0   |
+----+------------+-------+-------+--------+---------+--------+---------+

Can anyone tell me what the problem might be? Thanks in advance~
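
For context, the report does not include the commands used to create the OSDs. A rough sketch of what a Luminous-era ceph-disk workflow for this layout might have looked like is below; the directory path and partition device names are assumptions, not taken from the report.

# ceph-disk prepare --bluestore /var/lib/ceph/osd-dir    # OSD backed by a directory (path assumed)
# ceph-disk prepare --bluestore /dev/sdb1                # OSD on the 1.8T HDD partition (device name assumed)
# ceph-disk prepare --bluestore /dev/sdc1                # OSD on the 1.9T HDD partition (device name assumed)

Each prepared data path would then be brought up with 'ceph-disk activate'.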
#1

Updated by jianpeng ma over 6 years ago

Can you show the output of "ceph osd df"?

#2

Updated by Sage Weil over 6 years ago

  • Status changed from New to Need More Info

The OSD partitions are probably small. 'ceph osd df' and a regular 'df' on the OSD host(s) will help.
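
For reference, the checks being asked for are standard commands (no output was posted, so none is shown here); lsblk is added as a common companion for checking partition sizes:

# ceph osd df     # per-OSD size, raw use, and weight as the cluster sees it
# df -h           # filesystem sizes as the OSD host sees them
# lsblk           # block device and partition sizes

Comparing the cluster's view with the host's view usually shows whether an OSD was created on a smaller device or file than intended.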

#3

Updated by Jing Li over 6 years ago

Hi jianpeng, Sage,

Thanks for your attention. It seems that BlueStore doesn't support a partitioned block device or a directory used as an OSD. When I use a full disk as the OSD, everything works fine.

Is this by design, or is it a bug that only a full disk can be used as an OSD with BlueStore?
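
For comparison, a whole-disk BlueStore OSD (the setup that ended up working) can be created with ceph-volume; a minimal sketch, with the device name assumed:

# ceph-volume lvm create --bluestore --data /dev/sdb    # consumes the whole disk for one BlueStore OSD

The ~10G reported per OSD above is consistent with BlueStore falling back to a file-backed "block" of the default bluestore_block_size (10 GiB) when the data path is treated as a directory rather than a raw device, though the report does not confirm this was the cause.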

