Bug #19281 (closed): luminous - bluestore - ceph df incorrect size
Status: Can't reproduce
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Description
Luminous v12.0.0, all OSDs on BlueStore.
Total: 27 x 2TB disks and 6 x 800GB SSDs.
The newly installed cluster shows an incorrect total size for the disks:
ceph df
GLOBAL:
    SIZE     AVAIL    RAW USED    %RAW USED
    360G     358G     1497M       0.41
POOLS:
    NAME                     ID    USED    %USED    MAX AVAIL    OBJECTS
    rbd                      0     0       0        119G         0
    .rgw.root                1     1766    0        119G         4
    default.rgw.control      2     0       0        119G         8
    default.rgw.data.root    3     0       0        119G         0
    default.rgw.gc           4     0       0        119G         32
    default.rgw.lc           5     0       0        119G         32
    default.rgw.log          6     0       0        119G         128
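As a side note, the identical MAX AVAIL value on every pool is consistent with the usable free space divided by the pool's replication factor. A minimal sketch of that arithmetic, assuming the default replicated pool size of 3 (the report does not state the pool sizes explicitly):

```python
# Rough check: per-pool MAX AVAIL is roughly the cluster's free space
# divided by the pool's replication factor (size=3 is an assumption).
cluster_avail_gib = 358          # AVAIL from the `ceph df` GLOBAL section
replica_size = 3                 # assumed replicated pool size

max_avail = cluster_avail_gib / replica_size
print(f"{max_avail:.0f}G")       # matches the 119G shown for every pool
```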
The OSD weights also look incorrect:
ceph osd tree
ID  WEIGHT    TYPE NAME        UP/DOWN  REWEIGHT  PRIMARY-AFFINITY
-1  0.35266   root default
-2  0.11755       host ceph01
 0  0.00980           osd.0         up   1.00000           1.00000
 1  0.00980           osd.1         up   1.00000           1.00000
 2  0.00980           osd.2         up   1.00000           1.00000
 9  0.00980           osd.9         up   1.00000           1.00000
10  0.00980           osd.10        up   1.00000           1.00000
...
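The reported SIZE and the weights tell the same story: CRUSH weights are expressed in TiB, so each OSD appears to have registered at roughly 10 GiB instead of its full device capacity. A back-of-the-envelope sketch using the numbers from the output above:

```python
# CRUSH weights are in TiB, so the per-OSD weight converts back to a size.
TIB = 2**40

osd_weight = 0.00980             # weight shown for every OSD above
root_weight = 0.35266            # weight of the "default" root

osd_gib = osd_weight * TIB / 2**30
total_gib = root_weight * 1024

print(f"per-OSD size:  {osd_gib:.1f} GiB")    # ~10 GiB, not 2 TB / 800 GB
print(f"cluster size:  {total_gib:.0f} GiB")  # ~361 GiB, matching SIZE 360G
```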
I can repair this through the CRUSH map, but how do I calculate the usable data space of a BlueStore disk from its total capacity?
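For the CRUSH-map repair, the intended weight is just the raw device capacity expressed in TiB, which can then be applied with `ceph osd crush reweight`. A sketch of that calculation; the OSD numbering and the decimal (vendor) TB/GB sizes are assumptions for illustration:

```python
# Compute CRUSH weights (TiB) from raw device capacities and print the
# corresponding reweight commands. The osd IDs here are hypothetical.
TIB = 2**40

devices = {
    "osd.0": 2 * 10**12,    # 2 TB spinner, assuming decimal (vendor) TB
    "osd.30": 800 * 10**9,  # 800 GB SSD, assuming decimal GB
}

for name, size_bytes in devices.items():
    weight = size_bytes / TIB
    print(f"ceph osd crush reweight {name} {weight:.5f}")
```

This only corrects the CRUSH placement weights; it does not change what the OSDs themselves report to `ceph df`, which is why the underlying size question still stands.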