Bug #19281 (closed): luminous - bluestore - ceph df incorrect size

Added by Petr Malkov about 7 years ago. Updated almost 7 years ago.

Status: Can't reproduce
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

luminous v12.0.0

All OSDs are BlueStore;
27 × 2 TB disks and 6 × 800 GB SSDs in total.

The newly installed cluster shows an incorrect size for the disks:

ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
360G 358G 1497M 0.41
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
rbd 0 0 0 119G 0
.rgw.root 1 1766 0 119G 4
default.rgw.control 2 0 0 119G 8
default.rgw.data.root 3 0 0 119G 0
default.rgw.gc 4 0 0 119G 32
default.rgw.lc 5 0 0 119G 32
default.rgw.log 6 0 0 119G 128

I also see incorrect weights:
ceph osd tree

ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.35266 root default
-2 0.11755 host ceph01
0 0.00980 osd.0 up 1.00000 1.00000
1 0.00980 osd.1 up 1.00000 1.00000
2 0.00980 osd.2 up 1.00000 1.00000
9 0.00980 osd.9 up 1.00000 1.00000
10 0.00980 osd.10 up 1.00000 1.00000
...

I can repair this through the crushmap, but how do I calculate the data space on a BlueStore disk from its total capacity?
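
As a rough cross-check, assuming CRUSH weights follow the usual convention of being the raw device size in TiB, the weights above convert directly to the capacity ceph df is reporting; a minimal sketch:

<pre>
# Rough cross-check (illustrative sketch, not from the original report):
# CRUSH weights are conventionally the device size in TiB, so the weights
# shown in `ceph osd tree` above imply the capacity the cluster sees.
GIB = 1024 ** 3
TIB = 1024 ** 4

def weight_to_gib(weight):
    """Convert a CRUSH weight (interpreted as TiB) to GiB."""
    return weight * TIB / GIB

print(weight_to_gib(0.00980))   # ~10 GiB per OSD
print(weight_to_gib(0.35266))   # ~361 GiB for the whole tree, i.e. the 360G SIZE in ceph df
</pre>

So the 360G total is consistent with each OSD currently carrying a weight of roughly 10 GiB rather than the full device size.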

#1 - Updated by Petr Malkov about 7 years ago

The crushmap has been changed:

ceph osd tree
-2 2.21999 host ceph01-ssd
0 0.73999 osd.0 up 1.00000 1.00000
1 0.73999 osd.1 up 1.00000 1.00000
2 0.73999 osd.2 up 1.00000 1.00000
...
-7 16.20000 host ceph03-hdd
27 1.79999 osd.27 up 1.00000 1.00000
28 1.79999 osd.28 up 1.00000 1.00000
29 1.79999 osd.29 up 1.00000 1.00000
30 1.79999 osd.30 up 1.00000 1.00000

ceph df still shows the same:

GLOBAL:
SIZE AVAIL RAW USED %RAW USED
360G 357G 3066M 0.83
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
rbd 0 0 0 91223M 0
.rgw.root 1 1766 0 91223M 4
#2 - Updated by Sage Weil almost 7 years ago

My guess is you have a very small disk and the minimum size of the bluefs (rocksdb) portion of the device is skewing things. How big are your devices?

The crush weight and the reported available space are not directly related except that on osd creation the crush weight is initialized to the device size.
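
To illustrate that relationship (a minimal sketch; the byte counts below are typical vendor sizes for "2 TB" and "800 GB" devices, not values taken from this cluster):

<pre>
# Sketch: expected CRUSH weight for a raw device, assuming the weight is
# initialized to the device size expressed in TiB (2^40 bytes).
TIB = 2 ** 40

def expected_weight(size_bytes):
    return size_bytes / TIB

# Typical "2 TB" HDD (~2.0e12 bytes) -> weight ~1.819, close to the
# 1.81898 entries for the 1862G devices in the listing below.
print(round(expected_weight(2_000_398_934_016), 5))

# Typical "800 GB" SSD (~8.0e11 bytes) -> weight ~0.728.
print(round(expected_weight(800_000_000_000), 5))
</pre>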

I've checked my environment and the reported size matches. For example:

<pre>
ID WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS TYPE NAME
-1 84.11198 - 84227G 25747G 58479G 30.57 1.00 - root default
-4 24.78899 - 24435G 8765G 15670G 35.87 1.17 - host cpach
9 0.93100 1.00000 953G 280G 673G 29.42 0.96 13 osd.9
10 0.93100 0 0 0 0 0 0 0 osd.10
11 0.93100 1.00000 953G 437G 516G 45.88 1.50 15 osd.11
12 0.93100 1.00000 953G 612G 341G 64.20 2.10 22 osd.12
13 0.93100 1.00000 953G 627G 326G 65.76 2.15 28 osd.13
14 0.93100 1.00000 953G 295G 657G 31.02 1.01 11 osd.14
16 0.93100 1.00000 953G 457G 496G 47.92 1.57 19 osd.16
17 0.93100 1.00000 953G 440G 513G 46.16 1.51 16 osd.17
18 0.93100 1.00000 953G 408G 545G 42.82 1.40 16 osd.18
0 0.93100 1.00000 953G 130G 822G 13.73 0.45 8 osd.0
1 0.92599 1.00000 948G 248G 699G 26.25 0.86 14 osd.1
2 1.81898 1.00000 1862G 782G 1080G 42.01 1.37 20 osd.2
3 1.81898 1.00000 1862G 580G 1282G 31.13 1.02 17 osd.3
4 1.81898 1.00000 1862G 675G 1187G 36.26 1.19 19 osd.4
5 1.81898 1.00000 1862G 459G 1403G 24.65 0.81 12 osd.5
6 1.81898 1.00000 1862G 640G 1222G 34.36 1.12 19 osd.6
7 1.81898 1.00000 1862G 510G 1352G 27.41 0.90 14 osd.7
8 1.81898 1.00000 1862G 371G 1491G 19.93 0.65 14 osd.8
15 1.81898 1.00000 1862G 806G 1056G 43.30 1.42 24 osd.15
</pre>

#3 - Updated by Sage Weil almost 7 years ago

  • Status changed from New to Can't reproduce