Bug #14525

Why can't an SSD cache pool improve RBD performance?

Added by chuanhong wang about 8 years ago. Updated about 8 years ago.

Status:
Rejected
Priority:
Normal
Assignee:
Category:
-
Target version:
-
% Done:

0%

Source:
other
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Software: CentOS 7 + ceph-0.94.5 + OpenStack firefly
In my Ceph cluster, the pool "volumes" is used by OpenStack Cinder. To improve performance, I added a cache pool "ssd-pool", whose data is stored on SSDs, as a cache tier for "volumes", but the performance actually became worse.
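
For reference, the usual way to attach such a tier on hammer is roughly the sequence below (a sketch; my exact invocation may have differed):

ceph osd tier add volumes ssd-pool
ceph osd tier cache-mode ssd-pool writeback
ceph osd tier set-overlay volumes ssd-pool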

The test results are:

                before cache (IOPS)   after adding SSD cache (IOPS)
4K randread     24919                 17569
4K randwrite    2763                  1949
512K randread   2259                  2133
512K randwrite  529                   401
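
These numbers come from random read/write benchmarks against an RBD image. A minimal sketch of the 4K randread case, assuming fio with the rbd ioengine (the image name, iodepth and runtime are placeholders, not my exact job):

fio --name=4k-randread --ioengine=rbd --clientname=admin \
    --pool=volumes --rbdname=test-image \
    --rw=randread --bs=4k --iodepth=32 --numjobs=1 \
    --runtime=60 --time_based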

My cluster info is:

[root@ceph102 ~]# ceph osd dump |grep pool
pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
pool 1 'volumes' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 3309 lfor 3309 flags hashpspool tiers 4 read_tier 4 write_tier 4 stripe_width 0
pool 2 'images' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 2416 flags hashpspool stripe_width 0
pool 3 'vms' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 1824 flags hashpspool stripe_width 0
pool 4 'ssd-pool' replicated size 3 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 64 pgp_num 64 last_change 3310 flags hashpspool,incomplete_clones tier_of 1 cache_mode writeback hit_set bloom{false_positive_probability: 0.05, target_size: 0, seed: 0} 0s x0 stripe_width 0
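
Note that in the dump above the tier's hit_set shows "0s x0", i.e. hit_set_period and hit_set_count are both 0. For reference, the knobs that are normally set on a writeback tier look like the following (a sketch only; the values are illustrative placeholders, not tuned recommendations):

ceph osd pool set ssd-pool hit_set_count 1
ceph osd pool set ssd-pool hit_set_period 3600
ceph osd pool set ssd-pool target_max_bytes 300000000000
ceph osd pool set ssd-pool cache_target_dirty_ratio 0.4
ceph osd pool set ssd-pool cache_target_full_ratio 0.8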

[root@ceph102 ~]# ceph osd crush rule dump
[ {
"rule_id": 0,
"rule_name": "replicated_ruleset",
"ruleset": 0,
"type": 1,
"min_size": 1,
"max_size": 10,
"steps": [ {
"op": "take",
"item": -1,
"item_name": "default"
}, {
"op": "chooseleaf_firstn",
"num": 0,
"type": "host"
}, {
"op": "emit"
}
]
}, {
"rule_id": 1,
"rule_name": "ssd-rule",
"ruleset": 1,
"type": 1,
"min_size": 1,
"max_size": 10,
"steps": [ {
"op": "take",
"item": -5,
"item_name": "ssd-root"
}, {
"op": "choose_firstn",
"num": 0,
"type": "osd"
}, {
"op": "emit"
}
]
}
]
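
For reference, a rule like "ssd-rule" above (taking root "ssd-root" and choosing individual OSDs) can be created and assigned with something like the following sketch (it could equally have been added by editing the CRUSH map directly):

ceph osd crush rule create-simple ssd-rule ssd-root osd firstn
ceph osd pool set ssd-pool crush_ruleset 1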

[root@ceph102 ~]# ceph osd df tree
ID WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR TYPE NAME
-5 1.07996 - 1117G 660G 456G 59.15 1.10 root ssd-root
30 0.35999 1.00000 372G 220G 152G 59.15 1.10 osd.30
31 0.35999 1.00000 372G 220G 152G 59.15 1.10 osd.31
32 0.35999 1.00000 372G 220G 152G 59.15 1.10 osd.32
-1 8.09967 - 8377G 4459G 3918G 53.23 0.99 root default
-2 2.69989 - 2792G 1481G 1311G 53.03 0.98 host ceph104
1 0.26999 1.00000 279G 149G 130G 53.37 0.99 osd.1
4 0.26999 1.00000 279G 138G 141G 49.50 0.92 osd.4
7 0.26999 1.00000 279G 148G 131G 53.00 0.98 osd.7
10 0.26999 1.00000 279G 146G 132G 52.58 0.98 osd.10
14 0.26999 1.00000 279G 131G 147G 47.25 0.88 osd.14
17 0.26999 1.00000 279G 149G 129G 53.62 0.99 osd.17
20 0.26999 1.00000 279G 174G 104G 62.65 1.16 osd.20
23 0.26999 1.00000 279G 139G 139G 50.12 0.93 osd.23
25 0.26999 1.00000 279G 151G 127G 54.19 1.00 osd.25
27 0.26999 1.00000 279G 150G 128G 54.06 1.00 osd.27
-3 2.69989 - 2792G 1489G 1303G 53.32 0.99 host ceph103
2 0.26999 1.00000 279G 144G 134G 51.92 0.96 osd.2
5 0.26999 1.00000 279G 159G 119G 57.25 1.06 osd.5
9 0.26999 1.00000 279G 134G 144G 48.29 0.90 osd.9
12 0.26999 1.00000 279G 147G 131G 52.81 0.98 osd.12
16 0.26999 1.00000 279G 154G 124G 55.24 1.02 osd.16
19 0.26999 1.00000 279G 133G 145G 47.83 0.89 osd.19
22 0.26999 1.00000 279G 131G 148G 46.91 0.87 osd.22
24 0.26999 1.00000 279G 152G 126G 54.72 1.01 osd.24
28 0.26999 1.00000 279G 152G 126G 54.76 1.02 osd.28
0 0.26999 1.00000 279G 177G 101G 63.50 1.18 osd.0
-4 2.69989 - 2792G 1489G 1303G 53.33 0.99 host ceph102
3 0.26999 1.00000 279G 127G 151G 45.76 0.85 osd.3
6 0.26999 1.00000 279G 153G 125G 54.99 1.02 osd.6
8 0.26999 1.00000 279G 152G 126G 54.66 1.01 osd.8
11 0.26999 1.00000 279G 137G 141G 49.28 0.91 osd.11
13 0.26999 1.00000 279G 177G 101G 63.68 1.18 osd.13
15 0.26999 1.00000 279G 140G 139G 50.14 0.93 osd.15
18 0.26999 1.00000 279G 144G 135G 51.61 0.96 osd.18
21 0.26999 1.00000 279G 152G 126G 54.63 1.01 osd.21
26 0.26999 1.00000 279G 159G 119G 57.14 1.06 osd.26
29 0.26999 1.00000 279G 143G 135G 51.44 0.95 osd.29
TOTAL 9495G 5120G 4374G 53.93
MIN/MAX VAR: 0.85/1.18 STDDEV: 4.57
[root@ceph102 ~]#

[root@ceph102 ~]# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
9495G 4374G 5120G 53.93
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
rbd 0 303G 3.20 1014G 120040
volumes 1 2010G 21.17 1014G 735455
images 2 57340M 0.59 1014G 7184
vms 3 25544M 0.26 1014G 3291
ssd-pool 4 216G 2.28 152G 69380
[root@ceph102 ~]#
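
To see how much promotion/flush traffic the tier generates while the benchmark runs, the per-pool I/O can be watched with the standard commands (sketch):

ceph osd pool stats ssd-pool
ceph osd pool stats volumes
rados df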


#1

Updated by Samuel Just about 8 years ago

  • Status changed from New to Rejected

A bug report isn't really a good forum for this. I suggest you take this data and post it to ceph-devel; people will be very interested!
