h1. Benchmark Ceph Cluster Performance
One of the most common questions we hear is "How do I check if my cluster is running at maximum performance?". Wonder no more - in this guide, we'll walk you through some tools you can use to benchmark your Ceph cluster.
 
*NOTE*: The ideas in this article are based on "Sébastien Han's blog post":http://www.sebastien-han.fr/blog/2012/08/26/ceph-benchmarks/, "TelekomCloud's blog post":https://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html and input from Ceph developers and engineers.
h3. Get Baseline Performance Statistics
Fundamentally, benchmarking is all about comparison. You won't know if your Ceph cluster is performing below par unless you first identify its maximum possible performance. So, before you start benchmarking your cluster, you need to obtain baseline performance statistics for the two main components of your Ceph infrastructure: your disks and your network.
h4. Benchmark Your Disks
The simplest way to benchmark your disk is with _dd_. Use the following command to write a file, remembering to add the _oflag_ parameter to bypass the disk page cache:
@shell> dd if=/dev/zero of=here bs=1G count=1 oflag=direct@
!{width:50%}image1.png!
Note the last statistic provided, which indicates disk performance in MB/sec. Perform this test for each disk in your cluster, noting the results.
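For a read baseline, you can also read the file back with the page cache bypassed. The command below is just a sketch: it reuses the _here_ file written above and assumes it is still present on the disk being tested.

@shell> dd if=here of=/dev/null bs=1G count=1 iflag=direct@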
h4. Benchmark Your Network
Another key factor affecting Ceph cluster performance is network throughput. A good tool for this is "_iperf_":https://iperf.fr/, which uses a client-server connection to measure TCP and UDP bandwidth.
 
You can install _iperf_ using _apt-get install iperf_ or _yum install iperf_.
_iperf_ needs to be installed on at least two nodes in your cluster. Then, on one of the nodes, start the _iperf_ server using the following command:
@shell> iperf -s@
On another node, start the client with the following command, remembering to use the IP address of the node hosting the _iperf_ server:
@shell> iperf -c 192.168.1.1@
!{width:50%}image2.png!
Note the bandwidth statistic in Mbits/sec, as this indicates the maximum throughput supported by your network.
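If a single stream doesn't saturate your links (common with bonded NICs or 10GbE), you can run several client streams in parallel and for a longer interval. The flags below are standard _iperf_ options, and the IP address is the same placeholder as above.

@shell> iperf -c 192.168.1.1 -P 4 -t 30@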
 
Now that you have some baseline numbers, you can start benchmarking your Ceph cluster to see if it's giving you similar performance. Benchmarking can be performed at different levels: you can perform low-level benchmarking of the storage cluster itself, or you can perform higher-level benchmarking of the key interfaces, such as block devices and object gateways. The following sections discuss each of these approaches.
 
NOTE: Before running any of the benchmarks in subsequent sections, drop all caches using a command like this:
@shell> echo 3 | sudo tee /proc/sys/vm/drop_caches && sudo sync@
h3. Benchmark a Ceph Storage Cluster
Ceph includes the _rados bench_ command, designed specifically to benchmark a RADOS storage cluster. To use it, create a storage pool and then use _rados bench_ to perform a write benchmark, as shown below.
 
The _rados_ command is included with Ceph.
@shell> ceph osd pool create scbench 100 100
shell> rados bench -p scbench 10 write --no-cleanup@
!{width:50%}image3.png!
This creates a new pool named 'scbench' and then performs a write benchmark for 10 seconds. Notice the _--no-cleanup_ option, which leaves the benchmark data in place so the read benchmarks below have something to read. The output gives you a good indicator of how fast your cluster can write data.
Two types of read benchmarks are available: seq for sequential reads and rand for random reads. To perform a read benchmark, use the commands below:
@shell> rados bench -p scbench 10 seq
shell> rados bench -p scbench 10 rand@
!{width:50%}image4.png!
You can also add the _-t_ parameter to increase the concurrency of reads and writes (defaults to 16 threads), or the _-b_ parameter to change the size of the object being written (defaults to 4 MB). It's also a good idea to run multiple copies of this benchmark against different pools, to see how performance changes with multiple clients.
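For instance, a write benchmark using 32 concurrent operations and 1 MB objects might look like this (the values are purely illustrative):

@shell> rados bench -p scbench 10 write -t 32 -b 1048576 --no-cleanup@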
Once you have the data, you can begin comparing the cluster read and write statistics with the disk-only benchmarks performed earlier, identify how much of a performance gap exists (if any), and start looking for reasons.
You can clean up the benchmark data left behind by the write benchmark with this command:
@shell> rados -p scbench cleanup@
h3. Benchmark a Ceph Block Device
If you're a fan of Ceph block devices, there are two tools you can use to benchmark their performance. Ceph already includes the _rbd bench_ command, but you can also use the popular I/O benchmarking tool "_fio_":http://git.kernel.dk/?p=fio.git;a=summary, which now comes with built-in support for RADOS block devices.
The _rbd_ command is included with Ceph. RBD support in _fio_ is relatively new, so you will need to download it from its repository and then compile and install it with _./configure && make && make install_. Note that you must install the librbd development package (_apt-get install librbd-dev_ or _yum install librbd-devel_) before compiling _fio_ in order to activate its RBD support.
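A typical build sequence might look like the following sketch; the clone URL is an assumption based on the repository linked above, so adjust it for your environment.

@shell> git clone git://git.kernel.dk/fio.git
shell> cd fio
shell> ./configure && make && sudo make install@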
Before using either of these two tools, though, create a block device using the commands below:
@shell> ceph osd pool create rbdbench 100 100
shell> rbd create image01 --size 1024 --pool rbdbench
shell> sudo rbd map image01 --pool rbdbench --name client.admin
shell> sudo /sbin/mkfs.ext4 -m0 /dev/rbd/rbdbench/image01
shell> sudo mkdir /mnt/ceph-block-device
shell> sudo mount /dev/rbd/rbdbench/image01 /mnt/ceph-block-device@
The _rbd bench-write_ command generates a series of sequential writes to the image and measures the write throughput and latency. Here's an example:
@shell> rbd bench-write image01 --pool=rbdbench@
!{width:50%}image5.png!
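_rbd bench-write_ also accepts options such as _--io-size_, _--io-threads_, _--io-total_ and _--io-pattern_ to shape the workload; here's an illustrative sketch with assumed values:

@shell> rbd bench-write image01 --pool=rbdbench --io-size 4096 --io-threads 16 --io-total 1073741824 --io-pattern seq@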
Or, you can use _fio_ to benchmark your block device. An example _rbd.fio_ template, which performs a 4K random write test against a RADOS block device via librbd, is included with the _fio_ source code. Note that you will need to update the template with the correct names for your pool and device, as shown below.
@[global]
ioengine=rbd
clientname=admin
pool=rbdbench
rbdname=image01
rw=randwrite
bs=4k
[rbd_iodepth32]
iodepth=32@
Then, run _fio_ as follows:
@shell> fio examples/rbd.fio@
!{width:50%}image6.png!
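To measure random reads instead of writes, you can copy the template and change the _rw_ parameter (a standard _fio_ job option); everything else stays the same:

@[global]
ioengine=rbd
clientname=admin
pool=rbdbench
rbdname=image01
# rw=randread turns the same job into a random read test
rw=randread
bs=4k
[rbd_iodepth32]
iodepth=32@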
h3. Benchmark a Ceph Object Gateway
When it comes to benchmarking the Ceph object gateway, look no further than _swift-bench_, the benchmarking tool included with OpenStack Swift. The _swift-bench_ tool tests the performance of your Ceph cluster by simulating client PUT and GET requests and measuring their performance.
 
You can install _swift-bench_ using _pip install swift && pip install swift-bench_.
To use _swift-bench_, you need to first create a gateway user and subuser, as shown below:
@shell> sudo radosgw-admin user create --uid="benchmark" --display-name="benchmark"
shell> sudo radosgw-admin subuser create --uid=benchmark --subuser=benchmark:swift --access=full
shell> sudo radosgw-admin key create --subuser=benchmark:swift --key-type=swift --secret=guessme
shell> radosgw-admin user modify --uid=benchmark --max-buckets=0@
Next, create a configuration file for swift-bench on a client host, as below. Remember to update the authentication URL to reflect that of your Ceph object gateway and to use the correct user name and credentials.
@[bench]
auth = http://gateway-node/auth/v1.0
user = benchmark:swift
key = guessme
auth_version = 1.0@
You can now run a benchmark as below. Use the _-c_ parameter to adjust the number of concurrent connections (this example uses 64) and the _-s_ parameter to adjust the size of the object being written (this example uses 4K objects). The _-n_ and _-g_ parameters control the number of objects to PUT and GET respectively.
@shell> swift-bench -c 64 -s 4096 -n 1000 -g 100 /tmp/swift.conf@
!image7.png!
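It's worth repeating the run with different object sizes and concurrency levels to see how gateway throughput scales; for example, 1 MB objects with 128 concurrent connections (illustrative values):

@shell> swift-bench -c 128 -s 1048576 -n 1000 -g 100 /tmp/swift.conf@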
Although _swift-bench_ measures performance in objects/sec, it's easy enough to convert this into MB/sec by multiplying by the size of each object (see the quick calculation after the list below). However, you should be wary of comparing this directly with the baseline disk performance statistics you obtained earlier, since a number of other factors also influence these statistics, such as:
* the level of replication (and its latency overhead)
* full data journal writes (offset in some situations by journal data coalescing)
* fsync on the OSDs to guarantee data safety
* metadata overhead for keeping data stored in RADOS
* latency overhead (network, Ceph, etc.), which makes readahead more important
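As a quick example of the conversion mentioned above: at 4 KB per object, 1000 objects/sec works out to roughly 3.9 MB/sec.

@shell> echo "scale=2; 1000 * 4096 / 1048576" | bc   # ~3.9 MB/sec for 1000 objects/sec at 4 KB each@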
 
TIP: When it comes to object gateway performance, there's no hard and fast rule you can use to easily improve performance. In some cases, Ceph engineers have been able to obtain better-than-baseline performance using clever caching and coalescing strategies, whereas in other cases, object gateway performance has been lower than disk performance due to latency, fsync and metadata overhead.
h3. Conclusion
A number of tools are available to benchmark a Ceph cluster at different levels: disk, network, cluster, block device and object gateway. You should now have some insight into how to approach the benchmarking process and begin generating performance data for your cluster. Good luck!