h1. Benchmark Ceph Cluster Performance

One of the most common questions we hear is "How do I check if my cluster is running at maximum performance?". Wonder no more - in this guide, we'll walk you through some tools you can use to benchmark your Ceph cluster.

*NOTE*: The ideas in this article are based on "Sebastien Han's blog post":http://www.sebastien-han.fr/blog/2012/08/26/ceph-benchmarks/, "TelekomCloud's blog post":https://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html and inputs from Ceph developers and engineers.

h3. Get Baseline Performance Statistics

Fundamentally, benchmarking is all about comparison. You won't know if your Ceph cluster is performing below par unless you first identify what its maximum possible performance is. So, before you start benchmarking your cluster, you need to obtain baseline performance statistics for the two main components of your Ceph infrastructure: your disks and your network.

h4. Benchmark Your Disks

The simplest way to benchmark your disk is with dd. Use the following command to write a file, remembering to add the oflag parameter to bypass the disk page cache (a read test follows below):

@shell> dd if=/dev/zero of=here bs=1G count=1 oflag=direct@

!image1.png!

Note the last statistic provided, which indicates disk performance in MB/sec. Perform this test for each disk in your cluster, noting the results.
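
To get a read baseline as well, you can read the same file back with dd. This is a minimal sketch, assuming the 1 GB file named @here@ from the previous command is still present; @iflag=direct@ bypasses the page cache on reads the same way @oflag=direct@ does on writes:

@shell> dd if=here of=/dev/null bs=1G count=1 iflag=direct@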

h4. Benchmark Your Network

Another key factor affecting Ceph cluster performance is network throughput. A good tool for this is iperf, which uses a client-server connection to measure TCP and UDP bandwidth.

You can install iperf using @apt-get install iperf@ or @yum install iperf@.

iperf needs to be installed on at least two nodes in your cluster. Then, on one of the nodes, start the iperf server using the following command:

@shell> iperf -s@

On another node, start the client with the following command, remembering to use the IP address of the node hosting the iperf server:

@shell> iperf -c 192.168.1.1@

!image2.png!

Note the bandwidth statistic in Mbits/sec, as this indicates the maximum throughput supported by your network.
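
The commands above measure TCP bandwidth. If you also want a UDP baseline, a minimal sketch is to start the server with @-u@ and give the client a target rate; the 1000M figure below is only an illustrative value, not a recommendation:

@shell> iperf -s -u@
@shell> iperf -c 192.168.1.1 -u -b 1000M@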

Now that you have some baseline numbers, you can start benchmarking your Ceph cluster to see if it's giving you similar performance. Benchmarking can be performed at different levels: you can perform low-level benchmarking of the storage cluster itself, or you can perform higher-level benchmarking of the key interfaces, such as block devices and object gateways. The following sections discuss each of these approaches.

*NOTE*: Before running any of the benchmarks in subsequent sections, drop all caches using a command like this:

@shell> sudo echo 3 | sudo tee /proc/sys/vm/drop_caches && sudo sync@

h3. Benchmark a Ceph Storage Cluster

Ceph includes the rados bench command, designed specifically to benchmark a RADOS storage cluster. To use it, create a storage pool and then use rados bench to perform a write benchmark, as shown below.

The rados command is included with Ceph.

@shell> ceph osd pool create scbench 100 100@
@shell> rados bench -p scbench 10 write --no-cleanup@

!image3.png!

This creates a new pool named 'scbench' and then performs a write benchmark for 10 seconds. Notice the --no-cleanup option, which leaves behind some data. The output gives you a good indicator of how fast your cluster can write data.

Two types of read benchmarks are available: seq for sequential reads and rand for random reads. To perform a read benchmark, use the commands below:

@shell> rados bench -p scbench 10 seq@
@shell> rados bench -p scbench 10 rand@

!image4.png!

You can also add the -t parameter to increase the concurrency of reads and writes (defaults to 16 threads), or the -b parameter to change the size of the object being written (defaults to 4 MB), as in the example below. It's also a good idea to run multiple copies of this benchmark against different pools, to see how performance changes with multiple clients.
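
For instance, a minimal sketch of a heavier write test against the same scbench pool might use 8 MB objects and 32 concurrent operations (the -b value is given in bytes):

@shell> rados bench -p scbench 10 write -b 8388608 -t 32 --no-cleanup@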

Once you have the data, you can begin comparing the cluster read and write statistics with the disk-only benchmarks performed earlier, identify how much of a performance gap exists (if any), and start looking for reasons.

You can clean up the benchmark data left behind by the write benchmark with this command:

@shell> rados -p scbench cleanup@

h3. Benchmark a Ceph Block Device

If you're a fan of Ceph block devices, there are two tools you can use to benchmark their performance. Ceph already includes the rbd bench command, but you can also use the popular I/O benchmarking tool fio, which now comes with built-in support for RADOS block devices.

The rbd command is included with Ceph. RBD support in fio is relatively new, therefore you will need to download fio from its repository and then compile and install it using @configure && make && make install@. Note that you must install the librbd development package (@apt-get install librbd-dev@ on Debian/Ubuntu, or the corresponding librbd devel package on RPM-based systems) before compiling fio in order to activate its RBD support.
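
A minimal sketch of that build, assuming git and the usual build tools are already installed (the URL below is fio's public repository on GitHub):

@shell> git clone https://github.com/axboe/fio.git@
@shell> cd fio@
@shell> ./configure && make && sudo make install@

If librbd was found at configure time, the resulting fio binary will list rbd among its available I/O engines (@fio --enghelp@ shows them).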

Before using either of these two tools, though, create a block device using the commands below:

@shell> ceph osd pool create rbdbench 100 100@
@shell> rbd create image01 --size 1024 --pool rbdbench@
@shell> sudo rbd map image01 --pool rbdbench --name client.admin@
@shell> sudo /sbin/mkfs.ext4 -m0 /dev/rbd/rbdbench/image01@
@shell> sudo mkdir /mnt/ceph-block-device@
@shell> sudo mount /dev/rbd/rbdbench/image01 /mnt/ceph-block-device@

The rbd bench-write command generates a series of sequential writes to the image and measures the write throughput and latency. Here's an example:

@shell> rbd bench-write image01 --pool=rbdbench@

!image5.png!
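
You can shape this test with a few options. As a sketch, assuming a reasonably recent rbd client (option names may differ slightly between Ceph releases), the following issues 4K writes from 16 threads until 1 GB has been written:

@shell> rbd bench-write image01 --pool=rbdbench --io-size 4096 --io-threads 16 --io-total 1073741824 --io-pattern seq@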

Or, you can use fio to benchmark your block device. An example rbd.fio template is included with the fio source code, which performs a 4K random write test against a RADOS block device via librbd. Note that you will need to update the template with the correct names for your pool and device, as shown below.

<pre>
[global]
ioengine=rbd
clientname=client.admin
pool=rbdbench
rbdname=image01
rw=randwrite
bs=4k

[rbd_iodepth32]
iodepth=32
</pre>

Then, run fio as follows:

@shell> fio examples/rbd.fio@

!image6.png!
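
The same template also works for read tests. A minimal variation, assuming the same pool and image, just swaps the rw setting (@rw=randread@ for random reads, or @rw=read@ for sequential reads):

<pre>
[global]
ioengine=rbd
clientname=client.admin
pool=rbdbench
rbdname=image01
rw=randread
bs=4k

[rbd_iodepth32]
iodepth=32
</pre>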

h3. Benchmark a Ceph Object Gateway

When it comes to benchmarking the Ceph object gateway, look no further than swift-bench, the benchmarking tool included with OpenStack Swift. The swift-bench tool tests the performance of your Ceph cluster by simulating client PUT and GET requests and measuring their performance.

You can install swift-bench using @pip install swift && pip install swift-bench@.

To use swift-bench, you need to first create a gateway user and subuser, as shown below:

@shell> sudo radosgw-admin user create --uid="benchmark" --display-name="benchmark"@
@shell> sudo radosgw-admin subuser create --uid=benchmark --subuser=benchmark:swift --access=full@
@shell> sudo radosgw-admin key create --subuser=benchmark:swift --key-type=swift --secret=guessme@
@shell> radosgw-admin user modify --uid=benchmark --max-buckets=0@

Next, create a configuration file for swift-bench on a client host, as below. Remember to update the authentication URL to reflect that of your Ceph object gateway and to use the correct user name and credentials.

<pre>
[bench]
auth = http://gateway-node/auth/v1.0
user = benchmark:swift
key = guessme
auth_version = 1.0
</pre>

You can now run a benchmark as below. Use the -c parameter to adjust the number of concurrent connections (this example uses 64) and the -s parameter to adjust the size of the object being written (this example uses 4K objects). The -n and -g parameters control the number of objects to PUT and GET respectively.

@shell> swift-bench -c 64 -s 4096 -n 1000 -g 100 /tmp/swift.conf@

!image7.png!

Although swift-bench measures performance in number of objects/sec, it's easy enough to convert this into MB/sec by multiplying by the size of each object (for example, 250 PUTs/sec of 4 KB objects is roughly 1 MB/sec). However, you should be wary of comparing this directly with the baseline disk performance statistics you obtained earlier, since a number of other factors also influence these statistics, such as:

* the level of replication (and latency overhead)
* full data journal writes (offset in some situations by journal data coalescing)
* fsync on the OSDs to guarantee data safety
* metadata overhead for keeping data stored in RADOS
* latency overhead (network, ceph, etc), which makes readahead more important

*TIP*: When it comes to object gateway performance, there's no hard and fast rule you can use to easily improve performance. In some cases, Ceph engineers have been able to obtain better-than-baseline performance using clever caching and coalescing strategies, whereas in other cases, object gateway performance has been lower than disk performance due to latency, fsync and metadata overhead.

h3. Conclusion

There are a number of tools available to benchmark a Ceph cluster, at different levels: disk, network, cluster, device and gateway. You should now have some insight into how to approach the benchmarking process and begin generating performance data for your cluster. Good luck!