Bug #5099 (closed): io performance / ceph block device

Added by Khanh Nguyen Dang Quoc almost 11 years ago. Updated almost 11 years ago.

Status: Resolved
Priority: High
Assignee: -
Target version: -
% Done: 0%

Severity: 3 - minor

Description

ceph version 0.61.2

ceph -s

health HEALTH_OK
monmap e1: 2 mons at {a=ip1:6789/0,b=ip2:6789/0}, election epoch 34, quorum 0,1 a,b
osdmap e86: 2 osds: 2 up, 2 in
pgmap v2905: 776 pgs: 776 active+clean; 26000 MB data, 60642 MB used, 4400 GB / 4459 GB avail
mdsmap e63: 1/1/1 up {0=b=up:active}, 1 up:standby

rbd showmapped

id pool image snap device
1 data image - /dev/rbd1

+ When I run a benchmark with the command dd if=/dev/zero of=/dev/rbd1 bs=1M count=1000 oflag=direct
-> I get a result of about 90 MB/s
+ After changing to a larger block size, dd if=/dev/zero of=/dev/rbd1 bs=1G count=1 oflag=direct
-> the result is about 300 MB/s

But when I run a second copy of the same write command against the block device at the same time (see the sketch below):

dd if=/dev/zero of=/dev/rbd1 bs=1M count=1000 oflag=direct
-> each of the 2 tests reports about 89 MB/s

That means the total throughput on this block device doubles (89 * 2 MB/s for the 2 commands)...

I don't know why it behaves like that... can anyone please help me fix it? What is wrong here?
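A minimal sketch of how the two writers can be launched concurrently (assumed layout; the seek offset for the second writer is arbitrary and only keeps it from overwriting the first stream):

# first writer
dd if=/dev/zero of=/dev/rbd1 bs=1M count=1000 oflag=direct &
# second writer, offset by 1000 MiB so the two streams do not overlap
dd if=/dev/zero of=/dev/rbd1 bs=1M count=1000 seek=1000 oflag=direct &
# wait for both background dd processes to finish
wait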

#1 - Updated by Ian Colle almost 11 years ago

  • Target version deleted (v0.61 - Cuttlefish)
#2 - Updated by Sage Weil almost 11 years ago

  • Status changed from New to Resolved

This is because dd issues a single synchronous I/O at a time. Doing 2 dd procs means you have 2 I/Os in flight, so 2x as fast. The storage is not the bottleneck, just dd being stupid.
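One way to see the same effect without running several dd processes is a tool that keeps multiple I/Os in flight by itself. A sketch with fio (not mentioned in this thread; the option values are only examples):

# direct, sequential 1 MiB writes with 16 I/Os in flight via libaio
fio --name=rbdwrite --filename=/dev/rbd1 --rw=write --bs=1M --direct=1 --ioengine=libaio --iodepth=16 --size=1000M

Raising --iodepth (or --numjobs) should scale throughput the same way a second dd did, until the OSDs or the network become the real bottleneck.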

#3 - Updated by Khanh Nguyen Dang Quoc almost 11 years ago

dd if=/dev/zero of=/dev/rbd1 bs=1M count=1000 oflag=direct

But when I try the same command above on an NFS file system, I get a better result than on the Ceph block device: about 300 MB/s.

So I don't know whether there is any configuration that limits throughput per connection in Ceph...
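One way to check whether a Ceph-side per-connection throttle is involved is to benchmark the image with several concurrent threads from a single client; if throughput scales with the thread count, the limit is dd's single outstanding I/O rather than a Ceph setting. A sketch with rbd bench-write (flag names may differ between rbd releases, so treat this as an assumption):

# 1 MiB writes, 16 threads in flight, 1 GiB total, against image "image" in pool "data"
rbd bench-write image --pool data --io-size 1048576 --io-threads 16 --io-total 1073741824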
