Bug #804
Status: Closed
Read performance slow
Description
The Ceph cluster is configured and running with 64 OSD daemons (64 hard disks), 1 metadata server (MDS), and 1 monitor (MON) daemon. The 64 OSDs run on four data node servers, with 16 OSD daemons per server. The MDS and MON run on separate servers.
I configured cephx authentication between the nodes, and the journal on each OSD hard disk is configured as a file. The OSD journal size is 4096 MB.
Two clients have mounted Ceph under /mnt/ceph.
All systems run Ubuntu 10.10 64-bit. For the performance test we used large media files; a single file is 18 GB in size.
I ran read and write performance tests. I got 78 MB/s for writes, but only 28 MB/s for reads. Why is read performance so slow?
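For reference, a minimal sketch of the kind of sequential-throughput test described above, using dd (the file size and path here are illustrative; the actual test used an 18 GB media file under /mnt/ceph):

```shell
# Illustrative sequential write/read throughput test with dd.
# On the real cluster the target would be a file on the Ceph mount,
# e.g. /mnt/ceph/testfile; /tmp is used here only so the sketch runs anywhere.
TESTFILE=/tmp/ceph_perf_testfile

# Write test: stream 64 MB of zeros; conv=fsync forces the data to disk
# before dd reports throughput (the stats line goes to stderr).
dd if=/dev/zero of=$TESTFILE bs=1M count=64 conv=fsync 2>&1 | tail -n 1

# Read test: stream the file back to /dev/null. For a realistic number,
# the page cache should be dropped first (echo 3 > /proc/sys/vm/drop_caches
# as root), otherwise the read is served from local RAM.
dd if=$TESTFILE of=/dev/null bs=1M 2>&1 | tail -n 1

rm -f $TESTFILE
```

Note that without dropping caches between the write and the read, the read figure measures the client's memory, not the cluster.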
Please find my config file below:
[global]
auth supported = cephx
debug ms = 0
keyring = /etc/ceph/keyring.bin
[mon]
mon data = /mnt/data/mon$id
[mon0]
host = cephmonitor
mon addr = 10.1.1.100:6789
[mds]
debug mds = 1
keyring = /etc/ceph/keyring.$name
[mds0]
host = cephmonitor
[mds1]
host = cephmonitor01
[osd]
sudo = true
osd data = /mnt/osd$id/data
keyring = /etc/ceph/keyring.$name
osd journal = /mnt/osd$id/data/journal
osd journal size = 4096
debug osd = 1
debug filestore = 1
[osd0]
host = cephnode001
btrfs devs = /dev/sdb
....
[osd63]
host = cephnode102
btrfs devs = /dev/sdq
Please give me a solution.