Bug #10956 (closed)

upgrade from 0.80.7 to 0.80.8 makes read requests slow

Added by Wanyuan Yang about 9 years ago. Updated about 9 years ago.

Status: Resolved
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source: other
Severity: 3 - minor

Description

We use Ceph + OpenStack in our private cloud. I upgraded our CentOS 6.5 based cluster from Ceph Emperor to Ceph Firefly. With the 0.80.7 client the VMs' I/O is good, but after upgrading the client to 0.80.8 the VMs' read requests become slow.
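
Since the slowdown follows the client package version, a quick way to confirm which Ceph client build a hypervisor is actually running (a minimal sketch, assuming an RPM-based host such as the CentOS 6.5 nodes above):

# installed client libraries on the hypervisor
rpm -q librbd1 librados2
# version of the ceph CLI
ceph --version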

My Ceph cluster has 400 OSDs and 5 mons:
ceph -s
health HEALTH_OK
monmap e11: 5 mons at {BJ-M1-Cloud71=172.28.2.71:6789/0,BJ-M1-Cloud73=172.28.2.73:6789/0,BJ-M2-Cloud80=172.28.2.80:6789/0,BJ-M2-Cloud81=172.28.2.81:6789/0,BJ-M3-Cloud85=172.28.2.85:6789/0}, election epoch 198, quorum 0,1,2,3,4 BJ-M1-Cloud71,BJ-M1-Cloud73,BJ-M2-Cloud80,BJ-M2-Cloud81,BJ-M3-Cloud85
osdmap e120157: 400 osds: 400 up, 400 in
pgmap v26161895: 29288 pgs, 6 pools, 20862 GB data, 3014 kobjects
41084 GB used, 323 TB / 363 TB avail
29288 active+clean
client io 52640 kB/s rd, 32419 kB/s wr, 5193 op/s
The following is my Ceph client configuration:
[global]
auth_service_required = cephx
filestore_xattr_use_omap = true
auth_client_required = cephx
auth_cluster_required = cephx
mon_host = 172.29.204.24,172.29.204.48,172.29.204.55,172.29.204.58,172.29.204.73
mon_initial_members = ZR-F5-Cloud24, ZR-F6-Cloud48, ZR-F7-Cloud55, ZR-F8-Cloud58, ZR-F9-Cloud73
fsid = c01c8e28-304e-47a4-b876-cb93acc2e980
mon osd full ratio = .85
mon osd nearfull ratio = .75
public network = 172.29.204.0/24
mon warn on legacy crush tunables = false
[osd]
osd op threads = 12
filestore journal writeahead = true
filestore merge threshold = 40
filestore split multiple = 8
[client]
rbd cache = true
rbd cache writethrough until flush = false
rbd cache size = 67108864
rbd cache max dirty = 50331648
rbd cache target dirty = 33554432
[client.cinder]
admin socket = /var/run/ceph/rbd-$pid.asok
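
The [client.cinder] admin socket above can be used to check what a running librbd client has actually loaded. A minimal sketch, assuming the socket for a running QEMU process exists under /var/run/ceph/ (substitute the real pid in the path):

# list the per-client admin sockets created by librbd
ls /var/run/ceph/rbd-*.asok
# show the effective rbd cache settings for one client
ceph --admin-daemon /var/run/ceph/rbd-<pid>.asok config show | grep rbd_cache
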
My VM has 8 cores and 16 GB of RAM. The fio commands we use are:
fio -ioengine=libaio -bs=4k -direct=1 -thread -rw=randread -size=60G -filename=/dev/vdb -name="EBS" -iodepth=32 -runtime=200
fio -ioengine=libaio -bs=4k -direct=1 -thread -rw=randwrite -size=60G -filename=/dev/vdb -name="EBS" -iodepth=32 -runtime=200
fio -ioengine=libaio -bs=4k -direct=1 -thread -rw=read -size=60G -filename=/dev/vdb -name="EBS" -iodepth=32 -runtime=200
fio -ioengine=libaio -bs=4k -direct=1 -thread -rw=write -size=60G -filename=/dev/vdb -name="EBS" -iodepth=32 -runtime=200
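
The four workloads above differ only in the -rw mode, so they can also be driven from one loop (a sketch, assuming fio is installed in the guest and /dev/vdb is a scratch volume that may be overwritten):

for rw in read write randread randwrite; do
    fio -ioengine=libaio -bs=4k -direct=1 -thread -rw=$rw \
        -size=60G -filename=/dev/vdb -name="EBS" -iodepth=32 -runtime=200
done
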
The fio results with each client version:

workload     client 0.80.7              client 0.80.8
read         bw=430MB                   bw=115MB
write        bw=420MB                   bw=480MB
randread     iops=4875, latency=65ms    iops=381, latency=83ms
randwrite    iops=6844, latency=46ms    iops=4843, latency=68ms

More background is available at: http://www.spinics.net/lists/ceph-users/msg15595.html

Actions #1

Updated by Jason Dillaman about 9 years ago

Actions #2

Updated by Jason Dillaman about 9 years ago

  • Status changed from New to Pending Backport

The backport will be included in the forthcoming v0.80.9 release.

Actions #3

Updated by Josh Durgin about 9 years ago

  • Status changed from Pending Backport to Resolved