Bug #41325 (closed)

Performance degradation after Ceph installation and especially after OSD restarts

Added by kobi ginon over 4 years ago. Updated over 4 years ago.

Status:
Rejected
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
Community (dev)
Tags:
luminous bluestore
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Hi
We are experiencing a significant decline in storage performance some time after a fresh installation of Ceph Luminous (BlueStore) version 12.2.8 with OpenStack.
Worse, performance drops much further after restarting all OSDs.
The test runs fio on Ceph volumes attached to VMs.
Here are some results to illustrate the performance decline:
Setup: 6 compute nodes, 3 storage nodes.
6 VMs on each compute node (6 * 8 = 48 VMs).
Fio test: random I/O, 80% write / 20% read, block size 64k.
Results before restarting OSDs:

                  First run   Run 2    Run 3   Run 4    Run 5    Run 6
Write BW (MB/s)   3927        3712.2   1978    1763     1763     1872
Write IOPS        59955       56637    30199   26908    26908    28559
Read BW (MB/s)    990         9356     498     434.39   434.39   461
Read IOPS         15087       14259    7577    6770     6770     7174
From a certain run onward (run 3 in this example), the degradation is visible.
(From that point the results stay at the degraded level until the OSDs are restarted.)
After restarting the OSDs, performance is about 30% worse still.
Results after restarting OSDs:

                  After restart   Run 2 after restart
Write BW (MB/s)   1057            1066
Write IOPS        16176           16176
Read BW (MB/s)    262             1066.6
Read IOPS         4071            16279
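For reference, on a systemd-based Luminous deployment the OSDs would typically be restarted like this (the exact restart method used here is an assumption):

# Restart every OSD daemon on this host (systemd-managed Ceph).
sudo systemctl restart ceph-osd.target
# Or restart a single OSD by id, e.g. osd.0:
sudo systemctl restart ceph-osd@0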

- We also experience this degradation after deleting and recreating the VMs and volumes.
- rados bench tests also reflect this behavior (a sketch of such a run follows below).
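A minimal sketch of the kind of rados bench run meant above (the pool name "rbd" and the 900-second duration are assumptions, not taken from the report):

# Hypothetical pool name "rbd"; 900-second write phase, keeping the objects for the read phase:
rados bench -p rbd 900 write --no-cleanup
# Random-read phase against the objects written above:
rados bench -p rbd 900 rand
# Remove the benchmark objects afterwards:
rados -p rbd cleanup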
For reference, the fio job file:
[global]
ioengine=libaio            # asynchronous I/O engine
numjobs=1
cpus_allowed=0-3
cpus_allowed_policy=split
iodepth=4
name=randreadwrite
rw=randrw                  # mixed random reads and writes
rwmixread=20               # 20% of I/O is reads
rwmixwrite=80              # 80% of I/O is writes
loops=120
runtime=900                # run is capped at 900 seconds
ramp_time=5
size=40G
bs=64k                     # 64 KiB block size
direct=1                   # O_DIRECT, bypass the guest page cache
group_reporting

[job_1]
filename=/dev/vdb          # the attached Ceph volume inside the VM
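The job file would be run inside each VM roughly like this (the file name randreadwrite.fio is an assumption):

# Hypothetical file name for the job file above:
fio randreadwrite.fio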

Thanks much

#1

Updated by Patrick Donnelly over 4 years ago

  • Status changed from New to Rejected

The appropriate forum for seeking help like this is the ceph-users mailing list.
