Bug #10139 (open)

librbd cpu usage 4x higher than krbd

Added by alexandre derumier over 9 years ago. Updated over 9 years ago.

Status: New
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source: other
Tags:
Backport:
Regression:
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

librbd CPU usage is currently quite high, around 4-5x higher than krbd.

(Tested with fio+krbd vs fio+librbd, 4K random reads.)

A perf report is attached to this tracker.

Mailing list discussion about this:
http://article.gmane.org/gmane.comp.file-systems.ceph.devel/22089/match=client+cpu+usage+kbrd+vs+librbd+perf+report

Sage:
----

I'm a bit surprised by some of the items near the top
(bufferlist.clear() callers). I'm sure several of those can be
streamlined to avoid temporary bufferlists. I don't see any super
egregious users of the allocator, though.

The memcpy callers might be a good place to start...

sage
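
As a rough illustration of the pattern Sage describes (not actual Ceph code; a generic C++ sketch with a stand-in Buffer type), the streamlining amounts to appending directly into the destination instead of staging data in a temporary that then gets copied and cleared:

#include <string>
#include <vector>

// Stand-in for the real ceph::bufferlist; only used to show the call pattern.
using Buffer = std::vector<char>;

// Pattern showing up in the profile: build the payload in a temporary,
// copy it into the destination, then clear() the temporary.
void fill_via_temporary(Buffer& dst, const std::string& payload) {
  Buffer tmp(payload.begin(), payload.end());    // extra allocation
  dst.insert(dst.end(), tmp.begin(), tmp.end()); // extra memcpy
  tmp.clear();                                   // the clear() seen in perf
}

// Streamlined version: append straight into the destination, so the
// temporary buffer (and its copy) disappears.
void fill_direct(Buffer& dst, const std::string& payload) {
  dst.insert(dst.end(), payload.begin(), payload.end());
}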

Mark
----
Wasn't Josh looking into some of this a year ago? Did anything ever
come of that work?

Haomai Wang
-----------
Hmm, I think buffer alloc/dealloc is a good perf topic to discuss. For
example, frequently allocated objects could use a memory pool (each
pool storing objects of the same type), but the biggest challenge with
this is also the STL structures.
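
A minimal sketch of the kind of memory pool Haomai suggests (generic C++, not tied to any existing Ceph type; ObjectPool and its methods are hypothetical names): released objects are kept on a free list and their storage is reused, so steady-state allocation of a frequently created type stops going through the general-purpose allocator.

#include <memory>
#include <new>
#include <utility>
#include <vector>

// Fixed-type pool: keeps raw, correctly aligned slots alive and recycles
// them instead of returning the memory to the allocator.
template <typename T>
class ObjectPool {
 public:
  template <typename... Args>
  T* allocate(Args&&... args) {
    void* slot;
    if (!free_slots_.empty()) {
      slot = free_slots_.back();    // reuse a previously released slot
      free_slots_.pop_back();
    } else {
      storage_.push_back(std::make_unique<Slot>());
      slot = storage_.back()->raw;
    }
    return new (slot) T(std::forward<Args>(args)...);  // construct in place
  }

  void release(T* obj) {
    obj->~T();                      // run the destructor, keep the memory
    free_slots_.push_back(obj);
  }

 private:
  struct Slot { alignas(T) unsigned char raw[sizeof(T)]; };
  std::vector<std::unique_ptr<Slot>> storage_;  // owns the slots
  std::vector<void*> free_slots_;               // slots ready for reuse
};

As Haomai notes, the hard part is that STL containers inside those objects still allocate on their own, so a pool like this only removes the outer allocation.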


Files

report.txt (656 KB), alexandre derumier, 11/19/2014 05:50 AM
perf.report (1.14 MB), Jason Dillaman, 01/07/2015 07:39 AM