Bug #7385: Objectcacher setting max object counts too low

Added by Mark Nelson about 10 years ago. Updated almost 9 years ago.

Status: Resolved
Priority: High
Assignee: Jason Dillaman
Target version: -
% Done: 0%
Source: Development
Tags: -
Backport: hammer, firefly
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

It appears that the objectcacher sets its maximum object count based on the max dirty data size and the object size. With the default RBD block size and RBD cache values, this worked out to max objects = 42 according to the objectcacher logs. In performance testing on our new test hardware, this caused severe RBD cache thrashing under high amounts of concurrent IO (8 threads with io_depth = 16 each). With lower concurrency, journal writes on the OSDs averaged around 400-450K in size (with the default 512 max_sectors_kb value). With higher concurrency, the average journal write size dropped to around 12-16K and performance was significantly lower.
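To illustrate the mismatch, here is a minimal arithmetic sketch, not the actual objectcacher code: the 24 MiB dirty limit and 4 MiB object size below are assumed defaults chosen for illustration (the reporter's logs showed a computed limit of 42 objects), and the point is only that a cap derived from dirty bytes divided by object size is far smaller than the number of objects touched by 8 threads at io_depth 16.

    // Illustrative sketch only -- not the ObjectCacher implementation.
    #include <cstdint>
    #include <iostream>

    int main() {
      // Assumed values for illustration; the reporter's logs showed 42.
      const std::uint64_t max_dirty_bytes = 24ULL << 20;  // assumed 24 MiB dirty limit
      const std::uint64_t object_size     = 4ULL << 20;   // default 4 MiB RBD object

      // A cap computed purely from dirty bytes / object size.
      const std::uint64_t max_objects = max_dirty_bytes / object_size;

      // Outstanding IO in the test described above: 8 threads, io_depth 16.
      const std::uint64_t outstanding_ios = 8 * 16;        // 128 in-flight requests

      std::cout << "max cached objects: " << max_objects << "\n"
                << "concurrent IOs:     " << outstanding_ios << "\n";
      // With 128 requests in flight touching more distinct objects than the
      // cap allows, the cache must continually evict and re-dirty objects
      // (thrashing), which is consistent with journal writes shrinking from
      // ~400-450K to ~12-16K at high queue depth.
      return 0;
    }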

This may also explain the odd graphs that we saw in our cuttlefish testing with multiple volumes on a QEMU/KVM guest as the io depth increased:

http://ceph.com/wp-content/uploads/2014/07/cuttlefish-rbd_btrfs-write-0004K.png

This can likely be worked around in the current code by increasing the RBD cache dirty limits, but the proper fix may be to change how object limits are set in the objectcacher.
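As a sketch of that workaround, raising the RBD cache and dirty limits in ceph.conf increases the derived object count; the option names below are standard RBD cache settings, but the values are illustrative assumptions, not recommendations:

    [client]
    rbd cache = true
    rbd cache size = 134217728          # 128 MiB total cache (illustrative)
    rbd cache max dirty = 100663296     # 96 MiB dirty limit (illustrative)
    rbd cache target dirty = 67108864   # 64 MiB flush target (illustrative)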


Related issues (2): 0 open, 2 closed

Copied to rbd - Backport #11730: Objectcacher setting max object counts too low (Resolved, Loïc Dachary, 02/10/2014)
Copied to rbd - Backport #11731: Objectcacher setting max object counts too low (Resolved, Nathan Cutler, 02/10/2014)
