Bug #9391 (closed): fio rbd driver rewrites same blocks

Added by Sage Weil over 9 years ago. Updated over 7 years ago.

Status: Won't Fix
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source: Development
Tags: -
Backport: -
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -
#1 Updated by Danny Al-Gaaf over 9 years ago

Could you provide your fio job file / config to verify the issue?

#2 Updated by Josh Durgin over 9 years ago

  • Status changed from New to Need More Info

#3 Updated by Mark Nelson over 9 years ago

Hi Guys,

This is all on the fio side. From what I remember, when you are doing sequential writes and specify multiple "--name" parameters or numjobs > 1 with the librbd engine in fio, each process starts writing at the beginning of the RBD volume. This has various effects, including what looks like a disparity between OSD journal writes and backend data store writes (since an early write to a block can be thrown away when a newer write to the same block arrives). In reality it's not a problem (unless you are benchmarking journal devices!), but it caught us off-guard when we first saw it.
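
For illustration, a minimal job file of the kind that triggers this might look like the sketch below (assumes fio built with rbd support; the clientname/pool/rbdname values are placeholders, not taken from this report). Each job section is an independent sequential writer, so both start at offset 0 of the same image and rewrite each other's blocks:

    # Hypothetical example: two sequential writers against one RBD image.
    # clientname/pool/rbdname are placeholder values.
    [global]
    ioengine=rbd
    clientname=admin
    pool=rbd
    rbdname=fio_test
    rw=write
    bs=4m

    # Each section below is a separate sequential writer starting at offset 0;
    # numjobs=2 on a single section behaves the same way.
    [writer1]
    [writer2]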

Normally in fio, if you are targeting files or even block devices with --name using sync or libaio, you'll write to different offsets simply because the file system allocates different extents (or because you are targeting different devices). With the librbd engine you don't get this, but one way around it is to use a single --name/numjobs parameter with a higher iodepth. This may not behave exactly the same, though (Danny, any thoughts on how much this matters as far as fio goes? Will having fewer fio processes with a higher iodepth change the behaviour?).
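
As a sketch of that workaround (same placeholder names as above), a single job with a deeper queue keeps one sequential stream across the image instead of several overlapping ones:

    # Hypothetical example: one sequential writer with more I/O in flight.
    [global]
    ioengine=rbd
    clientname=admin
    pool=rbd
    rbdname=fio_test
    rw=write
    bs=4m

    [single-writer]
    iodepth=32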

If the writes are the same size as the RBD objects, we can kind of cheat and use random writes to simulate sequential writes (the objects are randomly distributed anyway!). In any event, none of this will actually affect users doing real work, just folks who use the librbd engine with fio and increase numjobs or specify multiple --name parameters.
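
A sketch of that trick, assuming the default 4 MiB RBD object size: with bs equal to the object size, every write covers exactly one object (fio aligns random offsets to the block size by default), so a random pattern touches the same objects a sequential pass would, just in a different order:

    # Hypothetical example: object-sized random writes standing in for
    # sequential writes (4m matches the default RBD object size).
    [global]
    ioengine=rbd
    clientname=admin
    pool=rbd
    rbdname=fio_test
    rw=randwrite
    bs=4m
    iodepth=32

    [object-sized-randwrite]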

#4 Updated by Danny Al-Gaaf over 9 years ago

@Mark: I'll have to take a look at fio for that. Is this only about sequential writes? Do you see different behavior between multiple jobs and a single job with a higher iodepth?

#5 Updated by Josh Durgin over 8 years ago

  • Priority changed from High to Normal
  • Regression set to No

#6 Updated by Jason Dillaman over 7 years ago

  • Status changed from Need More Info to Won't Fix

With exclusive-lock enabled by default, fio jobs either need to use a higher queue depth or use multiple, independent images. Therefore, I don't believe this is an issue worth addressing -- especially since any changes would be to fio.
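
For completeness, a sketch of the multiple-independent-images variant (image, pool, and client names are placeholders; the images would need to exist beforehand, e.g. created with rbd create):

    # Hypothetical example: each job writes to its own image, so the
    # sequential streams no longer overlap.
    [global]
    ioengine=rbd
    clientname=admin
    pool=rbd
    rw=write
    bs=4m
    iodepth=16

    [image-1]
    rbdname=fio_test_1

    [image-2]
    rbdname=fio_test_2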
