Bug #3286: librbd, kvm, async io hang
Status: Closed
% Done: 0%
Source: Development
Description
fio hangs in a Linux 2.6.32 VM backed by librbd when using direct=1 and the libaio engine, with ceph at c8721b956.
Libvirt configured with direct rbd access like:
<devices>
  <emulator>/usr/bin/kvm</emulator>
  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <auth username='test1'>
      <secret type='ceph' uuid='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'/>
    </auth>
    <source protocol='rbd' name='rbd/test1:rbd_cache=1:rbd_cache_size=67108864'>
      <host name='10.200.1.1' port='6789'/>
    </source>
    <target dev='vda' bus='virtio'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
  </disk>
</devices>
w/ an image of Debian Squeeze, kernel 2.6.32-5-amd64.
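The libvirt source name above packs ceph client options after the pool/image pair, separated by colons. As a rough sketch of how that string decomposes (a hypothetical helper for illustration, not qemu's or libvirt's actual parser, which also handles escaping):

```python
def parse_rbd_source(name):
    """Split a libvirt rbd <source name=...> string into pool, image,
    and a dict of ceph options. Illustrative only: real parsers also
    handle escaped separators, which this sketch ignores."""
    image_part, _, opts = name.partition(":")
    pool, _, image = image_part.partition("/")
    options = dict(kv.split("=", 1) for kv in opts.split(":")) if opts else {}
    return pool, image, options

pool, image, opts = parse_rbd_source(
    "rbd/test1:rbd_cache=1:rbd_cache_size=67108864")
# pool/image "rbd"/"test1", cache enabled, cache size 67108864 bytes (64 MiB)
```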
Using fio config:
# egrep -v '^(#|$)' /usr/share/doc/fio/examples/iometer-file-access-server
[global]
description=Emulation of Intel IOmeter File Server Access Pattern
[iometer]
bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10
rw=randrw
rwmixread=80
direct=1
size=4g
ioengine=libaio
iodepth=64
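The bssplit line weights the request sizes the job issues (e.g. 60% of requests at 4k). A small sketch of fio's size/percent syntax, to make the workload mix explicit (hypothetical helper, not fio code):

```python
def parse_bssplit(spec):
    """Parse fio's bssplit 'size/percent:size/percent:...' syntax
    into (bytes, percent) pairs. Only k/m suffixes handled here."""
    units = {"k": 1024, "m": 1024 ** 2}
    out = []
    for part in spec.split(":"):
        size, pct = part.split("/")
        mult = units.get(size[-1].lower(), 1)
        num = size[:-1] if size[-1].lower() in units else size
        out.append((int(num) * mult, int(pct)))
    return out

split = parse_bssplit("512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10")
# percentages sum to 100; the dominant request size is 4096 bytes at 60%
```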
...fio hangs consistently (tested 3 times, with reboots in between):
# fio /usr/share/doc/fio/examples/iometer-file-access-server
Jobs: 1 (f=1): [m] [6.6% done] [0K/0K /s] [0/0 iops] [eta 08m:05s]   ### hung here
...whereas this completes successfully:
fio <(sed -e's/direct=1/direct=0/' -e's/ioengine=libaio/ioengine=sync/' /usr/share/doc/fio/examples/iometer-file-access-server)
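The sed one-liner above only rewrites two keys in the job file, switching from direct, asynchronous I/O (the hanging case) to buffered, synchronous I/O. The same transformation, for reference (the config string here is abbreviated to the two affected keys plus iodepth):

```python
# Equivalent of the sed substitutions: flip direct=1 -> direct=0 and
# ioengine=libaio -> ioengine=sync, leaving everything else untouched.
cfg = """direct=1
ioengine=libaio
iodepth=64
"""
buffered = (cfg.replace("direct=1", "direct=0")
               .replace("ioengine=libaio", "ioengine=sync"))
```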