Bug #9434
rbd rm hangs (Status: Closed)
Description
I'm using Ceph 0.71 (which may be a little old).
I've been doing some performance measurements on Ceph recently, but I ran into some problems:
- fio runs using ioengine=rbd sometimes became blocked. Since I always run hundreds of fio processes for each measurement, there were always a few blocked processes, so the whole run could never finish:
read-64k-1: (g=0): rw=read, bs=64K-64K/64K-64K/64K-64K, ioengine=rbd, iodepth=32
fio-2.1.10
Starting 1 process
rbd engine: RBD version: 0.1.8
Jobs: 1 (f=0): [I(1)] [0.0% done] [0KB/0KB/0KB /s] [0/0/0 iops] [eta 03m:20s]
- I tried to use rbd rm to remove the images, but it blocked too:
-sh-4.1$ for i in `sudo rbd ls`; do sudo rbd rm $i; done
Removing image: 100% complete...
Removing image: 100% complete...
Removing image: 6% complete...
But if I used the Python API, this didn't happen.
Is this an rbd bug? Has it been fixed in a newer version (e.g. 0.86)?
If so, I will consider upgrading the cluster.
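For reference, the Python-side removal was roughly the following (a sketch using the python-rados/python-rbd bindings; the pool name 'rbd' and the config file path are assumptions, not taken from my actual setup):

```python
def remove_all_images(pool='rbd', conffile='/etc/ceph/ceph.conf'):
    """Remove every RBD image in a pool, mirroring the shell loop above."""
    import rados
    import rbd

    cluster = rados.Rados(conffile=conffile)
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx(pool)
        try:
            inst = rbd.RBD()
            for name in inst.list(ioctx):   # list image names in the pool
                inst.remove(ioctx, name)    # delete each image
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
```

Calling remove_all_images() against the cluster removed all the images without hanging.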
Thanks!