Bug #3385
Status: Closed
krbd: running simple fsstress produces corrupt XFS file system
Description
This does not occur with the current ceph-client/master branch:
35152979 rbd: activate v2 image support
However, when a fix is applied for this:
http://tracker.newdream.net/issues/2852
this problem arises. Initially, Guangliang Zhao provided a
few iterations of fixes. I rejected the last one or two because
running xfstests over rbd failed test 013.
Now I've written my own fix for bug 2852 and my test fails
as well, so I'm a bit concerned that there is another problem
that gets unmasked once bug 2852 is fixed.
I have reproduced the problem and narrowed it down to running
the following (I'm still trying to reduce the number of
operations, but this is pretty good):
rbd create image1 --size=1000
rbd map image1 # assume we get /dev/rbd1
mkfs.xfs /dev/rbd1
mkdir -p /m
mount /dev/rbd1 /m
mkdir -p /m/out
fsstress -r -s 1351278607 -v -m 8 -n 5 -d /m/out
umount /dev/rbd1
xfs_repair -n /dev/rbd1
This performs 5 pseudo-random operations (reproducible with the
given seed), and xfs_repair then reports errors.
Here are the five operations:
0/0: rmdir - no directory
0/1: symlink l0XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0
0/1: symlink add id=0,parent=-1
0/2: mknod c1XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0
0/2: mknod add id=1,parent=-1
0/3: fiemap - no filename
0/4: link l0XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX l2XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0
0/4: link add id=2,parent=-1
I'm going to try to recreate a handful of commands that produce
the same result to see if I can remove fsstress from the picture.
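As a starting point for taking fsstress out of the picture, the three effective operations above (ops 0/0 and 0/3 were no-ops: rmdir with no directory, fiemap with no filename) can be replayed directly with a few syscalls. A minimal Python sketch, with some assumptions: the short names L0, C1, L2 are hypothetical stand-ins for the long generated names, the symlink target is arbitrary, and unprivileged mknod creates a regular file rather than a device node. To actually reproduce the corruption, the directory would need to be on the mounted rbd-backed XFS file system.

```python
import os
import tempfile

# Hypothetical short stand-ins for the fsstress-generated names;
# the originals are ~90-character runs of 'X'.
L0 = "l0" + "X" * 20  # symlink created by op 0/1
C1 = "c1" + "X" * 20  # node created by op 0/2
L2 = "l2" + "X" * 20  # hard link created by op 0/4

def replay(workdir):
    """Replay the three effective fsstress operations in workdir."""
    # 0/1: symlink -- fsstress points it at an arbitrary name;
    # the target need not exist.
    os.symlink("no-such-target", os.path.join(workdir, L0))
    # 0/2: mknod -- run unprivileged, a zero file type means a
    # regular file; fsstress may create a device node as root.
    os.mknod(os.path.join(workdir, C1), 0o600)
    # 0/4: link -- fsstress calls link(2), which does not follow
    # symlinks, so this hard-links the symlink inode itself.
    os.link(os.path.join(workdir, L0), os.path.join(workdir, L2),
            follow_symlinks=False)

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        replay(d)
        # The symlink inode now has two names.
        print(os.lstat(os.path.join(d, L0)).st_nlink)  # prints 2
```

Running this against /m/out (instead of the temporary directory), then unmounting and running xfs_repair -n, should show whether these three operations alone are enough to trigger the corruption.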