Bug #13081
Data on an rbd image gets corrupted when the pool quota is smaller than the size of the rbd image
Status:
Closed
% Done:
0%
Source:
other
Regression:
No
Severity:
3 - minor
Description
1. The Linux kernel is 4.1.1:
[root@c8 ceph]# uname -r
4.1.1
2. The Ceph version of the cluster is 0.94.1:
[root@ceph1 ~]# ceph -v
ceph version 0.94.1 (e4bfad3a3c51054df7e537a724c8d0bf9be972ff)
3. Create a pool p500m with a quota of 500 MB:
[root@c8 ceph]# ceph osd pool create p500m 256
[root@c8 ceph]# ceph osd pool set-quota p500m max_bytes 500000000
4. Create an image i1000m with a size of 1000 MB:
[root@c8 ceph]# rbd create --image-format 2 --size 1000 p500m/i1000m
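For reference, the mismatch between the two sizes can be checked with simple byte arithmetic (a sketch of the numbers only, not of Ceph behavior; `rbd create --size` takes megabytes):

```python
quota_bytes = 500_000_000          # ceph osd pool set-quota p500m max_bytes 500000000
image_bytes = 1000 * 1024 * 1024   # rbd create --size 1000 (size given in MB)

# The image is roughly twice the pool quota, so it can never be filled.
print(image_bytes - quota_bytes)   # → 548576000 bytes over quota
```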
5. Map the image, make a filesystem, and mount it:
[root@c8 ceph]# rbd map p500m/i1000m
[root@c8 ceph]# mkfs -t ext4 /dev/rbd2
[root@c8 ceph]# mkdir -p /p500m/i1000m
[root@c8 ceph]# mount /dev/rbd2 /p500m/i1000m/
6. 800m.dat is the test data:
[root@c8 ceph]# ll -h 800m.dat
-rw-r--r--. 1 root root 800M Sep 15 04:55 800m.dat
[root@c8 ceph]# md5sum 800m.dat
4d6c96c9be3426e12caec272da25aba1  800m.dat
7. Copy the test data to the image:
[root@c8 ceph]# cp 800m.dat /p500m/i1000m/
8. Remount:
[root@c8 ceph]# umount /dev/rbd2
[root@c8 ceph]# mount /dev/rbd2 /p500m/i1000m/
9. The data is corrupted; the checksum no longer matches the one from step 6:
[root@c8 ceph]# md5sum /p500m/i1000m/800m.dat
f87b61a45cd3451e512965f2ec06a226  /p500m/i1000m/800m.dat
Note that if no quota is set on the pool, everything works as expected.
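One way to make such a failure visible at copy time, rather than only after a remount, is to force writeback with an explicit fsync, so a failed flush returns an error instead of silently leaving stale data behind the page cache. A minimal sketch (paths under /tmp are hypothetical; on a healthy filesystem it simply succeeds, while on the quota-full image above the fsync step is where an error would be expected to surface, depending on kernel and Ceph version):

```shell
# Create a small test file (hypothetical demo paths under /tmp).
dd if=/dev/urandom of=/tmp/quota_demo_src.dat bs=1M count=4 status=none

# conv=fsync forces the copied data to stable storage before dd exits,
# so a writeback failure (e.g. against a quota-full pool) is reported here.
dd if=/tmp/quota_demo_src.dat of=/tmp/quota_demo_dst.dat bs=1M conv=fsync status=none

# Verify the copy: both checksums should match.
md5sum /tmp/quota_demo_src.dat /tmp/quota_demo_dst.dat
```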