Bug #2535 (closed): rbd: random data corruption in vm

Added by Sage Weil almost 12 years ago. Updated almost 12 years ago.

Status: Resolved
Priority: High
Assignee: -
Target version: -
% Done: 0%
Source: Community (user)

Description

From ML:

Date: Thu, 07 Jun 2012 20:04:09 +0200
From: Guido Winkelmann <guido-ceph@thisisnotatest.de>
To: "ceph-devel@vger.kernel.org" <ceph-devel@vger.kernel.org>
Subject: Random data corruption in VM, possibly caused by rbd
----------------------------------------

Hi,

I'm using Ceph with RBD to provide network-transparent disk images for KVM-
based virtual servers. For the last two days, I've been hunting an elusive
bug where data in the virtual machines gets corrupted in strange ways. It
usually manifests as files having some random data - usually zeroes - at the
start, before the actual contents begin.

To track this down, I wrote a simple io tester. It does the following:

- Create 1 Megabyte of random data
- Calculate the SHA256 hash of that data
- Write the data to a file on the hard disk, in a given directory, using the 
hash as the filename
- Repeat until the disk is full
- Delete the last file (because it is very likely to be incompletely written)
- Read and delete all the files just written while checking that their sha256 
sums are equal to their filenames

When running this io tester in a VM that uses a qcow2 file on a local hard disk 
for its virtual disk, no errors are found. When the same VM is run using 
rbd, the io tester finds on average about one corruption every 200 Megabytes, 
reproducibly.

(As an interesting aside, the io tester also prints how long it took to 
read or write 100 MB, and it turns out that reading the data back in is about 
three times slower than writing it in the first place...)

Ceph is version 0.47.2. Qemu KVM is 1.0, compiled with the spec file from 
http://pkgs.fedoraproject.org/gitweb/?p=qemu.git;a=summary
(And compiled after ceph 0.47.2 was installed on that machine, so it would use 
the correct headers...)
Both the Ceph cluster and the KVM host machines are running on Fedora 16, with 
a fairly recent 3.3.x kernel.
The ceph cluster uses btrfs for the OSDs' data dirs. The journal is on a tmpfs. 
(This is not a production setup - luckily.)
The virtual machine is using ext4 as its filesystem.
There were no other obvious problems with either the ceph cluster or the KVM 
host machines.

I have attached a copy of the ceph.conf in use, in case it might be helpful.

This is a huge problem, and any help in tracking it down would be much 
appreciated.

Regards,

----------------------------------------

Happens with rbd cache off, but not with it on.

Oliver sees this too when he turns off the cache.
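
For reference, a minimal ceph.conf fragment showing where the RBD client-side 
cache is toggled (a sketch only; check the option names against the Ceph 
version actually in use):

    [client]
        # librbd client-side cache; the corruption above was reported
        # with this turned off
        rbd cache = true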

http://ceph.com/qa/iotester.tgz
(precise compiled version at http://ceph.com/qa/iotester)
(precise compiled fiemap at http://ceph.com/qa/fiemap)
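
A minimal sketch of the fill/verify loop the report describes, written here in 
Python (an illustration only, not the actual iotester linked above; the 1 MB 
chunk size and SHA-256 naming follow the steps in the report, the function and 
error handling are assumptions):

    import hashlib, os, sys

    def fill_and_verify(directory, chunk=1024 * 1024):
        written = []
        # Phase 1: write 1 MB chunks of random data, each named after its
        # SHA-256 hex digest, until the disk fills up (ENOSPC).
        try:
            while True:
                data = os.urandom(chunk)
                path = os.path.join(directory, hashlib.sha256(data).hexdigest())
                written.append(path)
                with open(path, "wb") as f:
                    f.write(data)
                    f.flush()
                    os.fsync(f.fileno())
        except OSError:
            pass
        # The last file is very likely incomplete, so drop it.
        if written:
            try:
                os.unlink(written.pop())
            except OSError:
                pass
        # Phase 2: read each file back, check its content against its name,
        # then delete it.
        corrupt = 0
        for path in written:
            with open(path, "rb") as f:
                data = f.read()
            if hashlib.sha256(data).hexdigest() != os.path.basename(path):
                print("corruption in", path)
                corrupt += 1
            os.unlink(path)
        return corrupt

    if __name__ == "__main__":
        sys.exit(1 if fill_and_verify(sys.argv[1]) else 0)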

I was unable to reproduce this with:
- precise 3.2.0-23 ceph, btrfs w/ 1 replica, or ext4 w/ 2 replicas
- precise 3.2.0-23 host
- precise 3.2.0-23 8GB guest, ext4
(but it was my laptop, ssd, all one machine, so timing may be different)


Files

rb.0.13.00000000045a.block.bz2 (421 KB) - Guido Winkelmann, 06/12/2012 05:32 AM
osd.2.log.bz2 (16.4 MB) - Guido Winkelmann, 06/12/2012 05:32 AM