Bug #4388 (closed): rbd import broken

Added by Corin Langosch about 11 years ago. Updated about 11 years ago.

Status: Resolved
Priority: Urgent
Assignee:
Target version: -
% Done: 0%
Source: Community (user)
Tags:
Backport: bobtail
Regression:
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

I tried to import a VM image (10 GB) into a bobtail 0.56.3 (6eb7e15a4783b122e9b0c85ea9ba064145958aa5) cluster. However, when booting the VM, many filesystem errors were reported and the VM didn't start.

I then stopped the VM, deleted the rbd image, and imported it again. I exported it once more and compared the md5sums: they were different. I repeated this process several times, trying format 1 and format 2 and even different pools (hdd and ssd). When I supplied the same options to the import, it always produced the same, wrong md5sum, so it doesn't look like a hardware problem; otherwise I would expect different md5sums for the same import options. One interesting point is that using the same import options on a different host leads to a different md5sum. This was fully reproducible.
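The import/export/md5 check I repeated by hand can be scripted roughly like this (just a minimal sketch, not what I actually ran; the pool/image names are the ones from the commands below):

import hashlib
import subprocess

SRC = "vm.img"
IMAGE = "clusterx-hdd/e0df798d-969e-405e-bd93-ba7da1353df9"
EXPORT = "bb.img"

def md5(path, chunk=4 * 1024 * 1024):
    # hash the file in 4MB pieces so a 10 GB image never has to fit in RAM
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

for attempt in range(3):
    # import the source, export it back out, and compare the checksums
    subprocess.check_call(["rbd", "import", "--format", "2", SRC, IMAGE])
    subprocess.check_call(["rbd", "export", IMAGE, EXPORT])
    print("run %d: source %s export %s" % (attempt, md5(SRC), md5(EXPORT)))
    subprocess.check_call(["rbd", "rm", IMAGE])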

I finally ended up writing my own little script which uses librbd and imports the image in 4MB chunks. This worked fine: the test exports returned the correct md5 and the VM boots :)
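The script does roughly the following (a minimal sketch using librbd's Python bindings, not my exact code; the conffile path and the pool/image names below are assumptions):

import os
import rados
import rbd

CHUNK = 4 * 1024 * 1024          # write in 4MB chunks
SRC = "vm.img"
POOL = "clusterx-hdd"
NAME = "e0df798d-969e-405e-bd93-ba7da1353df9"

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx(POOL)
    try:
        # create a format 2 image of the same size as the source file
        rbd.RBD().create(ioctx, NAME, os.path.getsize(SRC), old_format=False)
        image = rbd.Image(ioctx, NAME)
        try:
            with open(SRC, "rb") as f:
                offset = 0
                while True:
                    data = f.read(CHUNK)
                    if not data:
                        break
                    image.write(data, offset)
                    offset += len(data)
        finally:
            image.close()
    finally:
        ioctx.close()
finally:
    cluster.shutdown()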

But there must be some really serious bug in rbd import.

Here's what I did:

md5sum vm.img
a7851dd0b22cb829833e40237d64af3f vm.img

rbd import --format 2 vm.img clusterx-hdd/e0df798d-969e-405e-bd93-ba7da1353df9
Importing image: 100% complete...done.

rbd export clusterx-hdd/e0df798d-969e-405e-bd93-ba7da1353df9 bb.img
Exporting image: 100% complete...done.

md5sum bb.img
411a701ffeb880b3268e74368da3488c bb.img

rbd rm clusterx-hdd/e0df798d-969e-405e-bd93-ba7da1353df9
Removing image: 100% complete...done.

rbd import --format 2 vm.img clusterx-hdd/e0df798d-969e-405e-bd93-ba7da1353df9
Importing image: 100% complete...done.

rbd export clusterx-hdd/e0df798d-969e-405e-bd93-ba7da1353df9 bb.img
Exporting image: 100% complete...done.

md5sum bb.img
411a701ffeb880b3268e74368da3488c bb.img

rbd rm clusterx-hdd/e0df798d-969e-405e-bd93-ba7da1353df9
Removing image: 100% complete...done.

rbd import --format 1 vm.img clusterx-hdd/e0df798d-969e-405e-bd93-ba7da1353df9
Importing image: 100% complete...done.

rbd export clusterx-hdd/e0df798d-969e-405e-bd93-ba7da1353df9 cc.img
Exporting image: 100% complete...done.

md5sum cc.img
411a701ffeb880b3268e74368da3488c cc.img

rbd rm clusterx-ssd/e0df798d-969e-405e-bd93-ba7da1353df9
Removing image: 100% complete...done.

rbd import --format 2 vm.img clusterx-ssd/e0df798d-969e-405e-bd93-ba7da1353df9
Importing image: 100% complete...done.

rbd export clusterx-ssd/e0df798d-969e-405e-bd93-ba7da1353df9 ssd2.img
Exporting image: 100% complete...done.

md5sum ssd2.img
411a701ffeb880b3268e74368da3488c ssd2.img


On another host, running the exact same configuration and version of Ceph:

md5sum vm.img
a7851dd0b22cb829833e40237d64af3f vm.img

rbd import --format 2 vm.img clusterx-hdd/e0df798d-969e-405e-bd93-ba7da1353df9
Importing image: 100% complete...done.

rbd export clusterx-hdd/e0df798d-969e-405e-bd93-ba7da1353df9 bb.img
Exporting image: 100% complete...done.

md5sum bb.img
3ec31398a6f6966a15f3a138250dd641 bb.img

rbd rm clusterx-hdd/e0df798d-969e-405e-bd93-ba7da1353df9
Removing image: 100% complete...done.

rbd import --format 2 vm.img clusterx-hdd/e0df798d-969e-405e-bd93-ba7da1353df9
Importing image: 100% complete...done.

rbd export clusterx-hdd/e0df798d-969e-405e-bd93-ba7da1353df9 cc.img
Exporting image: 100% complete...done.

md5sum cc.img
3ec31398a6f6966a15f3a138250dd641 cc.img

rbd info clusterx-hdd/e0df798d-969e-405e-bd93-ba7da1353df9
rbd image 'e0df798d-969e-405e-bd93-ba7da1353df9':
size 10240 MB in 2560 objects
order 22 (4096 KB objects)
block_name_prefix: rbd_data.33562ae8944a
format: 2
features: layering, striping
stripe unit: 4096 KB
stripe count: 1
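
For comparison, the same metadata can also be read through the librbd Python bindings (again only a sketch, under the same assumptions as the import sketch above):

import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("clusterx-hdd")
image = rbd.Image(ioctx, "e0df798d-969e-405e-bd93-ba7da1353df9")
try:
    info = image.stat()  # size, obj_size, num_objs, order, block_name_prefix, ...
    print("size %d MB in %d objects" % (info["size"] // (1024 * 1024), info["num_objs"]))
    print("order %d (%d KB objects)" % (info["order"], info["obj_size"] // 1024))
    print("block_name_prefix: %s" % info["block_name_prefix"])
    print("format: %d" % (1 if image.old_format() else 2))
finally:
    image.close()
    ioctx.close()
    cluster.shutdown()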
