Bug #1753 (closed)

ceph copies raw images from qemu incorrectly

Added by max mikheev over 12 years ago. Updated over 7 years ago.

Status:
Won't Fix
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Description

Hi,
Ceph does not handle raw images from qemu correctly:

oneadmin@s2-8core:~/OpenNebula/var/images/tmp$ date; qemu-img create -f raw test.raw 3G; date; ls -la; date; cp -v test.raw test2.raw; date
Fri Nov 25 16:49:01 EST 2011
Formatting 'test.raw', fmt=raw size=3221225472
Fri Nov 25 16:49:01 EST 2011
total 3145728
drwxr-xr-x 1 oneadmin cloud 0 2011-11-25 16:49 .
drwxrws--T 1 oneadmin cloud 232742453458 2011-11-25 16:47 ..
-rw-r--r-- 1 oneadmin cloud 3221225472 2011-11-25 16:49 test.raw
Fri Nov 25 16:49:02 EST 2011
`test.raw' -> `test2.raw'
Fri Nov 25 16:52:09 EST 2011

oneadmin@s2-8core:~/OpenNebula/var/images/tmp$ ls -lsh
total 6.0G
3.0G -rw-r--r-- 1 oneadmin cloud 3.0G 2011-11-25 16:52 test2.raw
3.0G -rw-r--r-- 1 oneadmin cloud 3.0G 2011-11-25 16:49 test.raw

As you can see, the empty image has an actual (allocated) size of 3G on ceph. The same operation on ext4:

root@s2-8core:~/tmp# qemu-img create test.raw 3G
Formatting 'test.raw', fmt=raw size=3221225472
root@s2-8core:~/tmp# ls -lsh
total 0
0 -rw-r--r-- 1 root root 3.0G 2011-11-25 17:02 test.raw
The actual size is 0.

oneadmin@s2-8core:~/OpenNebula/var/images/tmp$ ceph --version
ceph version 0.38 (commit:b600ec2ac7c0f2e508720f8e8bb87c3db15509b9)
oneadmin@s2-8core:~/OpenNebula/var/images/tmp$

In version 0.34 this worked correctly: the created image had an actual size of 0.
The expected results are described here: http://balau82.wordpress.com/2011/05/08/qemu-raw-images-real-size/
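
For reference, a quick way to compare apparent and allocated size (a sketch assuming GNU coreutils; not part of the original report):

$ stat -c 'apparent: %s bytes, allocated: %b blocks of %B bytes' test.raw
$ du -h --apparent-size test.raw   # apparent size: 3.0G on both filesystems
$ du -h test.raw                   # allocated size: 0 on ext4, 3.0G on the ceph mount per this report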

Thanks

Actions #1

Updated by Sage Weil over 12 years ago

  • Category set to librbd
  • Target version set to v0.40
Actions #2

Updated by Sage Weil over 12 years ago

  • Position set to 3
Actions #3

Updated by Josh Durgin over 12 years ago

  • Category changed from librbd to 1

This is using the ceph filesystem, not rbd.

Actions #4

Updated by Greg Farnum over 12 years ago

I'm a little confused here. Ceph has never reported only the used space for a file; doing so is prohibitively expensive due to striping. The fact that it doesn't support something like ls -s is a separate issue, but I don't think your method of testing this would ever have worked (quite apart from whether the file is stored sparsely or not). Did you test this same way previously? And can you be sure (by checking the cluster used space) that it's actually creating those blocks, or is it just the reporting that's bad (as I expect)?
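
A hedged sketch of that check (not from the original thread; exact commands and output vary by Ceph version) is to compare cluster-wide usage before and after creating the file:

$ rados df                            # note the cluster-wide "used" total
$ qemu-img create -f raw test.raw 3G
$ rados df                            # if "used" is essentially unchanged, only the reporting is wrong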

Actions #5

Updated by max mikheev over 12 years ago

The file copy took 3 minutes. That is OK for a 3 GB file, but not for a 100 KB file.

Actions #6

Updated by Josh Durgin over 12 years ago

To create the sparse file, qemu-img just calls ftruncate. It does nothing filesystem-specific, so this can be reproduced with dd as well:

ubuntu@sepia2:~/mnt_ceph$ sudo dd if=/dev/zero of=test.raw_dd seek=50M count=0
0+0 records in
0+0 records out
0 bytes (0 B) copied, 1.8718e-05 s, 0.0 kB/s
ubuntu@sepia2:~/mnt_ceph$ ls -ls test.raw_dd
26214400 -rw-r--r-- 1 root root 26843545600 2011-11-30 14:51 test.raw_dd
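
For completeness, a sketch not in the original comment: coreutils truncate(1) makes the same ftruncate(2) call, so the behavior can be reproduced without qemu-img or dd:

$ truncate -s 3G test.raw_trunc   # extends the file without writing any data
$ ls -ls test.raw_trunc           # on a sparse-aware filesystem the allocated size should be 0
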
Actions #7

Updated by Greg Farnum over 12 years ago

  • Status changed from New to Won't Fix

Unfortunately, right now making Ceph report sparse files correctly would be prohibitively expensive. It can be done; it's just not on the roadmap, and we don't have the available effort for it.
However, if your input files are sparse they will be stored as sparse files, so you will at least get the space efficiency with an appropriately configured toolchain.
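
A sketch of what such toolchain settings might look like (assumed, not from the original comment): GNU cp and rsync can both recreate holes when copying.

$ cp --sparse=always test.raw test2.raw   # detect runs of zeros and skip over them instead of writing
$ rsync --sparse test.raw dest/           # rsync's equivalent option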

Actions #8

Updated by Sage Weil over 12 years ago

  • Target version deleted (v0.40)
  • Position deleted (13)
  • Position set to 1
Actions #9

Updated by John Spray over 7 years ago

  • Project changed from Ceph to CephFS
  • Category deleted (1)

Bulk updating project=ceph category=mds bugs so that I can remove the MDS category from the Ceph project to avoid confusion.
