Bug #4757
closed: ceph-disk-prepare will not use all available space with >2TB hard drives
Status:
Resolved
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
Source:
Support
Tags:
Backport:
Regression:
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
When sharing the journal with the OSD data, ceph-disk-prepare will not use all the available disk space with disks >2TB.
This has been reproduced on a 3TB HD with VirtualBox (VBoxManage createhd --filename 3TB.vdi --size 3000000 --format VDI --variant Standard).
# ceph-disk-prepare /dev/sde
# sgdisk -p /dev/sde
Disk /dev/sde: 6144000000 sectors, 2.9 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): C9183EF7-05CA-461C-B47D-BF9257E69596
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 6143999966
Partitions will be aligned on 2048-sector boundaries
Total free space is 1838544895 sectors (876.7 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1        1849032704      6143999966  2.0 TiB    FFFF  ceph data
   2        1838544896      1849032670  5.0 GiB    FFFF  ceph journal
It seems that the journal creation is the issue. The end sector should be 6143999966 instead of 1849032670.
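A quick arithmetic check (my own observation, not stated above) hints at why 2TB is the boundary where this breaks: the bogus end sector is exactly the true last usable sector truncated to 32 bits, which would point at a 32-bit overflow somewhere in the sector arithmetic.

```python
# Sanity check: does the misplaced journal end sector equal the true
# disk end modulo 2^32? Values are taken from the sgdisk output above.
disk_end = 6143999966              # last usable sector of the 3TB disk
observed_journal_end = 1849032670  # end sector of the misplaced journal

# 6143999966 mod 2^32 == 1849032670, consistent with a 32-bit wraparound
# for any sector number past the 2 TiB mark (with 512-byte sectors).
print(disk_end % 2**32 == observed_journal_end)  # True
```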
Looking at the ceph-disk code, this can be reproduced directly with:

# sgdisk --new=2:-1024M:0 /dev/sde
Making the journal start at the beginning of the disk works:
diff --git a/src/ceph-disk b/src/ceph-disk
index 28cba37..4abf9c4 100755
--- a/src/ceph-disk
+++ b/src/ceph-disk
@@ -629,7 +629,7 @@ def prepare_journal_dev(
         # journal at end of free space so partitioning tools don't
         # reorder them suddenly
         num = 2
-        journal_part = '{num}:-{size}M:0'.format(
+        journal_part = '{num}:0:{size}M'.format(
             num=num,
             size=journal_size,
             )
But there is a warning above this code to not do that and my knowledge here is very limited.
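For clarity, here is a small sketch of the two sgdisk partition arguments the code builds before and after the patch (the journal size of 5120 MiB is illustrative, matching the 5 GiB journal in the output above). In sgdisk's `--new=num:start:end` syntax, a leading `-` makes the start relative to the end of the free space, while `0` means "first/last available sector".

```python
# Illustrative values; in ceph-disk, journal_size is the journal size in MiB.
num = 2
journal_size = 5120

# Original code: journal placed at the end of the largest free block.
before = '{num}:-{size}M:0'.format(num=num, size=journal_size)
# Patched code: journal placed at the start of the largest free block.
after = '{num}:0:{size}M'.format(num=num, size=journal_size)

print(before)  # 2:-5120M:0
print(after)   # 2:0:5120M
```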
When fixed, could this also go to bobtail-dc?