Bug #8591

ceph-disk incorrectly colocates journal when using dm-crypt

Added by Alfredo Deza almost 10 years ago. Updated over 9 years ago.

Status: Resolved
Priority: Normal
Assignee: -
Category: ceph cli
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

If the journal is not specified, ceph-disk fails:

Note: the ceph-disk output below was made slightly more verbose so that the actual commands being run are visible.

ceph-deploy osd create --dmcrypt node1:sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /Users/alfredo/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.4): /Users/alfredo/.virtualenvs/ceph-deploy/bin/ceph-deploy osd create --dmcrypt node1:sdb
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node1:/dev/sdb:
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 12.04 precise
[ceph_deploy.osd][DEBUG ] Deploying osd to node1
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node1][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host node1 disk /dev/sdb journal None activate True
[node1][INFO  ] Running command: sudo ceph-disk-prepare --fs-type xfs --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys --cluster ceph -- /dev/sdb
[node1][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdb
[node1][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --new=1:+0:+5120M --partition-guid=1:57a3a374-c62f-404e-a98a-9114a5af8448 --typecode=1:45b0969e-9b03-4f30-b4c6-5ec00ceff106 --mbrtogpt -- /dev/sdb
[node1][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --change-name=1:ceph journal -- /dev/sdb
[node1][WARNIN] INFO:ceph-disk:Running command: /sbin/cryptsetup remove /dev/mapper/57a3a374-c62f-404e-a98a-9114a5af8448
[node1][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --largest-new=1 --partition-guid=1:d9c52b42-a7b5-49a2-8d5c-306d91083dec --typecode=1:89c57f98-2fe5-4dc0-89c1-5ec00ceff2be -- /dev/sdb
[node1][WARNIN] ceph-disk: Error: Non-zero exit status: 4 Could not create partition 1 from 10485794 to 16777182
[node1][WARNIN] Error encountered; not saving changes.
[node1][WARNIN]
[node1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare --fs-type xfs --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys --cluster ceph -- /dev/sdb
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs

This happens because ceph-disk tries to use the same partition number (1) for both the ceph data and the journal.
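
For reference, a colocated layout would need the data partition to take a different partition number once the journal has claimed partition 1. The following is only a minimal sketch of a non-colliding sgdisk sequence under that assumption (device, journal size, and type-code GUIDs are taken from the log above; the --partition-guid and dm-crypt steps are omitted), not the actual ceph-disk fix:

# Illustration only: the journal takes partition 1 ...
sgdisk --new=1:+0:+5120M --typecode=1:45b0969e-9b03-4f30-b4c6-5ec00ceff106 --mbrtogpt -- /dev/sdb
# ... and the data partition should then take the remaining space as partition 2,
# rather than reusing partition number 1 as the failing run above does.
sgdisk --largest-new=2 --typecode=2:89c57f98-2fe5-4dc0-89c1-5ec00ceff2be -- /dev/sdb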

Specifying the journal fixes it:

ceph-deploy osd create --dmcrypt node1:sdb:sdb2
[ceph_deploy.conf][DEBUG ] found configuration file at: /Users/alfredo/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.4): /Users/alfredo/.virtualenvs/ceph-deploy/bin/ceph-deploy osd create --dmcrypt node1:sdb:sdb2
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node1:/dev/sdb:/dev/sdb2
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 12.04 precise
[ceph_deploy.osd][DEBUG ] Deploying osd to node1
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node1][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host node1 disk /dev/sdb journal /dev/sdb2 activate True
[node1][INFO  ] Running command: sudo ceph-disk-prepare --fs-type xfs --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys --cluster ceph -- /dev/sdb /dev/sdb2
[node1][WARNIN] WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data
[node1][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --largest-new=1 --partition-guid=1:ead51fb5-6c20-4021-a8a1-06291c1927ff --typecode=1:89c57f98-2fe5-4dc0-89c1-5ec00ceff2be -- /dev/sdb
[node1][WARNIN] INFO:ceph-disk:Running command: /sbin/cryptsetup --key-file /etc/ceph/dmcrypt-keys/ead51fb5-6c20-4021-a8a1-06291c1927ff --key-size 256 create ead51fb5-6c20-4021-a8a1-06291c1927ff /dev/sdb1
[node1][WARNIN] INFO:ceph-disk:Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/mapper/ead51fb5-6c20-4021-a8a1-06291c1927ff
[node1][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime -- /dev/mapper/ead51fb5-6c20-4021-a8a1-06291c1927ff /var/lib/ceph/tmp/mnt.YnL7oi
[node1][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.YnL7oi
[node1][WARNIN] INFO:ceph-disk:Running command: /sbin/cryptsetup remove ead51fb5-6c20-4021-a8a1-06291c1927ff
[node1][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-5ec00ceff05d -- /dev/sdb
[node1][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[node1][INFO  ] checking OSD status...
[node1][INFO  ] Running command: sudo ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host node1 is now ready for osd use.
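
As a quick sanity check after the prepare step, the resulting GPT layout on the OSD disk can be inspected directly; this is a generic verification step and not part of the ceph-deploy run above (device name assumed from the log):

# Print the GPT partition table to confirm the journal and data partitions got distinct numbers
sudo sgdisk --print /dev/sdb
# Summarize ceph data/journal partitions, if the installed ceph-disk provides the "list" subcommand
sudo ceph-disk list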

History

#1 Updated by Sage Weil over 9 years ago

  • Status changed from New to Resolved

wip-ceph-disk
