Bug #8292
Closed
ceph-disk prepare output not explicit on too small disk
Description
While trying to run "ceph-disk prepare /dev/sdb", where /dev/sdb has only a 1 GB partition,
we get this output:
STDOUT: Creating new GPT entries.
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
STDERR: INFO:ceph-disk:Will colocate journal with data on /dev/sdb
Could not create partition 2 from 34 to 10485760
Unable to set partition 2's name to 'ceph journal'!
Could not change partition 2's type code to 45b0969e-9b03-4f30-b4c6-b4b80ceff106!
Error encountered; not saving changes.
ceph-disk: Error: Command '['/sbin/sgdisk', '--new=2:0:5120M', '--change-name=2:ceph journal', '--partition-guid=2:1f827f05-0ab6-442b-8a87-6158edefc304', '--typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106', '--mbrtogpt', '--', '/dev/sdb']' returned non-zero exit status 4
This output is not explicit at all, and it gives no indication of what is actually wrong (the disk being too small for the journal).
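For illustration, a minimal sketch of the kind of up-front check the report is asking for: measure the block device's size and compare it against the journal size before attempting any partitioning. The names (device_size_mb, check_journal_fits) and the seek-to-end technique are illustrative assumptions, not ceph-disk's actual internals; the 5120M default matches the journal size shown in the log above.

```python
import os

JOURNAL_SIZE_MB = 5120  # matches the --new=2:0:5120M journal size in the log above


def device_size_mb(dev):
    """Return the size of a device (or file) in megabytes by seeking to its end."""
    with open(dev, 'rb') as f:
        f.seek(0, os.SEEK_END)
        return f.tell() // (1024 * 1024)


def check_journal_fits(dev, journal_mb=JOURNAL_SIZE_MB):
    """Fail with an explicit message if the device cannot hold the journal."""
    size_mb = device_size_mb(dev)
    if journal_mb > size_mb:
        raise SystemExit(
            'ceph-disk: Error: %s device size (%dM) is not big enough '
            'for journal (%dM)' % (dev, size_mb, journal_mb))
```

A check like this, run before sgdisk is ever invoked, would replace the cascade of "Could not create partition" messages with a single explicit error.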
Updated by Alfredo Deza almost 10 years ago
- Status changed from New to In Progress
This is going to be tricky.
I have a working solution with nice, informational output, but the subsequent calls keep running and execution does not halt, so
even though an error is raised, the output later claims the operation completed successfully.
$ ceph-deploy osd --zap create node1:/dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /Users/alfredo/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.2): /Users/alfredo/.virtualenvs/ceph-deploy/bin/ceph-deploy osd --zap create node1:/dev/sdb
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node1:/dev/sdb:
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 12.04 precise
[ceph_deploy.osd][DEBUG ] Deploying osd to node1
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node1][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host node1 disk /dev/sdb journal None activate True
[node1][INFO  ] Running command: sudo ceph-disk-prepare --zap-disk --fs-type xfs --cluster ceph -- /dev/sdb
[node1][WARNIN] Caution: invalid backup GPT header, but valid main header; regenerating
[node1][WARNIN] backup header from main header.
[node1][WARNIN]
[node1][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdb
[node1][WARNIN] ERROR:ceph-disk:refusing to create journal on /dev/sdb
[node1][WARNIN] ERROR:ceph-disk:journal size (5120M) is bigger than device (2048M)
[node1][WARNIN] ceph-disk: Error: /dev/sdb device size (2048M) is not big enough for journal
[node1][DEBUG ] ****************************************************************************
[node1][DEBUG ] Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
[node1][DEBUG ] verification and recovery are STRONGLY recommended.
[node1][DEBUG ] ****************************************************************************
[node1][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[node1][DEBUG ] other utilities.
[node1][DEBUG ] The operation has completed successfully.
[node1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare --zap-disk --fs-type xfs --cluster ceph -- /dev/sdb
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
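The ordering problem described above can be sketched as follows: the size validation has to run, and halt execution, before any destructive zap/partition step, otherwise those steps still execute and the tool prints "The operation has completed successfully." after the error. All names here (prepare, TooSmallError, the log steps) are illustrative assumptions, not ceph-disk's actual internals.

```python
class TooSmallError(Exception):
    """Raised when the device cannot hold the configured journal."""
    pass


def prepare(dev_size_mb, journal_mb, log):
    # Validate first, before any destructive step, and let the
    # exception propagate so nothing further runs.
    if journal_mb > dev_size_mb:
        log.append('ERROR: journal size (%dM) is bigger than device (%dM)'
                   % (journal_mb, dev_size_mb))
        raise TooSmallError(dev_size_mb)
    # Destructive steps only run after validation passes.
    log.append('zapping disk')
    log.append('creating partitions')
    log.append('The operation has completed successfully.')
```

With the check hoisted above the zap, a too-small device produces only the explicit error, never the misleading success message.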
Updated by Alfredo Deza almost 10 years ago
- Status changed from In Progress to Fix Under Review
Pull request opened: https://github.com/ceph/ceph/pull/1874
Updated by Sage Weil almost 10 years ago
- Status changed from Fix Under Review to Resolved
- Source changed from other to Community (dev)