Bug #5107
Status: Closed
ceph-deploy: on centos 6.3, osd create command should be cleaned up
Description
On CentOS 6.3 (burnupi05, burnupi21):
While the osd create command, when used with the zap-disk option, does create the OSDs successfully (disks are mounted and the OSD daemons are running), it reports "Failed to create OSDs" at the end of the command. The zap-disk option does not seem to zap the entire disk, so it sometimes results in an error.
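For comparison, a minimal sketch of what a *full* zap would look like if done by hand (this is not part of ceph-deploy; `zap_disk` is a hypothetical helper, and it assumes sgdisk, dd, and partprobe are available). Unlike the behavior reported above, it also clobbers the start of the disk so stale filesystem signatures cannot survive:

```shell
#!/bin/sh
# Hypothetical helper: fully wipe a disk before running `osd create`.
# DESTRUCTIVE - only ever call this on a disk you intend to erase.
zap_disk() {
    dev="$1"
    [ -b "$dev" ] || { echo "usage: zap_disk /dev/sdX" >&2; return 1; }
    # Destroy GPT structures and any protective/hybrid MBR.
    sgdisk --zap-all -- "$dev" || return 1
    # Zero the first 10 MiB so leftover xfs/ext4 superblocks are gone too.
    dd if=/dev/zero of="$dev" bs=1M count=10 conv=fsync || return 1
    # Ask the kernel to re-read the now-empty partition table.
    partprobe "$dev"
}
```

Zeroing the start of the device may explain the difference from the reported behavior: a GPT-only zap can leave old filesystem signatures behind, consistent with the leftover "other, xfs" and "other, ext4" partitions shown in the disk lists below.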
[ubuntu@burnupi05 ceph-deploy]$ ./ceph-deploy osd create burnupi05:sdb burnupi21:sdc --zap-disk
ceph-disk-prepare --zap-disk -- /dev/sdb returned 1
****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
The operation has completed successfully.
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
Information: Moved requested sector from 2097153 to 2099200 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
meta-data=/dev/sdb1              isize=2048   agcount=4, agsize=60948415 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=243793659, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=119039, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
The operation has completed successfully.
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.
Warning! Main and backup partition tables differ! Use the 'c' and 'e' options
on the recovery & transformation menu to examine the two tables.
Warning! One or more CRCs don't match. You should repair the disk!
INFO:ceph-disk:Will colocate journal with data on /dev/sdb
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sdb (Device or resource busy).  As a result, it may not reflect all of your changes until after reboot.
ceph-disk: Error: Command '['partprobe', '/dev/sdb']' returned non-zero exit status 1
ceph-disk-prepare --zap-disk -- /dev/sdc returned 1
****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
The operation has completed successfully.
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
Information: Moved requested sector from 2097153 to 2099200 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
meta-data=/dev/sdc1              isize=2048   agcount=4, agsize=60948415 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=243793659, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=119039, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
The operation has completed successfully.
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.
Warning! Main and backup partition tables differ! Use the 'c' and 'e' options
on the recovery & transformation menu to examine the two tables.
Warning! One or more CRCs don't match. You should repair the disk!
INFO:ceph-disk:Will colocate journal with data on /dev/sdc
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sdc (Device or resource busy).  As a result, it may not reflect all of your changes until after reboot.
ceph-disk: Error: Command '['partprobe', '/dev/sdc']' returned non-zero exit status 1
ceph-deploy: Failed to create 2 OSDs

[ubuntu@burnupi05 ceph-deploy]$ ./ceph-deploy disk list burnupi21
/dev/sda :
 /dev/sda1 other, ext4, mounted on /
/dev/sdb :
 /dev/sdb1 other, ext4
/dev/sdc :
 /dev/sdc1 ceph data, active, cluster ceph, osd.1, journal /dev/sdc2
 /dev/sdc2 ceph journal, for /dev/sdc1
/dev/sdd :
 /dev/sdd1 ceph data, prepared, unknown cluster c653a3d8-4b27-422b-b7a2-60c62a2afcad, osd.0, journal /dev/sdd2
 /dev/sdd2 ceph journal, for /dev/sdd1
/dev/sde :
 /dev/sde1 ceph data, prepared, unknown cluster 98cf9617-b5c8-4399-a0ec-7f5a638390f6, osd.0, journal /dev/sde2
 /dev/sde2 ceph journal, for /dev/sde1
/dev/sdf :
 /dev/sdf1 other, xfs
/dev/sdg :
 /dev/sdg1 other, ext4
/dev/sdh :
 /dev/sdh1 other, ext4
/dev/sdi :
 /dev/sdi1 other
 /dev/sdi2 other
 /dev/sdi3 other
 /dev/sdi4 other

[ubuntu@burnupi05 ceph-deploy]$ ./ceph-deploy disk list burnupi05
/dev/sda :
 /dev/sda1 other, ext4, mounted on /
/dev/sdb :
 /dev/sdb1 ceph data, active, cluster ceph, osd.0, journal /dev/sdb2
 /dev/sdb2 ceph journal, for /dev/sdb1
/dev/sdc :
 /dev/sdc1 other, xfs
 /dev/sdc2 other
/dev/sdd other, unknown
/dev/sde :
 /dev/sde1 ceph data, unprepared
 /dev/sde2 ceph journal
/dev/sdf other, ext4
/dev/sdg :
 /dev/sdg1 ceph data, unprepared
 /dev/sdg2 ceph journal
/dev/sdh other, ext4
/dev/sdi other, ext4
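The failure above comes from the final partprobe step: the OSDs actually come up, but partprobe returns non-zero because the kernel holds the old partition table on a busy device. A rough workaround sketch (an assumption, not something ceph-deploy does; `reread_table` is a hypothetical helper, and it assumes the util-linux `blockdev` tool is installed):

```shell
#!/bin/sh
# Hypothetical helper: ask the kernel to pick up a new partition table,
# falling back to blockdev when partprobe fails on a busy device.
reread_table() {
    dev="$1"
    partprobe "$dev" 2>/dev/null && return 0
    # partprobe can fail while a partition on the disk is mounted;
    # blockdev --rereadpt asks the kernel directly for a re-read.
    blockdev --rereadpt "$dev" 2>/dev/null && return 0
    echo "kernel still holds the old partition table on $dev; a reboot may be needed" >&2
    return 1
}
```

Since the OSD daemons are running and the data partitions are mounted (see below), the partprobe failure is arguably cosmetic here, which supports the request in the title that the osd create error handling be cleaned up.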
On burnupi05:
[ubuntu@burnupi05 ceph-deploy]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 917G 3.4G 913G 1% /
tmpfs 7.8G 0 7.8G 0% /dev/shm
/dev/sdb1 930G 36M 930G 1% /var/lib/ceph/osd/ceph-0
[ubuntu@burnupi05 ceph-deploy]$ ps -ef | grep ceph
root 39145 1 0 16:46 pts/0 00:00:01 /usr/bin/ceph-mon -i burnupi05 --pid-file /var/run/ceph/mon.burnupi05.pid -c /etc/ceph/ceph.conf
root 40535 1 1 16:47 ? 00:00:02 /usr/bin/ceph-osd -i 0 --pid-file /var/run/ceph/osd.0.pid -c /etc/ceph/ceph.conf
ubuntu 40890 64481 0 16:51 pts/0 00:00:00 grep ceph
On burnupi21:
[ubuntu@burnupi21 .ssh]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 917G 3.3G 914G 1% /
tmpfs 7.8G 0 7.8G 0% /dev/shm
/dev/sdc1 930G 35M 930G 1% /var/lib/ceph/osd/ceph-1
[ubuntu@burnupi21 .ssh]$ ps -ef | grep ceph
root 37445 1 0 16:46 ? 00:00:00 /usr/bin/ceph-mon -i burnupi21 --pid-file /var/run/ceph/mon.burnupi21.pid -c /etc/ceph/ceph.conf
root 38985 1 0 16:48 ? 00:00:00 /usr/bin/ceph-osd -i 1 --pid-file /var/run/ceph/osd.1.pid -c /etc/ceph/ceph.conf
ubuntu 39412 56074 0 16:51 pts/0 00:00:00 grep ceph