Bug #4876

closed

ceph-deploy: osd create command fails to start the osds on centos 6.3

Added by Tamilarasi muthamizhan almost 11 years ago. Updated almost 11 years ago.

Status:
Resolved
Priority:
Immediate
Assignee:
-
Category:
ceph-deploy
Target version:
-
% Done:

0%

Source:
Q/A
Tags:
Backport:
Regression:
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

While the osd create command mounts the disk, it fails to start the OSD daemon.

tamil@tamil-VirtualBox:~/centos/ceph-deploy$ ./ceph-deploy disk list burnupi05
/dev/sda :
/dev/sda1 other, ext4, mounted on /
/dev/sdb :
/dev/sdb1 other, xfs
/dev/sdb2 other
/dev/sdc other, unknown
/dev/sdd other, unknown
/dev/sde :
/dev/sde1 ceph data, unprepared
/dev/sdf :
/dev/sdf1 ceph data, unprepared
/dev/sdg :
/dev/sdg1 ceph data, unprepared
/dev/sdh :
/dev/sdh1 ceph data, unprepared
/dev/sdi :
/dev/sdi1 other, btrfs
tamil@tamil-VirtualBox:~/centos/ceph-deploy$
tamil@tamil-VirtualBox:~/centos/ceph-deploy$
tamil@tamil-VirtualBox:~/centos/ceph-deploy$ ./ceph-deploy osd create burnupi05:sde --zap-disk
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.

tamil@tamil-VirtualBox:~/centos/ceph-deploy$ ./ceph-deploy disk list burnupi05
/dev/sda :
/dev/sda1 other, ext4, mounted on /
/dev/sdb :
/dev/sdb1 other, xfs
/dev/sdb2 other
/dev/sdc other, unknown
/dev/sdd other, unknown
/dev/sde :
/dev/sde1 ceph data, active, cluster ceph, osd.0, journal /dev/sde2
/dev/sde2 ceph journal, for /dev/sde1
/dev/sdf :
/dev/sdf1 ceph data, unprepared
/dev/sdg :
/dev/sdg1 ceph data, unprepared
/dev/sdh :
/dev/sdh1 ceph data, unprepared
/dev/sdi :
/dev/sdi1 other, btrfs
tamil@tamil-VirtualBox:~/centos/ceph-deploy$ ./ceph-deploy osd create burnupi05:sdf --zap-disk
********************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
********************************************************************
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
The operation has completed successfully.
tamil@tamil-VirtualBox:~/centos/ceph-deploy$ ./ceph-deploy disk list burnupi05
/dev/sda :
/dev/sda1 other, ext4, mounted on /
/dev/sdb :
/dev/sdb1 other, xfs
/dev/sdb2 other
/dev/sdc other, unknown
/dev/sdd other, unknown
/dev/sde :
/dev/sde1 ceph data, active, cluster ceph, osd.0, journal /dev/sde2
/dev/sde2 ceph journal, for /dev/sde1
/dev/sdf :
/dev/sdf1 ceph data, active, cluster ceph, osd.1, journal /dev/sdf2
/dev/sdf2 ceph journal, for /dev/sdf1
/dev/sdg :
/dev/sdg1 ceph data, unprepared
/dev/sdh :
/dev/sdh1 ceph data, unprepared
/dev/sdi :
/dev/sdi1 other, btrfs

on burnupi05:

[ubuntu@burnupi05 ceph]$ ps ef | grep ceph
root 43543 1 0 12:10 ? 00:00:02 /usr/bin/ceph-mon -i burnupi05 --pid-file /var/run/ceph/mon.burnupi05.pid -c /etc/ceph/ceph.conf
root 43901 1 0 12:11 ? 00:00:01 /usr/bin/ceph-mds -i burnupi05 --pid-file /var/run/ceph/mds.burnupi05.pid -c /etc/ceph/ceph.conf
ubuntu 47873 43601 0 13:50 pts/0 00:00:00 grep ceph
[ubuntu@burnupi05 ceph]$ sudo ceph -s
2013-04-30 13:50:10.096653 7f1c4f296760 1 -- :/0 messenger.start
2013-04-30 13:50:10.097663 7f1c4f296760 1 -- :/47875 --> 10.214.133.10:6789/0 -- auth(proto 0 30 bytes epoch 0) v1 -- ?+0 0x29e49f0 con 0x29e45b0
2013-04-30 13:50:10.098089 7f1c4c852700 1 -- 10.214.133.10:0/47875 learned my addr 10.214.133.10:0/47875
2013-04-30 13:50:10.099257 7f1c4d253700 1 -- 10.214.133.10:0/47875 <== mon.0 10.214.133.10:6789/0 1 ==== mon_map v1 ==== 199+0+0 (2463891249 0 0) 0x7f1c380009f0 con 0x29e45b0
2013-04-30 13:50:10.099469 7f1c4d253700 1 -- 10.214.133.10:0/47875 <== mon.0 10.214.133.10:6789/0 2 ==== auth_reply(proto 2 0 Success) v1 ==== 33+0+0 (105061566 0 0) 0x7f1c38000e20 con 0x29e45b0
2013-04-30 13:50:10.099863 7f1c4d253700 1 -- 10.214.133.10:0/47875 --> 10.214.133.10:6789/0 -- auth(proto 2 32 bytes epoch 0) v1 -- ?+0 0x7f1c3c001270 con 0x29e45b0
2013-04-30 13:50:10.100890 7f1c4d253700 1 -- 10.214.133.10:0/47875 <== mon.0 10.214.133.10:6789/0 3 ==== auth_reply(proto 2 0 Success) v1 ==== 206+0+0 (2711789552 0 0) 0x7f1c38000e20 con 0x29e45b0
2013-04-30 13:50:10.101149 7f1c4d253700 1 -- 10.214.133.10:0/47875 --> 10.214.133.10:6789/0 -- auth(proto 2 165 bytes epoch 0) v1 -- ?+0 0x7f1c3c0019a0 con 0x29e45b0
2013-04-30 13:50:10.102252 7f1c4d253700 1 -- 10.214.133.10:0/47875 <== mon.0 10.214.133.10:6789/0 4 ==== auth_reply(proto 2 0 Success) v1 ==== 409+0+0 (415475918 0 0) 0x7f1c38001050 con 0x29e45b0
2013-04-30 13:50:10.102468 7f1c4d253700 1 -- 10.214.133.10:0/47875 --> 10.214.133.10:6789/0 -- mon_subscribe({monmap=0+}) v2 -- ?+0 0x29e4e40 con 0x29e45b0
2013-04-30 13:50:10.102661 7f1c4f296760 1 -- 10.214.133.10:0/47875 --> 10.214.133.10:6789/0 -- mon_command(status v 0) v1 -- ?+0 0x29e5070 con 0x29e45b0
2013-04-30 13:50:10.102888 7f1c4d253700 1 -- 10.214.133.10:0/47875 <== mon.0 10.214.133.10:6789/0 5 ==== mon_map v1 ==== 199+0+0 (2463891249 0 0) 0x7f1c38000ad0 con 0x29e45b0
2013-04-30 13:50:10.103005 7f1c4d253700 1 -- 10.214.133.10:0/47875 <== mon.0 10.214.133.10:6789/0 6 ==== mon_subscribe_ack(300s) v1 ==== 20+0+0 (4167029393 0 0) 0x7f1c38001110 con 0x29e45b0
2013-04-30 13:50:10.103274 7f1c4d253700 1 health HEALTH_WARN 192 pgs stuck inactive; 192 pgs stuck unclean
monmap e1: 1 mons at {burnupi05=10.214.133.10:6789/0}, election epoch 2, quorum 0 burnupi05
osdmap e3: 2 osds: 0 up, 0 in
pgmap v4: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 KB avail
mdsmap e3: 1/1/1 up {0=burnupi05=up:creating}

-- 10.214.133.10:0/47875 <== mon.0 10.214.133.10:6789/0 7 ==== mon_command_ack([status]=0 health HEALTH_WARN 192 pgs stuck inactive; 192 pgs stuck unclean
monmap e1: 1 mons at {burnupi05=10.214.133.10:6789/0}, election epoch 2, quorum 0 burnupi05
osdmap e3: 2 osds: 0 up, 0 in
pgmap v4: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 KB avail
mdsmap e3: 1/1/1 up {0=burnupi05=up:creating}
v0) v1 ==== 365+0+0 (1793964340 0 0) 0x7f1c38000ad0 con 0x29e45b0
2013-04-30 13:50:10.103408 7f1c4f296760 1 -- 10.214.133.10:0/47875 mark_down_all
2013-04-30 13:50:10.103811 7f1c4f296760 1 -- 10.214.133.10:0/47875 shutdown complete.

Leaving the test machine, burnupi05, in its current state.
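For reference, a rough manual check/workaround on the OSD host, assuming the cuttlefish-era packaging on CentOS 6.3 (the ceph-disk-activate helper and the sysvinit ceph service script; exact tool names may differ by version):

```shell
# On burnupi05. Sketch only -- assumes ceph-disk-activate and the
# sysvinit "ceph" service script shipped with this release.

# Check whether the prepared data partition was actually mounted:
mount | grep /var/lib/ceph/osd

# Manually activate the partition that ceph-deploy prepared:
sudo ceph-disk-activate /dev/sde1

# Or try starting the configured OSD daemons via the init script:
sudo service ceph start osd
```

If ceph-disk-activate mounts the partition and starts the daemon, the failure is likely in the activation step that osd create is supposed to trigger (e.g. udev rules not firing on this platform), rather than in disk preparation itself.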

#1

Updated by Sage Weil almost 11 years ago

  • Status changed from New to Resolved

commit:cd1d6fb3f9b906f13cf281294d9272e1e92a0243
