Bug #5199
closed: ceph-deploy: on fedora18, osd create command doesn't seem to mount the disks
Status:
Resolved
Priority:
High
Assignee:
-
Category:
ceph-deploy
Target version:
-
% Done:
0%
Source:
Q/A
Tags:
Backport:
Regression:
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
test setup: burnupi22
While the osd create command succeeds with no errors, the OSD disks are not mounted and the OSD processes are not yet running:
[ubuntu@burnupi22 ceph-deploy]$ ./ceph-deploy osd create burnupi22:/dev/sdc burnupi22:/dev/sdd --zap-disk
[ubuntu@burnupi22 ceph-deploy]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs          917G  5.0G  912G   1% /
devtmpfs        7.8G     0  7.8G   0% /dev
tmpfs           7.9G     0  7.9G   0% /dev/shm
tmpfs           7.9G   24M  7.9G   1% /run
tmpfs           7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/sda1       917G  5.0G  912G   1% /
tmpfs           7.9G  6.0M  7.9G   1% /tmp
[ubuntu@burnupi22 ceph-deploy]$ ./ceph-deploy disk list burnupi22
/dev/sda :
 /dev/sda1 other, ext4, mounted on /
/dev/sdb :
 /dev/sdb1 other, ext4
/dev/sdc :
 /dev/sdc1 ceph data, prepared, cluster ceph, osd.0, journal /dev/sdc2
 /dev/sdc2 ceph journal, for /dev/sdc1
/dev/sdd :
 /dev/sdd1 ceph data, prepared, cluster ceph, osd.1, journal /dev/sdd2
 /dev/sdd2 ceph journal, for /dev/sdd1
/dev/sde :
 /dev/sde1 ceph data, prepared, unknown cluster d9e7a4ae-e535-4c69-ba3c-5fc031e25bb1, osd.7
/dev/sdf :
 /dev/sdf1 other, xfs
/dev/sdg :
 /dev/sdg1 other, ext4
/dev/sdh :
 /dev/sdh1 other, ext4
/dev/sdi :
 /dev/sdi1 other
 /dev/sdi2 other
 /dev/sdi3 other
 /dev/sdi4 other
[ubuntu@burnupi22 ceph-deploy]$ ps -ef | grep ceph
root     33802     1  0 20:39 pts/0    00:00:00 /usr/bin/ceph-mon -i burnupi22 --pid-file /var/run/ceph/mon.burnupi22.pid -c /etc/ceph/ceph.conf
ubuntu   34736 33317  0 20:43 pts/0    00:00:00 grep --color=auto ceph
[ubuntu@burnupi22 ceph-deploy]$ ./ceph-deploy osd create burnupi22:sde --zap-disk
[ubuntu@burnupi22 ceph-deploy]$ ./ceph-deploy disk list burnupi22
/dev/sda :
 /dev/sda1 other, ext4, mounted on /
/dev/sdb :
 /dev/sdb1 other, ext4
/dev/sdc :
 /dev/sdc1 ceph data, prepared, cluster ceph, osd.0, journal /dev/sdc2
 /dev/sdc2 ceph journal, for /dev/sdc1
/dev/sdd :
 /dev/sdd1 ceph data, prepared, cluster ceph, osd.1, journal /dev/sdd2
 /dev/sdd2 ceph journal, for /dev/sdd1
/dev/sde :
 /dev/sde1 ceph data, prepared, cluster ceph, osd.2, journal /dev/sde2
 /dev/sde2 ceph journal, for /dev/sde1
/dev/sdf :
 /dev/sdf1 other, xfs
/dev/sdg :
 /dev/sdg1 other, ext4
/dev/sdh :
 /dev/sdh1 other, ext4
/dev/sdi :
 /dev/sdi1 other
 /dev/sdi2 other
 /dev/sdi3 other
 /dev/sdi4 other
[ubuntu@burnupi22 ceph-deploy]$ ps -ef | grep ceph
root     33802     1  0 20:39 pts/0    00:00:00 /usr/bin/ceph-mon -i burnupi22 --pid-file /var/run/ceph/mon.burnupi22.pid -c /etc/ceph/ceph.conf
ubuntu   35216 33317  0 20:45 pts/0    00:00:00 grep --color=auto ceph
[ubuntu@burnupi22 ceph-deploy]$ cat ceph.conf
[global]
fsid = 79aceedd-142d-4edb-9916-02d976b17376
mon initial members = burnupi22
mon host = 10.214.134.8
auth supported = cephx
osd journal size = 1024
filestore xattr use omap = true
osd crush chooseleaf type = 0
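The disk listings above show every new OSD stuck in the "prepared" state: osd create ran the prepare step, but the data partitions were never activated (mounted) and no ceph-osd processes started. A minimal sketch for confirming this symptom per OSD, assuming the conventional /var/lib/ceph/osd/ceph-<id> mount point (the helper name check_osd_mounted is hypothetical, not part of ceph-deploy):

```shell
#!/bin/sh
# Report whether an OSD's data partition is mounted at the conventional
# /var/lib/ceph/osd/ceph-<id> location, by scanning /proc/mounts.
check_osd_mounted() {
  id="$1"
  if grep -q " /var/lib/ceph/osd/ceph-$id " /proc/mounts; then
    echo "osd.$id: mounted"
  else
    echo "osd.$id: not mounted"
  fi
}

check_osd_mounted 0
check_osd_mounted 1
```

If the partitions are indeed unmounted, activating them manually on the host (for example via ceph-disk activate, where available) would be the usual workaround; whether activation is expected to happen automatically here depends on the udev/ceph-disk hooks shipped on Fedora 18.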