Bug #1546
/var/run/ceph is not present during mkcephfs: AdminSocketConfigObs::init: failed
Status: Resolved
Priority: Normal
Assignee: -
Category: OSD
Target version: -
% Done: 0%
Description
I was just doing a mkcephfs on my cluster and saw:
Scanning for Btrfs filesystems
 ** WARNING: Ceph is still under development.  Any feedback can be directed **
 **  at ceph-devel@vger.kernel.org or http://ceph.newdream.net/.            **
2011-09-16 16:57:49.331939 7fd21c8be760 AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/osd.23.asok': error 2: No such file or directory
2011-09-16 16:57:49.332069 7fd21c8be760 AdminSocketConfigObs: failed to start AdminSocket
2011-09-16 16:57:49.830540 7fd21c8be760 created object store /var/lib/ceph/osd.23 journal /dev/data/journal3 for osd23 fsid f0ffb5f9-af1b-aa7e-106f-1e039807ca43
creating private key for osd.23 keyring /etc/ceph/keyring.osd.23
creating /etc/ceph/keyring.osd.23
collecting osd.23 key
I just updated all my nodes and rebooted them, so /var/run/ceph was not present.
To me it seems it's not actually a problem that the admin socket can't be created during a mkcephfs, but new users will probably think something went wrong.
I was thinking about having mkcephfs create the directory, but it's also an option for the OSD to skip the admin socket when it is only creating a new FS.