Bug #19950
ceph-deploy doesn't automatically create ceph-<number> in osd directory
Status:
Closed
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
ceph-deploy
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
Hi,
I tried to upgrade my existing Ceph cluster from Hammer to Jewel (10.2.7) on CentOS 7.3.1611.
I ran the command
ceph-deploy --overwrite-conf osd create nodetwo:sdc:/dev/sdb
where sdc holds the OSD data, sdb is the journal, and sdd (another drive) holds additional OSD data.
But it does not seem to create, on nodetwo,
/var/lib/ceph/osd/ceph-3 and /var/lib/ceph/osd/ceph-4
When I run ceph osd tree, osd.3 is down (ceph-3 corresponds to my osd.3 and ceph-4 to my osd.4).
Checking the logs, it tried to enable bluestore but did not proceed, since the existing OSDs use filestore. I don't know if it is possible to have multiple OSD backends?
2017-05-17 09:27:13.868044 7fb5a7403800  0 setuser_match_path /var/lib/ceph/osd/ceph-4 owned by 167:167. set uid:gid to 167:167 (ceph:ceph)
2017-05-17 09:27:13.868052 7fb5a7403800  0 ceph version 10.2.7 (50e863e0f4bc8f4b9e31156de690d765af245185), process ceph-osd, pid 401914
2017-05-17 09:27:13.868153 7fb5a7403800 -1 ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-4: (2) No such file or directory
2017-05-17 09:27:34.148588 7f4f8f890800 -1 WARNING: the following dangerous and experimental features are enabled: bluestore
2017-05-17 09:27:34.148654 7f4f8f890800 -1 WARNING: the following dangerous and experimental features are enabled: bluestore
2017-05-17 09:27:34.148656 7f4f8f890800  0 setuser_match_path /var/lib/ceph/osd/ceph-4 owned by 167:167. set uid:gid to 167:167 (ceph:ceph)
2017-05-17 09:27:34.148663 7f4f8f890800  0 ceph version 10.2.7 (50e863e0f4bc8f4b9e31156de690d765af245185), process ceph-osd, pid 402225
2017-05-17 09:27:34.148799 7f4f8f890800 -1 ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-4: (2) No such file or directory
2017-05-17 09:27:54.401055 7f161bd29800 -1 WARNING: the following dangerous and experimental features are enabled: bluestore
2017-05-17 09:27:54.401063 7f161bd29800  0 setuser_match_path /var/lib/ceph/osd/ceph-4 owned by 167:167. set uid:gid to 167:167 (ceph:ceph)
It only worked after I created the ceph-4 and ceph-3 directories mentioned above, changed their ownership to ceph, and restarted the service:
systemctl restart ceph-osd@4.service
It then added the said OSDs and linked the SSD partition as the journal.
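The manual workaround described above can be sketched as a short script (the OSD ids 3 and 4 and the /var/lib/ceph/osd path are taken from the logs in this report; prepare_osd_dir is a hypothetical helper name, and the commands need root on nodetwo):

```shell
# Recreate the directory ceph-deploy failed to make and hand it to the
# ceph user, so the OSD daemon can mount it and open its superblock.
prepare_osd_dir() {
    mkdir -p "$1/ceph-$2"
    chown ceph:ceph "$1/ceph-$2" 2>/dev/null || true   # needs the ceph user to exist
}

for id in 3 4; do
    prepare_osd_dir /var/lib/ceph/osd "$id"
    systemctl restart "ceph-osd@$id.service" 2>/dev/null || true
done
```

This only treats the symptom (the missing mount-point directories); it does not explain why ceph-deploy skipped creating them in the first place.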
Regards,
Mario
Updated by Mario Codeniera almost 7 years ago
Additional info:
When I remove / comment out this line in ceph.conf,
setuser match path = /var/lib/ceph/$type/$cluster-$id
it does create the directories mentioned.
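For reference, the change would look something like this in /etc/ceph/ceph.conf (placing the option under [osd] is my assumption; the report does not show which section it was in):

```ini
[osd]
# setuser match path = /var/lib/ceph/$type/$cluster-$id
```

With the option commented out, the daemon no longer refuses to start when the path's ownership check cannot be satisfied, and the directory gets created.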