Bug #7222 (Closed)
init-ceph failed when cluster created by "mkcephfs" and start osd firstly
% Done: 0%
Source: other
Tags:
Backport:
Regression:
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
When an OSD is not yet in the OSDMap and "init-ceph" is run to start it, the "ceph osd crush create-or-move ..." step fails with an error.
Commit 177e2ab1cad325b875249a514bc1774ff32e0074 removed the trailing "|| :" from that shell statement, which is what causes the failure.
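The effect of the removed "|| :" can be shown with a minimal shell sketch. This is illustrative only, not the actual init-ceph code: "crush_move" is a hypothetical stand-in for the "ceph osd crush create-or-move ..." call that fails when the OSD is not in the OSDMap.

```shell
# Hypothetical stand-in for "ceph osd crush create-or-move ...",
# which exits non-zero when the OSD is not yet in the OSDMap.
crush_move() { return 1; }

# With the trailing "|| :", the failure is swallowed: ":" always
# succeeds, so the statement's exit status is 0 and startup continues.
crush_move || :
echo "with '|| :': $?"          # prints 0

# Without "|| :", the non-zero status propagates, and init-ceph
# treats the failed crush move as a fatal startup error.
crush_move || rc=$?
echo "without '|| :': ${rc}"    # prints 1
```

This is why the same cluster started fine before the commit: the crush failure was silently ignored rather than aborting startup.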
Updated by Sage Weil over 10 years ago
When is the osd not in the osdmap? This is normally done by ceph-disk activate...
We somewhat deliberately error out there if the crush move fails, so that a disk doesn't come up in the wrong rack/host.
Updated by Haomai Wang over 10 years ago
Yes, ceph-disk can do it.
But if "mkcephfs" is used, it may fail. Maybe it is better to handle more of this on the "mkcephfs" side.
Updated by Haomai Wang about 10 years ago
mkcephfs won't add the osd id to the osdmap. So if the cluster was created by mkcephfs, "osd crush update on start = 0" needs to be added to ceph.conf.
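The workaround above can be written as a ceph.conf fragment; a sketch, assuming the option is placed in the [osd] section:

```ini
[osd]
    # Skip the "ceph osd crush create-or-move" step in init-ceph at
    # startup, since an mkcephfs-created cluster may not have the OSD
    # in the OSDMap yet and the crush call would fail.
    osd crush update on start = 0
```

With this set, the operator is responsible for placing the OSD in the CRUSH map manually.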
Updated by Haomai Wang about 10 years ago
- Subject changed from init-ceph failed when start osd firstly to init-ceph failed when cluster created by "mkcephfs" and start osd firstly and