Bug #7222


init-ceph fails when the cluster was created by "mkcephfs" and an OSD is started first

Added by Haomai Wang over 10 years ago. Updated about 10 years ago.

Status:
Won't Fix
Priority:
Normal
Assignee:
Category:
ceph cli
Target version:
-
% Done:

0%

Source:
other
Tags:
Backport:
Regression:
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

When an OSD is not yet in the OSDMap and "init-ceph" is run to start it, an error is reported when "ceph osd crush create-or-move ..." is executed.

Commit 177e2ab1cad325b875249a514bc1774ff32e0074 removed the "|| :" at the end of that shell statement, which causes the failure.
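A minimal sketch of the pattern in question. Here `crush_move` is a hypothetical stand-in for the real "ceph osd crush create-or-move ..." invocation, simulating the error returned when the OSD is not in the OSDMap:

```shell
#!/bin/sh
# Hypothetical stand-in for "ceph osd crush create-or-move ...";
# it simulates the failure seen when the OSD is not in the OSDMap.
crush_move() {
    echo "Error ENOENT: osd.0 does not exist" >&2
    return 1
}

set -e  # errexit: any unguarded failing command aborts startup

# Without the trailing "|| :", the failing crush move would abort here.
# With "|| :" (the part removed by commit 177e2ab...), the nonzero exit
# status is swallowed by the no-op ":" and startup continues.
crush_move || :
echo "osd startup continues"
```

With `set -e` in effect, appending `|| :` is the usual shell idiom for marking a single command's failure as nonfatal.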

Actions #1

Updated by Sage Weil over 10 years ago

when is the osd not in the osdmap? this is normally done by ceph-disk activate...

we somewhat deliberately error out there if the crush move fails so that you don't have a disk come up in the wrong rack/host

Actions #2

Updated by Haomai Wang over 10 years ago

Yes, ceph-disk can do it.

But if "mkcephfs" is used, it may fail. Maybe it is better to do more on the "mkcephfs" side.

Actions #3

Updated by Haomai Wang about 10 years ago

mkcephfs won't add the osd id to the OSDMap. So if the cluster was created by mkcephfs, "osd crush update on start = 0" needs to be added to ceph.conf.
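The workaround above amounts to a small ceph.conf fragment (a sketch; section placement under [osd] is the usual convention for per-OSD options):

```
[osd]
    ; Skip the automatic "ceph osd crush create-or-move" on OSD start,
    ; so init-ceph does not fail for OSDs mkcephfs never added to the map.
    osd crush update on start = 0
```

With this set, the CRUSH location must be maintained by hand instead of being updated automatically at startup.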

Actions #4

Updated by Haomai Wang about 10 years ago

  • Status changed from 12 to Won't Fix
Actions #5

Updated by Haomai Wang about 10 years ago

  • Subject changed from init-ceph failed when start osd firstly to init-ceph fails when the cluster was created by "mkcephfs" and an OSD is started first
