Bug #12774

closed

ceph: add new osd on existing host, status stays down

Added by Jeddy Liu over 8 years ago. Updated about 7 years ago.

Status:
Can't reproduce
Priority:
Normal
Assignee:
-
Category:
OSDMap
Target version:
-
% Done:

0%

Source:
other
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
kcephfs
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Description of problem:
A healthy cluster has 3 hosts, with one mon and one osd per host; cluster status is 248 active+clean. After adding a new osd on an existing host, the cluster status changed to 64 active+remapped, 184 active+clean. The new osd process is running, but its status in the cluster is down.

Version:
system: ubuntu 14.04 LTS
kernel: 3.13.0-24-generic
ceph: ceph version 0.80.9

How reproducible:
always

Steps to Reproduce:
1. Create a new osd on an existing host manually
2. Run `sudo ceph -s` to check the status
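The manual creation in step 1 roughly corresponds to the Firefly-era (0.80.x) short-form procedure sketched below. This is an illustration, not the reporter's exact commands: it assumes the default cluster name `ceph`, the default data-directory layout, and Upstart on Ubuntu 14.04; the `add_osd` helper name is hypothetical.

```shell
#!/bin/sh
# Sketch: manually adding an OSD on an existing host (Ceph 0.80.x era).
# Assumes cluster name "ceph" and default paths; add_osd is a hypothetical helper.
add_osd() {
    id=$(ceph osd create)                       # allocate the next free osd id
    mkdir -p "/var/lib/ceph/osd/ceph-${id}"     # default data directory layout
    ceph-osd -i "${id}" --mkfs --mkkey          # initialize the data dir and key
    ceph auth add "osd.${id}" \
        osd 'allow *' mon 'allow rwx' \
        -i "/var/lib/ceph/osd/ceph-${id}/keyring"
    # Place the osd in the CRUSH map under this host so PGs can map to it;
    # skipping this step is a common reason a new osd never comes up.
    ceph osd crush add "osd.${id}" 1.0 "host=$(hostname -s)"
    start ceph-osd "id=${id}"                   # Upstart on Ubuntu 14.04
}

# Only attempt this on a host that actually has ceph installed.
if command -v ceph >/dev/null 2>&1; then
    add_osd
fi
```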

Actual result:
new osd status stays down

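When the daemon process is running but the cluster map still shows it down, a few read-only checks can narrow down where it is stuck. A sketch follows; the osd id 3 is taken from the attached osd_3_running.png, the admin-socket and log paths assume the default locations, and `check_osd` is a hypothetical helper.

```shell
#!/bin/sh
# Sketch: diagnosing a running-but-down OSD (id 3 assumed from the attachments).
check_osd() {
    id=$1
    ceph osd tree                                   # up/down state and CRUSH position
    ceph osd dump | grep "osd.${id}"                # in/out, up/down, bound addresses
    # Ask the daemon directly over its admin socket (default path assumed).
    ceph --admin-daemon "/var/run/ceph/ceph-osd.${id}.asok" status
    tail -n 50 "/var/log/ceph/ceph-osd.${id}.log"   # look for boot/heartbeat errors
}

# Only run against a host that actually has ceph installed.
if command -v ceph >/dev/null 2>&1; then
    check_osd 3
fi
```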

Files

cluster_osd_tree.png (18.1 KB) cluster_osd_tree.png ceph osd tree Jeddy Liu, 08/25/2015 03:21 AM
cluster_osd_tree.png (18.1 KB) cluster_osd_tree.png ceph -s Jeddy Liu, 08/25/2015 03:26 AM
osd_3_running.png (3.79 KB) osd_3_running.png Jeddy Liu, 08/25/2015 03:26 AM
