Bug #3723

closed

ceph osd down command reports incorrectly

Added by Ken Franklin over 11 years ago. Updated almost 11 years ago.

Status: Can't reproduce
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Q/A

Description

Issuing the command "sudo ceph osd down 2" reports that osd.2 is already down, but "sudo ceph osd stat" reports that all OSDs are up.

ubuntu@burnupi56:/mnt/node3$ sudo ceph osd down 2
osd.2 is already down.
ubuntu@burnupi56:/mnt/node3$ sudo ceph osd stat
e174: 3 osds: 3 up, 3 in
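
For reference, a couple of additional commands that could help confirm what state the monitors actually record for osd.2 in the osdmap (a minimal sketch; exact output formatting differs between argonaut and bobtail):

sudo ceph osd tree
sudo ceph osd dump | grep 'osd\.2'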

The test environment is a mixed 3-node cluster, with the primary and second nodes on argonaut and the third node on bobtail. The cluster's ceph.conf:

[global]
auth supported = cephx
[osd]
osd journal size = 1000
filestore_xattr_use_omap = true
debug osd = 20
osd min pg log entries = 10

[osd.0]
host = burnupi56
[osd.1]
host = burnupi57
[osd.2]
host = burnupi61

[mon.a]
host = burnupi56
mon addr = 10.214.136.18:6789
[mon.b]
host = burnupi57
mon addr = 10.214.136.16:6789
[mon.c]
host = burnupi61
mon addr = 10.214.136.8:6789

[mds.a]
host = burnupi56

[client.radosgw.gateway]
host = burnupi56
keyring = /etc/ceph/keyring.radosgw.gateway
rgw socket path = /tmp/radosgw.sock
log file = /var/log/ceph/radosgw.log
rgw enable usage log = true
rgw usage log tick interval = 30
rgw usage log flush threshold = 1024
rgw usage max shards = 32
rgw usage max user shards = 1
