Bug #3723

ceph osd down command reports incorrectly

Added by Ken Franklin about 11 years ago. Updated almost 11 years ago.

Status:
Can't reproduce
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
Q/A
Tags:
Backport:
Regression:
No
Severity:
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Issuing the command "sudo ceph osd down 2" reports that osd.2 is already down, but "sudo ceph osd stat" reports that all OSDs are up.

ubuntu@burnupi56:/mnt/node3$ sudo ceph osd down 2
osd.2 is already down.
ubuntu@burnupi56:/mnt/node3$ sudo ceph osd stat
e174: 3 osds: 3 up, 3 in

The test environment is a mixed 3-node cluster: the primary and the 2nd node run argonaut, the 3rd node runs bobtail.
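The `osd stat` summary only gives cluster-wide counts; the per-OSD up/down and in/out flags can be cross-checked in the output of `ceph osd dump`. A minimal sketch of that cross-check, run here against a captured sample line from this ticket rather than a live cluster (on a real cluster you would pipe `sudo ceph osd dump` instead of the heredoc):

```shell
#!/bin/sh
# Extract the up/down and in/out columns for a single OSD from
# `ceph osd dump`-style output. Sample line captured from this ticket.
cat <<'EOF' > /tmp/osd_dump_sample.txt
osd.2 up out weight 0 up_from 104 up_thru 134 down_at 93 last_clean_interval [64,92) 10.214.133.8:6803/54042 10.214.133.8:6804/54042 10.214.133.8:6805/54042 exists,up
EOF
# Fields: $1 = osd id, $2 = up/down, $3 = in/out.
awk '$1 == "osd.2" { print $1, $2, $3 }' /tmp/osd_dump_sample.txt
```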

[global]
auth supported = cephx
[osd]
osd journal size = 1000
filestore_xattr_use_omap = true
debug osd = 20
osd min pg log entries = 10

[osd.0]
host = burnupi56
[osd.1]
host = burnupi57
[osd.2]
host = burnupi61

[mon.a]
host = burnupi56
mon addr = 10.214.136.18:6789
[mon.b]
host = burnupi57
mon addr = 10.214.136.16:6789
[mon.c]
host = burnupi61
mon addr = 10.214.136.8:6789

[mds.a]
host = burnupi56

[client.radosgw.gateway]
host = burnupi56
keyring = /etc/ceph/keyring.radosgw.gateway
rgw socket path = /tmp/radosgw.sock
log file = /var/log/ceph/radosgw.log
rgw enable usage log = true
rgw usage log tick interval = 30
rgw usage log flush threshold = 1024
rgw usage max shards = 32
rgw usage max user shards = 1

History

#1 Updated by Tamilarasi muthamizhan about 11 years ago

  • Source changed from Development to Q/A

The same misreporting happens with the "ceph osd in" command:

ubuntu@burnupi06:/etc/ceph$ sudo ceph osd in 2 -k /etc/ceph/ceph.keyring
osd.2 is already in.

ubuntu@burnupi07:/etc/ceph$ sudo ceph osd dump
max_osd 7
osd.1 up out weight 0 up_from 103 up_thru 111 down_at 93 last_clean_interval [62,92) 10.214.133.8:6800/53808 10.214.133.8:6801/53808 10.214.133.8:6802/53808 exists,up 99712443-5217-454b-9d5d-3121725947b1
osd.2 up out weight 0 up_from 104 up_thru 134 down_at 93 last_clean_interval [64,92) 10.214.133.8:6803/54042 10.214.133.8:6804/54042 10.214.133.8:6805/54042 exists,up 85b6cafe-c985-4206-a7e2-d7c476301e14
osd.3 up in weight 1 up_from 107 up_thru 135 down_at 106 last_clean_interval [21,99) 10.214.134.38:6800/53517 10.214.134.38:6801/53517 10.214.134.38:6802/53517 exists,up 46e4c15b-2dbc-44d0-8a23-c9df503b2e9a
osd.4 up out weight 0 up_from 109 up_thru 121 down_at 108 last_clean_interval [21,99) 10.214.134.38:6803/53766 10.214.134.38:6804/53766 10.214.134.38:6805/53766 exists,up 13a3c459-e473-4554-80be-1d041f937fc5
osd.5 up in weight 1 up_from 101 up_thru 132 down_at 100 last_clean_interval [18,98) 10.214.134.36:6808/51167 10.214.134.36:6809/51167 10.214.134.36:6810/51167 exists,up 8fe012a4-a03f-43f9-b5d0-03cd1aa251c1
osd.6 up out weight 0 up_from 101 up_thru 119 down_at 100 last_clean_interval [12,98) 10.214.134.36:6811/55118 10.214.134.36:6812/55118 10.214.134.36:6813/55118 exists,up 007cb7bb-d0ff-4547-b4fa-20f83f00877c
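The dump output is what makes the "already in" reply look wrong: osd.2 is listed as out with weight 0 (in the osdmap, "out" corresponds to a weight of 0), yet the command claims it is already in. A quick way to list every OSD the map considers out is to filter on the third column; the sketch below runs against a shortened captured sample from the dump above, not a live cluster (on a real cluster, pipe `sudo ceph osd dump` into the awk filter instead):

```shell
#!/bin/sh
# List OSDs that the osdmap marks "out" (third field of each osd line).
# Shortened sample rows taken from this ticket's `ceph osd dump` output.
cat <<'EOF' > /tmp/osd_dump_rows.txt
osd.1 up out weight 0
osd.2 up out weight 0
osd.3 up in weight 1
osd.4 up out weight 0
osd.5 up in weight 1
osd.6 up out weight 0
EOF
awk '$3 == "out" { print $1 }' /tmp/osd_dump_rows.txt
```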

#2 Updated by Sage Weil almost 11 years ago

  • Status changed from New to Can't reproduce
