Bug #22346

OSD_ORPHAN issues after jewel->luminous upgrade, but orphaned osds not in crushmap

Added by Graham Allan over 6 years ago. Updated about 6 years ago.

Status: Resolved
Priority: Normal
Assignee:
Category: Administration/Usability
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS): ceph cli
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Just updated a fairly long-lived (originally firefly) cluster from jewel to luminous 12.2.2.

One of the issues I see is a new health warning:

OSD_ORPHAN 3 osds exist in the crush map but not in the osdmap
osd.2 exists in crush map but not in osdmap
osd.14 exists in crush map but not in osdmap
osd.19 exists in crush map but not in osdmap
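
(As the warning text itself says, the check compares OSD ids that appear as devices in the crush map with the ids present in the osdmap. A rough way to redo that comparison by hand, assuming jq is available and the crush dump JSON has its usual devices array:)

  # ids listed as devices in the crush map
  ceph osd crush dump | jq -r '.devices[].id' | sort > crush_ids
  # ids actually present in the osdmap
  ceph osd ls | sort > osdmap_ids
  # ids only in the first list are the ones OSD_ORPHAN complains about
  comm -23 crush_ids osdmap_ids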

That seemed reasonable enough; these low-numbered OSDs were on long-decommissioned hardware. I thought I had removed them completely, though, and it seems I had:

  # ceph osd crush ls osd.2
    Error ENOENT: node 'osd.2' does not exist
  # ceph osd crush remove osd.2
    device 'osd.2' does not appear in the crush map

So where is it getting this warning from, and if it's erroneous, how can I clear it?
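
For reference, one way to look at the raw crush map device table directly is to pull and decompile the map; whether the stale ids actually show up there, and whether hand-editing the map is the right way to clear this, is only a guess on my part:

  # grab and decompile the current crush map
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # check the devices section for the ids from the warning
  grep -E '^device (2|14|19) ' crushmap.txt
  # if stale device lines are present, delete them from crushmap.txt,
  # then recompile and inject the edited map
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new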

Dump of osd map attached...


Files

osdmap.dump.20171207 (105 KB) - "ceph osd dump" output - Graham Allan, 12/07/2017 10:38 PM
osdmap.20171207 (127 KB) - "ceph osd getmap" - Graham Allan, 12/07/2017 11:23 PM