Bug #49571


cephadm: same OSD on two hosts + daemon_id not unique

Added by Sebastian Wagner about 3 years ago. Updated about 2 years ago.

Status:
Resolved
Priority:
High
Assignee:
-
Category:
cephadm
Target version:
-
% Done:
0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

OSD.1 appears on both node06 and node01. The copy on node01 is not managing a drive and is not working. Any idea on how to delete this? If I issue the command you suggested earlier, would it delete both OSDs?

[21:07:06] <stephen> ceph orch ps --daemon_type osd --daemon_id 1
[21:07:06] <stephen> NAME   HOST    STATUS         REFRESHED  AGE  VERSION    IMAGE NAME                   IMAGE ID      CONTAINER ID  
[21:07:06] <stephen> osd.1  node01  error          7m ago     3w   <unknown>  docker.io/ceph/ceph:v15.2.9  <unknown>     <unknown>     
[21:07:06] <stephen> osd.1  node06  running (23h)  86s ago    2w   15.2.8     docker.io/ceph/ceph:v15      5553b0cb212c  1f8700d14c7f  
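
One possible way to remove only the stale node01 entry (a sketch, not a command from this thread) is cephadm's host-local `rm-daemon`, which acts only on the host where it is run, unlike `ceph orch daemon rm osd.1`, which targets the daemon by its name alone and therefore hits exactly the ambiguity this ticket is about:

```
# Hedged sketch: run ON node01 itself, so only the stale local osd.1
# is removed and the healthy osd.1 on node06 is untouched.
# <fsid> is a placeholder for the cluster fsid (see "ceph fsid").
cephadm rm-daemon --fsid <fsid> --name osd.1 --force
```

Because `cephadm rm-daemon` operates on the local host only, it sidesteps the non-unique daemon_id: `ceph orch ps --daemon_type osd --daemon_id 1` matches both hosts, but the removal above cannot.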

Actions #1

Updated by Sebastian Wagner over 2 years ago

  • Status changed from New to Fix Under Review
  • Pull request ID set to 43095
Actions #2

Updated by Sebastian Wagner over 2 years ago

  • Status changed from Fix Under Review to Pending Backport
Actions #3

Updated by Sebastian Wagner about 2 years ago

  • Status changed from Pending Backport to Resolved