Bug #49571
cephadm: same OSD on two hosts + daemon_id not unique
Status: Resolved
Priority: High
Assignee: -
Category: cephadm
Target version: -
% Done: 0%
Source: -
Tags: -
Backport: -
Regression: No
Severity: 3 - minor
Reviewed: -
Description
OSD.1 appears on both node06 and node01. The instance on node01 is not managing a drive and is not working. Any idea how to delete this? If I issue the command you suggested earlier, would it delete both OSDs?
[21:07:06] <stephen> ceph orch ps --daemon_type osd --daemon_id 1
[21:07:06] <stephen> NAME   HOST    STATUS         REFRESHED  AGE  VERSION    IMAGE NAME                   IMAGE ID      CONTAINER ID
[21:07:06] <stephen> osd.1  node01  error          7m ago     3w   <unknown>  docker.io/ceph/ceph:v15.2.9  <unknown>     <unknown>
[21:07:06] <stephen> osd.1  node06  running (23h)  86s ago    2w   15.2.8     docker.io/ceph/ceph:v15      5553b0cb212c  1f8700d14c7f
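A possible cleanup path, sketched here as an assumption rather than a confirmed fix for this ticket: because the daemon name osd.1 is duplicated across hosts, an orchestrator-level `ceph orch daemon rm osd.1` cannot unambiguously target just the stale node01 entry. Removing the leftover daemon directly on node01 with cephadm's rm-daemon subcommand should leave the healthy OSD on node06 untouched; <fsid> is a placeholder for the cluster fsid (printed by `ceph fsid`).

# Run on node01 only: list the daemons cephadm tracks locally and
# confirm the stale osd.1 entry is present.
cephadm ls

# Remove only the local (broken) osd.1 instance. --force skips the
# safety prompt for a daemon that is already not running.
cephadm rm-daemon --name osd.1 --fsid <fsid> --force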
Updated by Sebastian Wagner over 2 years ago
- Status changed from New to Fix Under Review
- Pull request ID set to 43095
Updated by Sebastian Wagner over 2 years ago
- Status changed from Fix Under Review to Pending Backport
Updated by Sebastian Wagner about 2 years ago
- Status changed from Pending Backport to Resolved