Bug #46541

open

cephadm: OSD is marked as unmanaged in cephadm deployed cluster

Added by Gregor Krmelj almost 4 years ago. Updated almost 2 years ago.

Status: New
Priority: Low
Assignee: -
Category: cephadm/services
Target version: -
% Done: 0%
Source: -
Tags: -
Backport: -
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

I followed the cephadm instructions (https://docs.ceph.com/docs/master/cephadm/install/#deploy-osds) for deploying OSDs in a new Ceph cluster.

When I run:

ceph orch ls

I see the following entry:

osd.1                         17/0  5m ago     -    <unmanaged>  docker.io/ceph/ceph:v15     54fa7e66fb03  

The 17 in 17/0 is the total number of OSDs.
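
To see what cephadm has actually recorded for this service, the stored spec can be exported as YAML, which should show whether an unmanaged flag is set (assuming --export is available in the running v15 release):

ceph orch ls osd --export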

I do not understand why the OSDs are marked as unmanaged, since I created them according to the instructions using:

ceph orch daemon add osd *<host>*:*<device-path>*
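
On this cluster, that looked something like the following for each device (the device path here is illustrative, not the actual one used):

ceph orch daemon add osd n1:/dev/sdb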

I can issue daemon start/stop/restart commands to the OSDs without any issue, `ceph orch ps` lists all OSDs as running, `ceph -s` reports the cluster as healthy, and `ceph osd tree` shows all OSDs as up.
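
For reference, the checks were along these lines (osd.3 is just an example daemon id):

ceph orch daemon restart osd.3      # start/stop/restart all work
ceph orch ps --daemon-type osd      # all OSD daemons listed as running
ceph -s                             # cluster reports healthy
ceph osd tree                       # all OSDs shown as up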

I'm starting to believe that this is a UI (CLI) bug and that in reality there is no issue.
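
If instead cephadm simply has no drive-group spec for manually added OSDs, one way to test that hypothesis would be to apply an OSD spec covering the existing devices and see whether the listing changes to managed. A sketch (the spec contents, including the service id and host pattern, are my assumptions, not something taken from this cluster):

# osd-spec.yaml
service_type: osd
service_id: default_drive_group
placement:
  host_pattern: '*'       # match all hosts
data_devices:
  all: true               # claim all available data devices

ceph orch apply osd -i osd-spec.yaml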

CLUSTER DETAILS:

I deployed a bare metal cluster with:
  • 3 hardware nodes (n1, n2, n3)
  • 3 monitor daemons (deployed on: n1, n2, n3)
  • 2 manager daemons (deployed on: n1 and n2)

n1 has 5 OSDs, each backed by one brand-new SSD.
n2 and n3 each have 6 OSDs, each backed by one brand-new SSD.
That is 17 SSDs in total.

All hardware is new and of the same make/model.
smartctl doesn't report any errors on any of the SSDs.
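
The SMART health check was along these lines (device path illustrative):

smartctl -H /dev/sda    # SMART overall-health self-assessment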
