Bug #46541
cephadm: OSD is marked as unmanaged in a cephadm-deployed cluster
Description
I followed the cephadm instructions (https://docs.ceph.com/docs/master/cephadm/install/#deploy-osds) for deploying a new Ceph cluster.
When I run:
ceph orch ls
I see the following entry:
osd.1 17/0 5m ago - <unmanaged> docker.io/ceph/ceph:v15 54fa7e66fb03
The 17 in 17/0 corresponds to the total number of OSDs in the cluster.
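If I'm reading the column layout right, the entry lines up with the headers like this (headers reconstructed from memory of the Octopus output, so they may differ slightly between versions):

NAME   RUNNING  REFRESHED  AGE  PLACEMENT    IMAGE NAME               IMAGE ID
osd.1  17/0     5m ago     -    <unmanaged>  docker.io/ceph/ceph:v15  54fa7e66fb03

So 17/0 seems to mean 17 daemons running against 0 expected by any service spec, which would be consistent with the <unmanaged> placement.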
I do not understand why the OSDs are marked as unmanaged, since I created them according to the instructions using:
ceph orch daemon add osd <host>:<device-path>
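If I understand the same docs page correctly, an OSD service only gets a managed spec when it is created via `ceph orch apply`; daemons added individually with `ceph orch daemon add` are not tied to any spec, which might be why they show up as <unmanaged>. For comparison (the apply command is taken from the linked docs page; the host and device values below are placeholders, not my actual paths):

# Creates a managed OSD service spec that consumes all available devices:
ceph orch apply osd --all-available-devices

# What I actually ran, once per device:
ceph orch daemon add osd n1:/dev/sdb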
I can issue daemon start/stop/restart commands to the OSDs without any issue, `ceph orch ps` lists all OSDs as running, `ceph -s` reports the cluster as healthy, and `ceph osd tree` shows all OSDs as up.
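For completeness, these are the checks I ran (`osd.1` here stands in for any of the OSD daemons):

ceph orch daemon restart osd.1   # succeeds without errors
ceph orch ps                     # all OSDs listed as running
ceph -s                          # cluster reports healthy
ceph osd tree                    # all OSDs up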
I'm starting to believe that this is a UI (CLI) bug and that there is no actual issue. Is that correct?
CLUSTER DETAILS:
I deployed a bare metal cluster with:
- 3 hardware nodes (n1, n2, n3)
- 3 monitor daemons (deployed on: n1, n2, n3)
- 2 manager daemons (deployed on: n1 and n2)
- 5 OSDs on n1, each backed by 1 brand new SSD
- 6 OSDs each on n2 and n3, each backed by 1 brand new SSD
- 17 SSDs in total
All hardware is new and of the same make and model.
smartctl doesn't report any errors on any of the SSDs.
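For reference, the health check I ran on each drive looked like this (/dev/sdb is a placeholder for each of my actual devices):

# Quick per-drive health check; every drive reported PASSED:
smartctl -H /dev/sdb
# Full attribute dump, also clean on every drive:
smartctl -a /dev/sdb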