Bug #47495 (closed): rook: 'ceph orch device ls' does not list devices

Added by Varsha Rao over 3 years ago. Updated over 2 years ago.

Status: Resolved
Priority: Normal
Assignee: -
Category: mgr/rook
Target version: -
% Done: 0%
Source: Community (dev)
Tags: -
Backport: -
Regression: No
Severity: 3 - minor
Reviewed: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

[root@rook-ceph-tools-78cdfd976c-xx8jc /]# ceph status
  cluster:
    id:     5bc3f612-6d40-4675-8750-cb882d84ed22
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum a (age 12m)
    mgr: a(active, since 9m)
    osd: 1 osds: 1 up (since 10m), 1 in (since 10m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   1.0 GiB used, 24 GiB / 25 GiB avail
    pgs:     1 active+clean

[root@rook-ceph-tools-78cdfd976c-xx8jc /]# rook version
rook: v1.4.0-alpha.0.324.g7e664437-dirty
go: go1.15.2
[root@rook-ceph-tools-78cdfd976c-xx8jc /]# ceph version
ceph version 15.2.4 (7447c15c6ff58d7fce91843b705a268a1917325c) octopus (stable)

[root@rook-ceph-tools-78cdfd976c-xx8jc /]# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME          STATUS  REWEIGHT  PRI-AFF
-1         0.02460  root default                                
-3         0.02460      host minikube                           
 0    hdd  0.02460          osd.0          up   1.00000  1.00000

[root@rook-ceph-tools-78cdfd976c-xx8jc /]# ceph orch device ls

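For context when reproducing this, a minimal way to check whether the devices are visible outside the mgr/rook orchestrator module is sketched below. The `ceph orch status` command is standard; the rook-discover configmap naming and the use of the OSD deployment for `lsblk` are assumptions about a default Rook install, not part of the original report.

# Confirm the rook backend is the active orchestrator (expected to report the rook backend as available)
$ ceph orch status

# rook-discover is expected to publish per-node device inventory as configmaps (assumed "local-device-<node>" naming)
$ kubectl -n rook-ceph get configmaps | grep local-device

# Raw block-device view from inside the OSD pod (illustrative check; assumes lsblk is present in the container)
$ kubectl -n rook-ceph exec deploy/rook-ceph-osd-0 -- lsblk
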
Deployed daemons

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                            READY   STATUS      RESTARTS   AGE
kube-system   coredns-f9fd979d6-vwnnp                         1/1     Running     0          21m
kube-system   etcd-minikube                                   1/1     Running     0          21m
kube-system   kube-apiserver-minikube                         1/1     Running     0          21m
kube-system   kube-controller-manager-minikube                1/1     Running     0          21m
kube-system   kube-proxy-bl8hn                                1/1     Running     0          21m
kube-system   kube-scheduler-minikube                         1/1     Running     0          21m
kube-system   storage-provisioner                             1/1     Running     1          21m
rook-ceph     csi-cephfsplugin-9kqv5                          3/3     Running     0          17m
rook-ceph     csi-cephfsplugin-provisioner-7468b6bf56-4t2vt   0/6     Pending     0          17m
rook-ceph     csi-cephfsplugin-provisioner-7468b6bf56-4xmhw   6/6     Running     0          17m
rook-ceph     csi-rbdplugin-pbhmp                             3/3     Running     0          17m
rook-ceph     csi-rbdplugin-provisioner-77459cc496-7rnrr      0/6     Pending     0          17m
rook-ceph     csi-rbdplugin-provisioner-77459cc496-zn96b      6/6     Running     0          17m
rook-ceph     rook-ceph-mgr-a-67c7cb5fdd-2667x                1/1     Running     0          16m
rook-ceph     rook-ceph-mon-a-5c44859f85-cm2zm                1/1     Running     0          17m
rook-ceph     rook-ceph-operator-86756d44-ktths               1/1     Running     0          19m
rook-ceph     rook-ceph-osd-0-674bbf8d6c-qxbbf                1/1     Running     0          15m
rook-ceph     rook-ceph-osd-prepare-minikube-8x5d4            0/1     Completed   0          16m
rook-ceph     rook-ceph-tools-78cdfd976c-xx8jc                1/1     Running     0          18m
rook-ceph     rook-discover-mlrzs                             1/1     Running     0          19m

No exception or error is reported in the mgr log.
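
For reference, a sketch of an equivalent check of the active mgr log is below; the deployment name matches the pod listing above, and the grep pattern is only illustrative, not from the report.

# Dump the active mgr log and filter for obvious failures (pattern is an assumption)
$ kubectl -n rook-ceph logs deploy/rook-ceph-mgr-a | grep -iE 'error|traceback'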

#1 - Updated by Sage Weil over 2 years ago

  • Status changed from New to Resolved
  • Pull request ID set to 42318