Bug #47387

CephFS - Feature #47587: pybind/mgr/nfs: add Rook support

rook: 'ceph orch ps' does not list daemons correctly

Added by Varsha Rao 4 months ago. Updated 3 months ago.

Status:
Resolved
Priority:
Normal
Assignee:
Category:
mgr/rook
Target version:
% Done:

0%

Source:
Community (dev)
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
ceph-qa-suite:
Pull request ID:
Crash signature:

Description

The daemons are deployed successfully:

$ kubectl -n rook-ceph get pod
NAME                                            READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-bc88n                          3/3     Running     0          16m
csi-cephfsplugin-provisioner-7468b6bf56-j5mr7   0/6     Pending     0          16m
csi-cephfsplugin-provisioner-7468b6bf56-tl7cf   6/6     Running     0          16m
csi-rbdplugin-dmjmq                             3/3     Running     0          16m
csi-rbdplugin-provisioner-77459cc496-lcvnw      0/6     Pending     0          16m
csi-rbdplugin-provisioner-77459cc496-qs76q      6/6     Running     0          16m
rook-ceph-mds-myfs-a-5db75d4994-xskq2           1/1     Running     0          10m
rook-ceph-mds-myfs-b-b466cbbd4-zg5k6            1/1     Running     0          10m
rook-ceph-mgr-a-57fb6fd9d-srdwm                 1/1     Running     0          15m
rook-ceph-mon-a-68bd64c695-4q4hr                1/1     Running     0          15m
rook-ceph-nfs-my-nfs-a-5fc554b554-2pfrz         2/2     Running     0          5m31s
rook-ceph-nfs-my-nfs-b-7969d949dc-2nrdm         2/2     Running     0          5m30s
rook-ceph-nfs-my-nfs-c-5cb59d6d84-nksm4         2/2     Running     0          5m25s
rook-ceph-operator-86756d44-tqrd4               1/1     Running     0          28m
rook-ceph-osd-0-85866ff5b8-gw8xt                1/1     Running     0          14m
rook-ceph-osd-prepare-minikube-6gmph            0/1     Completed   0          15m
rook-ceph-tools-78cdfd976c-l7d7l                1/1     Running     0          15m
rook-discover-4hcgf                             1/1     Running     0          27m

However, the deployed daemons are not listed by `ceph orch ps`:

[root@rook-ceph-tools-78cdfd976c-l7d7l /]# ceph -s
  cluster:
    id:     992aa918-7be7-49df-8c3c-e6c8b7d023ba
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum a (age 14m)
    mgr: a(active, since 10m)
    mds: myfs:1 {0=myfs-a=up:active} 1 up:standby-replay
    osd: 1 osds: 1 up (since 13m), 1 in (since 13m)

  task status:
    scrub status:
        mds.myfs-a: idle
        mds.myfs-b: idle

  data:
    pools:   3 pools, 65 pgs
    objects: 29 objects, 2.2 KiB
    usage:   1.0 GiB used, 24 GiB / 25 GiB avail
    pgs:     65 active+clean

  io:
    client:   1.1 KiB/s rd, 1 op/s rd, 0 op/s wr

[root@rook-ceph-tools-78cdfd976c-l7d7l /]# ceph orch ps
No daemons reported

[root@rook-ceph-tools-78cdfd976c-l7d7l /]# rook version
rook: v1.4.0-alpha.0.304.geee2151
go: go1.13.8
[root@rook-ceph-tools-78cdfd976c-l7d7l /]# ceph version
ceph version 15.2.4 (7447c15c6ff58d7fce91843b705a268a1917325c) octopus (stable)
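For context, the Rook orchestrator backend populates `ceph orch ps` from the mgr/rook module, which lists pods in the cluster namespace and maps Rook's pod labels back to a Ceph daemon type and id; any pod whose labels are not recognized is skipped, which is one plausible way to end up with an empty listing while `kubectl get pod` shows everything running. The following is a minimal sketch of that label-to-daemon mapping, not the actual module code: the helper name `daemon_from_labels` and the label keys are assumptions for illustration (the real logic lives in `src/pybind/mgr/rook`).

```python
# Illustrative sketch only: map Rook pod labels to the (daemon_type, daemon_id)
# pairs that 'ceph orch ps' reports. The helper name and label keys here are
# assumptions; the real implementation is the mgr/rook module.

# Daemon types the orchestrator reports; other rook-ceph-* pods
# (operator, toolbox, discover, CSI plugins) are not Ceph daemons.
CEPH_DAEMON_TYPES = {"mon", "mgr", "osd", "mds", "rgw", "nfs", "rbd-mirror"}

def daemon_from_labels(labels):
    """Return (daemon_type, daemon_id) for a Ceph daemon pod, else None."""
    app = labels.get("app", "")
    if not app.startswith("rook-ceph-"):
        return None                       # e.g. csi-rbdplugin, rook-discover
    daemon_type = app[len("rook-ceph-"):]
    if daemon_type not in CEPH_DAEMON_TYPES:
        return None                       # e.g. rook-ceph-operator, rook-ceph-tools
    daemon_id = labels.get("ceph_daemon_id")  # assumed id label key
    if daemon_id is None:
        return None                       # unlabeled pod: silently dropped
    return (daemon_type, daemon_id)

# Labels roughly matching the pods above (values assumed for illustration):
examples = [
    {"app": "rook-ceph-mgr", "ceph_daemon_id": "a"},
    {"app": "rook-ceph-nfs", "ceph_daemon_id": "my-nfs-a"},
    {"app": "rook-ceph-operator"},
    {"app": "csi-rbdplugin"},
]
for labels in examples:
    print(labels.get("app"), "->", daemon_from_labels(labels))
```

If every pod falls through to one of the `None` branches (say, because label names changed between Rook releases, or the module filters on a selector the pods no longer match), the module has nothing to report and `ceph orch ps` prints `No daemons reported` even though the pods are healthy, which would match the symptom seen here.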

Related issues

Related to Orchestrator - Bug #47337: rook: 'ceph orch ls' fails (Closed)

History

#1 Updated by Sebastian Wagner 4 months ago

But does `ceph orch ls` or `ceph orch status` work?

#2 Updated by Varsha Rao 4 months ago

Sebastian Wagner wrote:

But does `ceph orch ls` or `ceph orch status` work?

'ceph orch ls' doesn't work: https://tracker.ceph.com/issues/47337
'ceph orch status' works.

#3 Updated by Sebastian Wagner 4 months ago

  • Related to Bug #47337: rook: 'ceph orch ls' fails added

#4 Updated by Varsha Rao 4 months ago

  • Related to Feature #47490: Integration of dashboard with volume/nfs module added

#5 Updated by Kiefer Chang 4 months ago

  • Status changed from New to Fix Under Review
  • Assignee set to Kiefer Chang
  • Pull request ID set to 37206

#6 Updated by Patrick Donnelly 4 months ago

  • Parent task set to #47587

#7 Updated by Patrick Donnelly 4 months ago

  • Related to deleted (Feature #47490: Integration of dashboard with volume/nfs module)

#8 Updated by Kiefer Chang 3 months ago

  • Status changed from Fix Under Review to Resolved

The backport for this issue was done in https://github.com/ceph/ceph/pull/37436
