Bug #45962

Updated by Nathan Cutler almost 4 years ago

After running the following commands: 

 <pre> 
     master: ++ ceph osd pool create rbd 
     master: pool 'rbd' created 
     master: ++ ceph osd pool application enable rbd nfs 
     master: enabled application 'nfs' on pool 'rbd' 
     master: ++ ceph orch apply nfs default rbd nfs '--placement=1 master' 
     master: Scheduled nfs.default update... 
 </pre> 
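
For reference, the same deployment can also be written as a cephadm service spec and applied with "ceph orch apply -i". The YAML below is only a sketch; the field names ("pool"/"namespace" under "spec:") are my assumption of the Octopus-era NFS spec and are not taken from the reproducer:

<pre>
# Hypothetical spec file equivalent to the CLI call above (field names assumed).
cat > /tmp/nfs-default.yaml <<'EOF'
service_type: nfs
service_id: default
placement:
  count: 1
  hosts:
    - master
spec:
  pool: rbd
  namespace: nfs
EOF
ceph orch apply -i /tmp/nfs-default.yaml
</pre>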

 I can see the running Ganesha daemon in "cephadm ls", "ceph orch ls", and "ceph orch ps": 

 <pre> 
 master:~ # ceph orch ls --service-type nfs 
 NAME           RUNNING    REFRESHED    AGE    PLACEMENT         IMAGE NAME                                                              IMAGE ID       
 nfs.default        1/1    60s ago      66s    count:1 master    registry.suse.de/devel/storage/7.0/containers/ses/7/ceph/ceph:latest    4c50c7cc0a70 
 </pre> 

 <pre> 
 master:~ # ceph orch ps | grep nfs 
 nfs.default.master                   master    running (86s)    82s ago      86s    3.2           registry.suse.de/devel/storage/7.0/containers/ses/7/ceph/ceph:latest    4c50c7cc0a70    1f3f00aa62b0 
 </pre> 
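
Since the "cephadm ls" output is not pasted above, one way to pull just the NFS entry out of its JSON output would be something like the following (jq on the host is an assumption; any JSON filter works):

<pre>
# Run on the host carrying the daemon; "cephadm ls" prints a JSON list of deployed daemons.
cephadm ls | jq '.[] | select(.name | startswith("nfs."))'
</pre>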

 I can also see there is a systemd service running: 

 <pre> 
 master:~ # systemctl | grep -i nfs 
 ceph-f23dcdee-aa86-11ea-bada-5254008eec9c@nfs.default.master.service                                               loaded active running     Ceph nfs.default.master for f23dcdee-aa86-11ea-bada-5254008eec9c         
 </pre> 
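
The Ganesha log can also be followed through that unit, e.g.:

<pre>
# Tail the daemon's log via its systemd unit (unit name copied from the listing above).
journalctl -u ceph-f23dcdee-aa86-11ea-bada-5254008eec9c@nfs.default.master.service -f
</pre>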

 However, where I can *not* see it is in "ceph status": 

 <pre> 
 master:~ # ceph status 
   cluster: 
     id:       f23dcdee-aa86-11ea-bada-5254008eec9c 
     health: HEALTH_OK 
 
   services: 
     mon: 1 daemons, quorum master (age 51m) 
     mgr: master.ymwbtj(active, since 50m) 
     mds: myfs:1 {0=myfs.master.bulgmq=up:active} 
     osd: 4 osds: 4 up (since 49m), 4 in (since 49m) 
     rgw: 1 daemon active (default.default.master.kotrul) 
 
   task status: 
     scrub status: 
         mds.myfs.master.bulgmq: idle 
 
   data: 
     pools:     8 pools, 201 pgs 
     objects: 221 objects, 7.6 KiB 
     usage:     4.1 GiB used, 28 GiB / 32 GiB avail 
     pgs:       201 active+clean 
 
   io: 
     client:     170 B/s rd, 0 op/s rd, 0 op/s wr 
 </pre> 

 I believe there should be a line like this in the "services:" section: 

 <pre> 
 nfs: 1 daemon active (nfs.default.master) 
 </pre> 

 but there is no such line.
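
As far as I understand, the extra lines in the "services:" section (rgw, tcmu-runner, ...) come from the mgr service map, so the missing line can presumably be confirmed by dumping the service map and checking that no "nfs" entry ever registers:

<pre>
# Dump the mgr service map; rgw shows up here on this cluster, but (presumably) no "nfs" service does.
ceph service dump
# A one-line-per-service summary is also available:
ceph service status
</pre>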
