Documentation #45977

Updated by Sebastian Wagner almost 4 years ago

First of all, yes, I know that NFS under ceph orch is still under development, but I couldn't find any information about this issue:

 
 I was testing NFS deployment with the orchestrator. When I was done with my testing, I tried to remove it again. 
 That did not work, or I did it wrong: the daemon itself is not there anymore, but I can still see that the orchestrator is trying to deploy it. 
 I attempted the deletion with "ceph orch daemon rm nfs.cephnfs.node04", and I also tried to set it to unmanaged with "ceph orch apply nfs cephnfs cephfs host04 --unmanaged" (because, unlike in the mon or mgr context, you have to enter the whole command), but the service is still there. 
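
 For reference, a minimal sketch of the removal attempt and of how the leftover state can be inspected afterwards (daemon and host names are the ones from above; "ceph orch ps" and "ceph orch ls" are the usual commands to check, although their output format may differ between releases): 

 <pre> 
 # remove the single daemon instance (this part did work, the container is gone) 
 ceph orch daemon rm nfs.cephnfs.node04 
 
 # try to stop the orchestrator from managing/re-deploying the service 
 ceph orch apply nfs cephnfs cephfs host04 --unmanaged 
 
 # inspect what the orchestrator still knows about 
 ceph orch ps        # running/expected daemons 
 ceph orch ls nfs    # stored nfs service specs 
 </pre> 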

 Here is the debug log: 

 <pre> 
 2020-06-11T13:44:19.582889+0200 mgr.node01.qloqjv [DBG] Applying service nfs.cephnfs spec 
 2020-06-11T13:44:19.583013+0200 mgr.node01.qloqjv [DBG] Provided hosts: [HostPlacementSpec(hostname='node01', network='', name='')] 
 2020-06-11T13:44:19.583080+0200 mgr.node01.qloqjv [DBG] hosts with daemons: set() 
 2020-06-11T13:44:19.583153+0200 mgr.node01.qloqjv [INF] Saving service nfs.cephnfs spec with placement node01 
 2020-06-11T13:44:19.645643+0200 mgr.node01.qloqjv [DBG] Placing nfs.cephnfs.node01 on host node01 
 2020-06-11T13:44:19.645906+0200 mgr.node01.qloqjv [DBG] SpecStore: find spec for nfs.cephnfs returned: [NFSServiceSpec({'placement': PlacementSpec(hosts=[HostPlacementSpec(hostname='node01', network='', name='')]), 'service_type': 'nfs', 'service_id': 'cephnfs', 'unmanaged': False, 'pool': 'cephfs', 'namespace': 'clouds'})] 
 2020-06-11T13:44:19.646095+0200 mgr.node01.qloqjv [INF] Create keyring: client.nfs.cephnfs.node01 
 2020-06-11T13:44:19.647348+0200 mgr.node01.qloqjv [DBG] mon_command: 'auth get-or-create' -> 0 in 0.001s 
 2020-06-11T13:44:19.647458+0200 mgr.node01.qloqjv [INF] Updating keyring caps: client.nfs.cephnfs.node01 
 2020-06-11T13:44:19.712477+0200 mgr.node01.qloqjv [DBG] mon_command: 'auth caps' -> 0 in 0.065s 
 2020-06-11T13:44:19.713129+0200 mgr.node01.qloqjv [WRN] Failed to apply nfs.cephnfs spec NFSServiceSpec({'placement': PlacementSpec(hosts=[HostPlacementSpec(hostname='node01', network='', name='')]), 'service_type': 'nfs', 'service_id': 'cephnfs', 'unmanaged': False, 'pool': 'cephfs', 'namespace': 'clouds'}): [errno 2] RADOS object not found (error opening pool 'b'cephfs'') 
 </pre> 
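
 The last line shows why the apply keeps failing: the manager cannot open the pool 'cephfs' (namespace 'clouds') referenced by the stored spec. As a quick sanity check, this can be compared against the pools that actually exist (plain Ceph commands, nothing orchestrator-specific, shown here only as a sketch): 

 <pre> 
 # list all existing pools; the nfs.cephnfs spec expects a pool literally named 'cephfs' 
 ceph osd pool ls 
 
 # as long as that pool cannot be opened, the orchestrator retries the apply 
 # and logs the same warning on every cycle 
 </pre> 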

 Also, if I export the specs with "ceph orch ls --export > orch.yaml", I can still see my failed configuration there, so it never got deleted. 
 But manually editing the YAML and re-deploying with it does not work either, because the orchestrator does not overwrite the existing config; it only applies the specs on top. 
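
 The export/edit/re-apply round trip I mean looks roughly like this (a sketch; "ceph orch apply -i" is the documented way to feed service specs back in, but as described above it only applies what is in the file, it does not delete specs that were removed from it): 

 <pre> 
 # dump all stored service specs 
 ceph orch ls --export > orch.yaml 
 
 # ... edit orch.yaml, e.g. fix or drop the nfs.cephnfs section ... 
 
 # re-apply the edited specs; specs missing from the file are NOT removed 
 ceph orch apply -i orch.yaml 
 </pre> 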


 As a workaround, is there a way to delete everything regarding NFS in the orchestrator?
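
 One possible approach, untested here, might be to remove the whole service rather than a single daemon, since "ceph orch rm" is meant to delete a service including its stored spec: 

 <pre> 
 # remove the nfs.cephnfs service entirely, including the stored spec, 
 # so the orchestrator stops trying to (re)deploy it (untested in this setup) 
 ceph orch rm nfs.cephnfs 
 </pre> 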
