Bug #61184
mgr/nfs: setting config using external file gets overridden
Status:
Closed
% Done:
0%
Source:
Development
Tags:
Backport:
Regression:
No
Severity:
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
mgr/nfs
Labels (FS):
NFS-cluster
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
Followed similar steps as mentioned in https://tracker.ceph.com/issues/59463
# create cluster
sh-4.4$ ceph nfs cluster create nfs-cephfs
NFS Cluster Created Successfully
# create user
sh-4.4$ ceph auth get-or-create client.nfsuser02 mon 'allow r' osd 'allow rw pool=nfs-ganesha namespace=nfs-cephfs, allow rw tag cephfs data=myfs' mds 'allow rw path=/'
[client.nfsuser02]
key = AQDbYWNkZag4AxAAFPiksL4w1EIkdpCU9mwphw==
# modify config with new user
sh-4.4$ vi config.conf
sh-4.4$ cat config.conf
EXPORT {
Export_Id = 110;
Transports = TCP;
Path = /;
Pseudo = /ceph/;
Protocols = 4;
Access_Type = RO;
Attr_Expiration_Time = 0;
Squash = None;
FSAL {
Name = CEPH;
Filesystem = "myfs";
User_Id = "nfsuser02";
Secret_Access_Key = "AQDbYWNkZag4AxAAFPiksL4w1EIkdpCU9mwphw==";
}
}
# create export
sh-4.4$ ceph nfs export create cephfs nfs-cephfs /ceph01 myfs --path=/
{
"bind": "/ceph01",
"fs": "myfs",
"path": "/",
"cluster": "nfs-cephfs",
"mode": "RW"
}
# apply the config to the cluster
sh-4.4$ ceph nfs cluster config set nfs-cephfs -i config.conf
NFS-Ganesha Config Added Successfully (Manual Restart of NFS PODS required)
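(For context, a sketch of where this state lives, which is an assumption based on how mgr/nfs typically stores its configuration and not shown in this log: the user-supplied config and the managed exports are kept as separate RADOS objects in the cluster's namespace. The pool name varies by deployment; the caps above suggest `nfs-ganesha` here, while recent Ceph releases use `.nfs`. Object names below are illustrative.)

```
# Hypothetical RADOS object layout (pool "nfs-ganesha", namespace "nfs-cephfs"):
conf-nfs.nfs-cephfs        # common Ganesha config, %url-includes the objects below
export-1                   # export created via `ceph nfs export create`
userconfig-nfs.nfs-cephfs  # raw config block added by `cluster config set -i`
```

This separation is why the two interfaces can disagree: `cluster config set` writes the user config object verbatim, without touching the managed export objects.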
# the cluster config reflects the change, while the export does not
sh-4.4$ ceph nfs cluster config get nfs-cephfs
EXPORT {
Export_Id = 110;
Transports = TCP;
Path = /;
Pseudo = /ceph/;
Protocols = 4;
Access_Type = RO;
Attr_Expiration_Time = 0;
Squash = None;
FSAL {
Name = CEPH;
Filesystem = "myfs";
User_Id = "nfsuser02";
Secret_Access_Key = "AQDbYWNkZag4AxAAFPiksL4w1EIkdpCU9mwphw==";
}
}
sh-4.4$ ceph nfs export ls nfs-cephfs --detailed
[
{
"export_id": 1,
"path": "/",
"cluster_id": "nfs-cephfs",
"pseudo": "/ceph01",
"access_type": "RW",
"squash": "none",
"security_label": true,
"protocols": [
4
],
"transports": [
"TCP"
],
"fsal": {
"name": "CEPH",
"user_id": "nfs.nfs-cephfs.1",
"fs_name": "myfs"
},
"clients": []
}
]
# restarting pods to see if the changes take effect
sh-4.4$ exit
exit
[dparmar@dparmar-basemachine ~]$ kubectl get po
NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-grsf5 2/2 Running 0 3d23h
csi-cephfsplugin-provisioner-84cc595b78-86blr 5/5 Running 0 3d23h
csi-rbdplugin-gtglc 2/2 Running 0 3d23h
csi-rbdplugin-provisioner-6f6b6b8cd6-59vc2 5/5 Running 0 3d23h
rook-ceph-mds-myfs-a-6bd585f54-4qbmj 1/1 Running 0 3d23h
rook-ceph-mds-myfs-b-84fcf69674-6g9fh 1/1 Running 0 3d23h
rook-ceph-mgr-a-64689c8cfb-5qsrm 1/1 Running 0 3d23h
rook-ceph-mon-a-6c7f7cd67-nbm2s 1/1 Running 0 3d23h
rook-ceph-nfs-my-nfs-a-6f554dbf9c-k4k46 2/2 Running 0 33m
rook-ceph-nfs-nfs-cephfs-a-66fd7c5885-r45vs 2/2 Running 0 4m34s
rook-ceph-operator-897dbfdc8-gmvd4 1/1 Running 0 3d23h
rook-ceph-osd-0-95686f7c8-5c9fb 1/1 Running 0 3d23h
rook-ceph-osd-prepare-minikube-89gjp 0/1 Completed 0 3d23h
rook-ceph-tools-9b7967b5d-jqgbs 1/1 Running 0 34m
[dparmar@dparmar-basemachine ~]$ kubectl delete pod rook-ceph-nfs-my-nfs-a-6f554dbf9c-k4k46
pod "rook-ceph-nfs-my-nfs-a-6f554dbf9c-k4k46" deleted
[dparmar@dparmar-basemachine ~]$ kubectl delete pod rook-ceph-nfs-nfs-cephfs-a-66fd7c5885-r45vs
pod "rook-ceph-nfs-nfs-cephfs-a-66fd7c5885-r45vs" deleted
[dparmar@dparmar-basemachine ~]$ kubectl delete pod rook-ceph-tools-9b7967b5d-jqgbs
pod "rook-ceph-tools-9b7967b5d-jqgbs" deleted
[dparmar@dparmar-basemachine ~]$ kubectl get po
NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-grsf5 2/2 Running 0 3d23h
csi-cephfsplugin-provisioner-84cc595b78-86blr 5/5 Running 0 3d23h
csi-rbdplugin-gtglc 2/2 Running 0 3d23h
csi-rbdplugin-provisioner-6f6b6b8cd6-59vc2 5/5 Running 0 3d23h
rook-ceph-mds-myfs-a-6bd585f54-4qbmj 1/1 Running 0 3d23h
rook-ceph-mds-myfs-b-84fcf69674-6g9fh 1/1 Running 0 3d23h
rook-ceph-mgr-a-64689c8cfb-5qsrm 1/1 Running 0 3d23h
rook-ceph-mon-a-6c7f7cd67-nbm2s 1/1 Running 0 3d23h
rook-ceph-nfs-my-nfs-a-6f554dbf9c-6shbc 2/2 Running 0 3m2s
rook-ceph-nfs-nfs-cephfs-a-66fd7c5885-fmk87 2/2 Running 0 2m54s
rook-ceph-operator-897dbfdc8-gmvd4 1/1 Running 0 3d23h
rook-ceph-osd-0-95686f7c8-5c9fb 1/1 Running 0 3d23h
rook-ceph-osd-prepare-minikube-89gjp 0/1 Completed 0 3d23h
rook-ceph-tools-9b7967b5d-5pdl6 1/1 Running 0 2m42s
[dparmar@dparmar-basemachine ~]$ kubectl exec -it rook-ceph-tools-9b7967b5d-5pdl6 -- sh
# same as before the restart, i.e. the exports haven't picked up the changes from the config
sh-4.4$ ceph nfs export ls nfs-cephfs --detailed
[
{
"export_id": 1,
"path": "/",
"cluster_id": "nfs-cephfs",
"pseudo": "/ceph01",
"access_type": "RW",
"squash": "none",
"security_label": true,
"protocols": [
4
],
"transports": [
"TCP"
],
"fsal": {
"name": "CEPH",
"user_id": "nfs.nfs-cephfs.1",
"fs_name": "myfs"
},
"clients": []
}
]
sh-4.4$ ceph nfs export get nfs-cephfs /ceph01
{
"export_id": 1,
"path": "/",
"cluster_id": "nfs-cephfs",
"pseudo": "/ceph01",
"access_type": "RW",
"squash": "none",
"security_label": true,
"protocols": [
4
],
"transports": [
"TCP"
],
"fsal": {
"name": "CEPH",
"user_id": "nfs.nfs-cephfs.1",
"fs_name": "myfs"
},
"clients": []
}
# while the cluster config does show the new values
sh-4.4$ ceph nfs cluster config get nfs-cephfs
EXPORT {
Export_Id = 110;
Transports = TCP;
Path = /;
Pseudo = /ceph/;
Protocols = 4;
Access_Type = RO;
Attr_Expiration_Time = 0;
Squash = None;
FSAL {
Name = CEPH;
Filesystem = "myfs";
User_Id = "nfsuser02";
Secret_Access_Key = "AQDbYWNkZag4AxAAFPiksL4w1EIkdpCU9mwphw==";
}
}
Updated by Dhairya Parmar 12 months ago
- Related to Bug #61183: mgr/nfs: Applying for first time, NFS-Ganesha Config Added Successfully but doesn't actually reflect the changes added
Updated by Dhairya Parmar 12 months ago
- Related to Bug #59463: mgr/nfs: Setting NFS export config using -i option is not working added
Updated by Dhairya Parmar 11 months ago
- Status changed from New to Closed
Identical to https://tracker.ceph.com/issues/61183, and thus the same rationale: `ceph nfs cluster` and `ceph nfs export` are two different interfaces. Adding a custom export block to the cluster config via `ceph nfs cluster config set <cluster_id> -i <config_file>` does create an export, but that export is not managed by the `ceph nfs export` interface.
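For reference, the managed way to change an existing export is through the `ceph nfs export` interface itself, e.g. `ceph nfs export apply` with a JSON spec. A sketch: the field values below mirror the export shown above with `access_type` changed to `RO`, and the file name `export.json` is illustrative.

```json
{
    "export_id": 1,
    "path": "/",
    "cluster_id": "nfs-cephfs",
    "pseudo": "/ceph01",
    "access_type": "RO",
    "squash": "none",
    "security_label": true,
    "protocols": [4],
    "transports": ["TCP"],
    "fsal": {
        "name": "CEPH",
        "user_id": "nfs.nfs-cephfs.1",
        "fs_name": "myfs"
    },
    "clients": []
}
```

Applied with `ceph nfs export apply nfs-cephfs -i export.json`, this updates the managed export in place instead of adding an unmanaged export block to the cluster config.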