Bug #61183


mgr/nfs: Applying for first time, NFS-Ganesha Config Added Successfully but doesn't actually reflect the changes

Added by Dhairya Parmar 12 months ago. Updated 11 months ago.

Status: Closed
Priority: Normal
Category: -
Target version: -
% Done: 0%
Source: Development
Tags:
Backport:
Regression: No
Severity:
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS): mgr/nfs
Labels (FS): NFS-cluster
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

While setting the NFS cluster config for the first time using the -i option, the command reports "NFS-Ganesha Config Added Successfully" but has no actual effect on the config. Here is an example:

# create a cluster
sh-4.4$ ceph nfs cluster create nfs-cephfs "rook-ceph-tools-9b7967b5d-8jbxb" 
NFS Cluster Created Successfully

# config for NFS cluster
sh-4.4$ cat config.conf 
EXPORT {
  Export_Id = 123;
  Transports = TCP;
  Path = /;
  Pseudo = /ceph;
  Protocols = 4;
  Access_Type = PERMISSIONS;
  Attr_Expiration_Time = 0;
  Squash = None;
  FSAL {
    Name = CEPH;
    Filesystem = "myfs";
    User_Id = "nfstest04";
    Secret_Access_Key = "AQAajmJkzjYxHBAAYg3IHS/2cyS60rQlGtlRvA==";
  }
}

# create a user
sh-4.4$ ceph auth get-or-create client.nfsuser01 mon 'allow r' osd 'allow rw pool=nfs-ganesha namespace=nfs-cephfs, allow rw tag cephfs data=myfs' mds 'allow rw path=/'
[client.nfsuser01]
    key = AQBFWmNkE0TdERAA40X8yYJbIVaKf+B/AADp1w==

# make changes to the config to use newly created user
sh-4.4$ vi config.conf 
sh-4.4$ cat config.conf 
EXPORT {
  Export_Id = 111;
  Transports = TCP;
  Path = /;
  Pseudo = /ceph;
  Protocols = 4;
  Access_Type = PERMISSIONS;
  Attr_Expiration_Time = 0;
  Squash = None;
  FSAL {
    Name = CEPH;
    Filesystem = "myfs";
    User_Id = "nfsuser01";
    Secret_Access_Key = "AQBFWmNkE0TdERAA40X8yYJbIVaKf+B/AADp1w==";
  }
}

# create export
sh-4.4$ ceph nfs export create cephfs nfs-cephfs /ceph myfs --path=/ 
{
    "bind": "/ceph",
    "fs": "myfs",
    "path": "/",
    "cluster": "nfs-cephfs",
    "mode": "RW" 
}

# says config applied successfully
sh-4.4$ ceph nfs cluster config set nfs-cephfs -i config.conf
NFS-Ganesha Config Added Successfully (Manual Restart of NFS PODS required)
sh-4.4$ exit
exit

# deleting the pod is equivalent to restarting it (per a conversation with the Rook team,
# it has to be done this way because there is no straightforward way to restart a pod)
[dparmar@dparmar-basemachine ~]$ kubectl get po
NAME                                            READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-grsf5                          2/2     Running     0          3d22h
csi-cephfsplugin-provisioner-84cc595b78-86blr   5/5     Running     0          3d22h
csi-rbdplugin-gtglc                             2/2     Running     0          3d22h
csi-rbdplugin-provisioner-6f6b6b8cd6-59vc2      5/5     Running     0          3d22h
rook-ceph-mds-myfs-a-6bd585f54-4qbmj            1/1     Running     0          3d22h
rook-ceph-mds-myfs-b-84fcf69674-6g9fh           1/1     Running     0          3d22h
rook-ceph-mgr-a-64689c8cfb-5qsrm                1/1     Running     0          3d22h
rook-ceph-mon-a-6c7f7cd67-nbm2s                 1/1     Running     0          3d22h
rook-ceph-nfs-my-nfs-a-6f554dbf9c-gccbt         2/2     Running     0          3d22h
rook-ceph-nfs-nfs-cephfs-a-66fd7c5885-cn4ll     2/2     Running     0          2m49s
rook-ceph-operator-897dbfdc8-gmvd4              1/1     Running     0          3d22h
rook-ceph-osd-0-95686f7c8-5c9fb                 1/1     Running     0          3d22h
rook-ceph-osd-prepare-minikube-89gjp            0/1     Completed   0          3d22h
rook-ceph-tools-9b7967b5d-8jbxb                 1/1     Running     0          3d22h
[dparmar@dparmar-basemachine ~]$ kubectl delete pod rook-ceph-tools-9b7967b5d-8jbxb
pod "rook-ceph-tools-9b7967b5d-8jbxb" deleted
[dparmar@dparmar-basemachine ~]$ kubectl get po
NAME                                            READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-grsf5                          2/2     Running     0          3d22h
csi-cephfsplugin-provisioner-84cc595b78-86blr   5/5     Running     0          3d22h
csi-rbdplugin-gtglc                             2/2     Running     0          3d22h
csi-rbdplugin-provisioner-6f6b6b8cd6-59vc2      5/5     Running     0          3d22h
rook-ceph-mds-myfs-a-6bd585f54-4qbmj            1/1     Running     0          3d22h
rook-ceph-mds-myfs-b-84fcf69674-6g9fh           1/1     Running     0          3d22h
rook-ceph-mgr-a-64689c8cfb-5qsrm                1/1     Running     0          3d22h
rook-ceph-mon-a-6c7f7cd67-nbm2s                 1/1     Running     0          3d22h
rook-ceph-nfs-my-nfs-a-6f554dbf9c-gccbt         2/2     Running     0          3d22h
rook-ceph-nfs-nfs-cephfs-a-66fd7c5885-cn4ll     2/2     Running     0          3m34s
rook-ceph-operator-897dbfdc8-gmvd4              1/1     Running     0          3d22h
rook-ceph-osd-0-95686f7c8-5c9fb                 1/1     Running     0          3d22h
rook-ceph-osd-prepare-minikube-89gjp            0/1     Completed   0          3d22h
rook-ceph-tools-9b7967b5d-jqgbs                 1/1     Running     0          34s
[dparmar@dparmar-basemachine ~]$ kubectl delete pod rook-ceph-nfs-my-nfs-a-6f554dbf9c-gccbt
pod "rook-ceph-nfs-my-nfs-a-6f554dbf9c-gccbt" deleted
[dparmar@dparmar-basemachine ~]$ kubectl delete pod rook-ceph-nfs-nfs-cephfs-a-66fd7c5885-cn4ll
pod "rook-ceph-nfs-nfs-cephfs-a-66fd7c5885-cn4ll" deleted
[dparmar@dparmar-basemachine ~]$ kubectl exec -it rook-ceph-tools-9b7967b5d-jqgbs -- sh

# list clusters
sh-4.4$ ceph nfs cluster ls
nfs-cephfs

# no changes reflected even though it succeeded
sh-4.4$ ceph nfs export get nfs-cephfs /ceph
{
  "export_id": 1,
  "path": "/",
  "cluster_id": "nfs-cephfs",
  "pseudo": "/ceph",
  "access_type": "RW",
  "squash": "none",
  "security_label": true,
  "protocols": [
    4
  ],
  "transports": [
    "TCP" 
  ],
  "fsal": {
    "name": "CEPH",
    "user_id": "nfs.nfs-cephfs.1",
    "fs_name": "myfs" 
  },
  "clients": []
}

# same here
sh-4.4$ ceph nfs export ls nfs-cephfs --detailed
[
  {
    "export_id": 1,
    "path": "/",
    "cluster_id": "nfs-cephfs",
    "pseudo": "/ceph",
    "access_type": "RW",
    "squash": "none",
    "security_label": true,
    "protocols": [
      4
    ],
    "transports": [
      "TCP" 
    ],
    "fsal": {
      "name": "CEPH",
      "user_id": "nfs.nfs-cephfs.1",
      "fs_name": "myfs" 
    },
    "clients": []
  }
]
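
One more diagnostic step would help tell whether the user-defined config blob was stored at all. This is not part of the original reproduction; it is a sketch assuming the `ceph nfs cluster config get` subcommand is available in this Ceph release and that the pool/namespace match the auth caps used above.

# print the user-defined config saved by `ceph nfs cluster config set`;
# if this shows the EXPORT block from config.conf, the blob was stored
# even though the `ceph nfs export` interface does not list it
ceph nfs cluster config get nfs-cephfs

# list the RADOS objects backing the cluster config (pool and namespace
# here are assumptions based on this deployment's caps)
rados -p nfs-ganesha -N nfs-cephfs ls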

Related issues (0 open, 2 closed)

Related to CephFS - Bug #59463: mgr/nfs: Setting NFS export config using -i option is not working (Closed) - Dhairya Parmar

Related to CephFS - Bug #61184: mgr/nfs: setting config using external file gets overidden (Closed) - Dhairya Parmar
#1 - Updated by Dhairya Parmar 12 months ago

  • Related to Bug #59463: mgr/nfs: Setting NFS export config using -i option is not working added
#2 - Updated by Dhairya Parmar 12 months ago

  • Related to Bug #61184: mgr/nfs: setting config using external file gets overidden added
#3 - Updated by Milind Changire 11 months ago

  • Assignee set to Dhairya Parmar
#4 - Updated by Dhairya Parmar 11 months ago

  • Status changed from New to Closed

`ceph nfs cluster` and `ceph nfs export` are two different interfaces. Adding a custom export block to the cluster config using `ceph nfs cluster config set <cluster_id> -i <config_file>` does create an export, but that export is not managed by the `ceph nfs export` interface.

For more information, see https://tracker.ceph.com/issues/59463#note-9
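
A minimal sketch of how the two interfaces divide responsibility; the `config get` and `config reset` subcommands are assumed to be present in the deployed Ceph release:

# exports managed by the export interface: created, listed and deleted here
ceph nfs export ls nfs-cephfs --detailed

# raw user-defined blocks: owned by the cluster-config interface; EXPORT
# blocks added this way never show up in the listing above
ceph nfs cluster config get nfs-cephfs
ceph nfs cluster config reset nfs-cephfs    # drops the user-defined config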
