Bug #46862

closed

cephadm: nfs ganesha client mount is read only

Added by Dimitri Savineau over 3 years ago. Updated almost 3 years ago.

Status:
Resolved
Priority:
Normal
Assignee:
Category:
cephadm/nfs
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

When trying to mount a ganesha share via NFS on a client node, the mount command succeeds, but the share is read-only (even though we set the rw mount option).

# ceph --version
ceph version 16.0.0-4201-g2ccd47711b (2ccd47711b73b09069e6964730532fc3cf6e7382) pacific (dev)

Ganesha was deployed using the following commands:

# ceph osd pool create nfs-ganesha
# ceph orch apply nfs foo nfs-ganesha nfs-ns

And the state of the ganesha service/daemon looks fine

# ceph orch ls --service_type nfs 
NAME     RUNNING  REFRESHED  AGE  PLACEMENT  IMAGE NAME                                      IMAGE ID      
nfs.foo      1/1  5m ago     19h  count:1    docker.io/ceph/daemon-base:latest-master-devel  4275043b0890
# ceph orch ps --service_name nfs.foo
NAME                HOST        STATUS         REFRESHED  AGE  VERSION  IMAGE NAME                                      IMAGE ID      CONTAINER ID  
nfs.foo.odjodqkndf  odjodqkndf  running (13m)  6m ago     19h  3.3      docker.io/ceph/daemon-base:latest-master-devel  4275043b0890  59ab18b8dcb9

Using a CentOS 8 node as a client

192.168.100.12 is ganesha IP address
192.168.100.9 is the client IP address

# mount -t nfs -o nfsvers=4,proto=tcp,rw 192.168.100.12:/ /mnt/
# ls -lh /mnt/
total 0
# touch /mnt/foo
touch: cannot touch '/mnt/foo': Read-only file system
# mount|grep nfs4
192.168.100.12:/ on /mnt type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.100.9,local_lock=none,addr=192.168.100.12)

In the ganesha pool we can see the conf-nfs.foo file present in the right namespace, but that file is empty (I don't know whether this is expected or not).

# rados -p nfs-ganesha -N nfs-ns ls
grace
rec-0000000000000004:nfs.foo.odjodqkndf
rec-0000000000000002:nfs.foo.odjodqkndf
conf-nfs.foo

Am I missing something?

Actions #1

Updated by Varsha Rao over 3 years ago

  • Status changed from New to Need More Info

Dimitri Savineau wrote:

When trying to mount a ganesha share via NFS on a client node, the mount command succeeds, but the share is read-only (even though we set the rw mount option).

It seems the export creation was not successful. Please share the ganesha config file.

[...]

In the ganesha pool we can see the conf-nfs.foo file present in the right namespace, but that file is empty (I don't know whether this is expected or not).

[...]

Am I missing something?

The conf-nfs.foo file is expected to be empty, as the orch commands don't create exports; they just deploy ganesha daemons. The 'ceph nfs' interface does both cluster deployment and CephFS export creation. See https://docs.ceph.com/docs/master/cephfs/fs-nfs-exports
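For reference, creating an export through the 'ceph nfs' interface would look roughly like the sketch below (a minimal sketch based on the fs-nfs-exports doc linked above; the filesystem name 'myfs' and pseudo path '/cephfs' are illustrative, not taken from this cluster):

```
# Create a CephFS export on the existing 'foo' NFS cluster.
# 'myfs' is an assumed CephFS filesystem name; '/cephfs' is the
# pseudo path clients will mount.
ceph nfs export create cephfs myfs foo /cephfs

# List the exports for the cluster to confirm an EXPORT block
# was written into conf-nfs.foo
ceph nfs export ls foo
```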

Actions #2

Updated by Dimitri Savineau over 3 years ago

It seems the export creation was not successful. Please share the ganesha config file.

The configuration is basically the same as the template from [1]

# This file is generated by cephadm.
NFS_CORE_PARAM {
        Enable_NLM = false;
        Enable_RQUOTA = false;
        Protocols = 4;
}

MDCACHE {
        Dir_Chunk = 0;
}

EXPORT_DEFAULTS {
        Attr_Expiration_Time = 0;
}

NFSv4 {
        Delegations = false;
        RecoveryBackend = 'rados_cluster';
        Minor_Versions = 1, 2;
}

RADOS_KV {
        UserId = "nfs.foo.odjodqkndf";
        nodeid = "nfs.foo.odjodqkndf";
        pool = "nfs-ganesha";
        namespace = "nfs-ns";
}

RADOS_URLS {
        UserId = "nfs.foo.odjodqkndf";
        watch_url = "rados://nfs-ganesha/nfs-ns/conf-nfs.foo";
}

%url    rados://nfs-ganesha/nfs-ns/conf-nfs.foo
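Note that once an export is created through the 'ceph nfs' interface, an EXPORT block along the following lines would be expected in the conf-nfs.foo object; the exact fields shown here are an assumption based on ganesha's CephFS FSAL, with an illustrative user id, filesystem name, and pseudo path:

```
EXPORT {
        Export_ID = 1;
        Path = "/";
        Pseudo = "/cephfs";
        Access_Type = RW;
        Protocols = 4;
        Transports = TCP;
        FSAL {
                Name = CEPH;
                User_Id = "nfs.foo.1";
                Filesystem = "myfs";
        }
}
```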

'ceph nfs' interface does both cluster deployment and cephfs export creation. See https://docs.ceph.com/docs/master/cephfs/fs-nfs-exports

Does that mean the cephadm documentation is wrong about ganesha deployment? [2]

Also, the documentation you mentioned refers to CephFS, which means it requires MDS daemons to be deployed. Does that mean ganesha support via cephadm is missing this part? There are no MDS daemons deployed when the ganesha service is added to the cluster.

[1] https://github.com/ceph/ceph/blob/master/src/pybind/mgr/cephadm/templates/services/nfs/ganesha.conf.j2
[2] https://docs.ceph.com/docs/master/cephadm/install/#deploying-nfs-ganesha

Actions #3

Updated by Varsha Rao over 3 years ago

The configuration is basically the same as the template from [1]

The export block is missing, which is expected with a cephadm ganesha deployment. See my comments below.

'ceph nfs' interface does both cluster deployment and cephfs export creation. See https://docs.ceph.com/docs/master/cephfs/fs-nfs-exports

Does that mean the cephadm documentation is wrong about ganesha deployment? [2]

Here, deployment does not mean it will create the export too; it just deploys the ganesha daemon with a minimal configuration. I think we need to clarify this in the documentation.

Also, the documentation you mentioned refers to CephFS, which means it requires MDS daemons to be deployed. Does that mean ganesha support via cephadm is missing this part? There are no MDS daemons deployed when the ganesha service is added to the cluster.

No, it is not missing: we expect cephadm to just deploy ganesha daemons with a minimal configuration. The 'ceph nfs export' interface is intended to manage RGW and CephFS exports. Currently only CephFS exports are supported, and MDS daemons need to be deployed separately.
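Putting that together, the end-to-end workflow would presumably look something like this (a sketch only; the filesystem name, pseudo path, and mount options are illustrative and not taken from this cluster):

```
# Create a CephFS filesystem; cephadm schedules the MDS daemons for it
ceph fs volume create myfs

# Create the export on the existing 'foo' NFS cluster
ceph nfs export create cephfs myfs foo /cephfs

# On the client, mount the export's pseudo path;
# the share should now be writable
mount -t nfs -o nfsvers=4,proto=tcp,rw 192.168.100.12:/cephfs /mnt
touch /mnt/foo
```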

Actions #4

Updated by Sebastian Wagner over 3 years ago

  • Related to Bug #47009: TestNFS.test_cluster_set_reset_user_config: command failed with status 32: 'sudo mount -t nfs -o port=2049 172.21.15.36:/ceph /mnt' added
Actions #5

Updated by Sebastian Wagner over 3 years ago

  • Related to deleted (Bug #47009: TestNFS.test_cluster_set_reset_user_config: command failed with status 32: 'sudo mount -t nfs -o port=2049 172.21.15.36:/ceph /mnt')
Actions #6

Updated by Juan Miguel Olmo Martínez about 3 years ago

  • Assignee set to Varsha Rao
Actions #7

Updated by Sebastian Wagner almost 3 years ago

  • Category set to cephadm/nfs
Actions #8

Updated by Varsha Rao almost 3 years ago

  • Status changed from Need More Info to Resolved

The latest cephadm documentation clarifies export management: https://docs.ceph.com/en/latest/cephadm/nfs/#deploy-cephadm-nfs-ganesha
