Bug #54470


Can't mount NFS exports with IP access restrictions when the Ingress daemon is deployed in front of the NFS service

Added by Francesco Pantano about 2 years ago. Updated almost 2 years ago.

Status: New
Priority: Normal
Assignee: -
Category: cephadm/nfs
Target version: -
% Done: 0%
Source: Development
Tags:
Backport:
Regression: No
Severity: 1 - critical
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Context

Manila's new CephFS NFS driver is supposed to rely on the commands exposed by the ceph nfs orchestrator module to interact with the cluster and to add/remove/update share access from the driver.
When an ingress daemon is deployed on top of several ceph-nfs backend instances (3 in this case), we hit a failure when mounting the created export through the frontend VIP.
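
For context, a similar ingress-fronted NFS cluster can also be deployed in a single step; a minimal sketch, assuming a pacific release whose "ceph nfs cluster create" command supports the --ingress and --virtual_ip options:

[ceph: root@oc0-controller-0 /]# ceph nfs cluster create nfs "oc0-controller-0,oc0-controller-1,oc0-controller-2" --ingress --virtual_ip 172.16.11.159/24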

Ceph NFS backends

---
networks:
- 172.16.11.0/24
placement:
  hosts:
  - oc0-controller-0
  - oc0-controller-1
  - oc0-controller-2
service_id: nfs
service_name: default
service_type: nfs

Ingress daemon

---
placement:
  hosts:
  - oc0-controller-0
  - oc0-controller-1
  - oc0-controller-2
service_id: ingress
service_name: ingress.ingress
service_type: ingress
spec:
  backend_service: nfs.nfs
  frontend_port: 20490
  monitor_port: 8999
  virtual_interface_networks:
  - 172.16.11.0/24
  virtual_ip: 172.16.11.159
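
For reference, specs like the two above are applied through the orchestrator; a minimal sketch, assuming they are saved as nfs.yaml and ingress.yaml:

[ceph: root@oc0-controller-0 /]# ceph orch apply -i nfs.yaml
[ceph: root@oc0-controller-0 /]# ceph orch apply -i ingress.yaml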

[ceph: root@oc0-controller-0 /]# ceph nfs cluster ls
nfs

[ceph: root@oc0-controller-0 /]# ceph nfs cluster info nfs
{
  "nfs": {
    "virtual_ip": "172.16.11.159",
    "backend": [
      {
        "hostname": "oc0-controller-0",
        "ip": "172.16.11.29",
        "port": 2049
      },
      {
        "hostname": "oc0-controller-1",
        "ip": "172.16.11.35",
        "port": 2049
      },
      {
        "hostname": "oc0-controller-2",
        "ip": "172.16.11.180",
        "port": 2049
      }
    ],
    "port": 20490,
    "monitor_port": 8999
  }
}

[ceph: root@oc0-controller-0 /]# ceph fs volume ls
[
  {
    "name": "cephfs"
  }
]

[ceph: root@oc0-controller-0 /]# ceph fs status cephfs
cephfs - 0 clients
======
RANK  STATE             MDS                      ACTIVITY      DNS  INOS  DIRS  CAPS
 0    active  mds.oc0-controller-0.jvxwyy  Reqs:    0 /s      10    13    12     0
      POOL         TYPE      USED  AVAIL
manila_metadata  metadata  64.0k   189G
  manila_data      data        0   126G
       STANDBY MDS
mds.oc0-controller-2.raphea
mds.oc0-controller-1.gyfjey
MDS version: ceph version 16.2.7-464-g7bda236f (7bda236ffb358262a6f354a1dde3c16ed7f586b2) pacific (stable)
[ceph: root@oc0-controller-0 /]# ceph fs subvolume create cephfs cephfs-subvol

[ceph: root@oc0-controller-0 /]# ceph fs subvolume getpath cephfs cephfs-subvol
/volumes/_nogroup/cephfs-subvol/c83b1844-0638-48f4-aff2-1c8f762eacaf

Create an export

[ceph: root@oc0-controller-0 /]# ceph nfs export create cephfs nfs `ceph fs subvolume getpath cephfs cephfs-subvol` cephfs `ceph fs subvolume getpath cephfs cephfs-subvol` --client_addr 192.168.24.7
{
  "bind": "/volumes/_nogroup/cephfs-subvol/c83b1844-0638-48f4-aff2-1c8f762eacaf",
  "fs": "cephfs",
  "path": "/volumes/_nogroup/cephfs-subvol/c83b1844-0638-48f4-aff2-1c8f762eacaf",
  "cluster": "nfs",
  "mode": "none"
}

Get the export

[ceph: root@oc0-controller-0 /]# ceph nfs export ls nfs
[
"/volumes/_nogroup/cephfs-subvol/c83b1844-0638-48f4-aff2-1c8f762eacaf"
]

[ceph: root@oc0-controller-0 /]# ceph nfs export get nfs "/volumes/_nogroup/cephfs-subvol/c83b1844-0638-48f4-aff2-1c8f762eacaf"
{
  "export_id": 1,
  "path": "/volumes/_nogroup/cephfs-subvol/c83b1844-0638-48f4-aff2-1c8f762eacaf",
  "cluster_id": "nfs",
  "pseudo": "/volumes/_nogroup/cephfs-subvol/c83b1844-0638-48f4-aff2-1c8f762eacaf",
  "access_type": "none",
  "squash": "none",
  "security_label": true,
  "protocols": [
    4
  ],
  "transports": [
    "TCP"
  ],
  "fsal": {
    "name": "CEPH",
    "user_id": "nfs.nfs.1",
    "fs_name": "cephfs"
  },
  "clients": [
    {
      "addresses": [
        "172.16.11.60"
      ],
      "access_type": "rw",
      "squash": "none"
    }
  ]
}

Mount the volume

[root@oc0-ceph-0 ~]# mount.nfs4 -o port=12049 172.16.11.29:/volumes/_nogroup/cephfs-subvol/c83b1844-0638-48f4-aff2-1c8f762eacaf /mnt/nfs

[root@oc0-ceph-0 ~]# ls /mnt/nfs/
file00 file02 file04 file06 file08 file10 file01 file03 file05 file07 file09

[root@oc0-ceph-0 ~]# umount /mnt/nfs

[root@oc0-ceph-0 ~]# mount.nfs4 -o port=20490 172.16.11.159:/volumes/_nogroup/cephfs-subvol/c83b1844-0638-48f4-aff2-1c8f762eacaf /mnt/nfs
mount.nfs4: mounting 172.16.11.159:/volumes/_nogroup/cephfs-subvol/c83b1844-0638-48f4-aff2-1c8f762eacaf failed, reason given by server: No such file or directory

From the client added to the export, we are able to mount the share by bypassing haproxy and going directly to the backend IP:PORT, while the same mount fails through the frontend VIP.
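
One way to confirm which source address the backend sees is to inspect the ganesha connections on a backend host while the mount through the VIP is attempted; a hypothetical check (if the proxy is not transparent, the established connection should show the haproxy/VIP address rather than the client's own IP):

[root@oc0-controller-0 ~]# ss -tnp | grep ganesha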


Related issues: 1 (0 open, 1 closed)

Related to Orchestrator - Feature #55663: cephadm/nfs: enable cephadm to provide one virtual IP per ganesha instance of the NFS service (Resolved, Adam King)

Actions #1

Updated by Ramana Raja almost 2 years ago

[root@oc0-ceph-0 ~]# mount.nfs4 -o port=20490 172.16.11.159:/volumes/_nogroup/cephfs-subvol/c83b1844-0638-48f4-aff2-1c8f762eacaf /mnt/nfs

mount.nfs4: mounting 172.16.11.159:/volumes/_nogroup/cephfs-subvol/c83b1844-0638-48f4-aff2-1c8f762eacaf failed, reason given by server: No such file or directory

The NFS Ganesha server only allows export access to the IP address 172.16.11.60 mentioned in the export definition. Even though the client with IP 172.16.11.60 is trying to mount the export, it is refused access by the NFS server. I think this is because the NFS server sees the request coming from haproxy's bind IP (the virtual IP, 172.16.11.159) and not from the client itself. For haproxy to present the client IP to the backend server, it needs to run in "full transparent proxy" mode, https://github.com/haproxy/haproxy/blob/v2.3.0/doc/configuration.txt#L9554 . I'm not sure if we can configure haproxy to run in this mode in front of the backend NFS Ganesha servers.
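
For illustration, a full transparent proxy backend section in haproxy would look roughly like the sketch below (hypothetical section name; server addresses taken from the ingress setup above; this assumes an haproxy build with TPROXY support and host routing rules that steer the backends' return traffic back through the proxy):

backend nfs-ganesha
    mode tcp
    # Present the real client's IP as the source address of the
    # proxied connection (full transparent proxy, needs TPROXY).
    source 0.0.0.0 usesrc clientip
    server oc0-controller-0 172.16.11.29:2049 check
    server oc0-controller-1 172.16.11.35:2049 check
    server oc0-controller-2 172.16.11.180:2049 check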

Actions #2

Updated by Ramana Raja almost 2 years ago

  • Subject changed from "Can't mount exports when the Ingress daemon is deployed" to "Can't mount NFS exports with IP access restrictions when the Ingress daemon is deployed in front of the NFS service"
Actions #3

Updated by Ramana Raja almost 2 years ago

  • Related to Feature #55663: cephadm/nfs: enable cephadm to provide one virtual IP per ganesha instance of the NFS service added
Actions #4

Updated by Ramana Raja almost 2 years ago

  • Related to Feature #55663: cephadm/nfs: enable cephadm to provide one virtual IP per ganesha instance of the NFS service added
Actions #5

Updated by Ramana Raja almost 2 years ago

  • Related to deleted (Feature #55663: cephadm/nfs: enable cephadm to provide one virtual IP per ganesha instance of the NFS service)
Actions #6

Updated by Ilya Dryomov almost 2 years ago

  • Target version deleted (v16.2.8)