Feature #47490
Integration of dashboard with volume/nfs module
Status: open
Description
Currently, there are two ways to create exports: with the mgr/volume/nfs module and
with the dashboard. Both use the same code [1][2], with modifications, to create exports.
Recently, there was a meeting [3] to discuss the integration of the dashboard with
the volume/nfs module. A number of TODO items were identified.
Below is a brief description of the export creation workflow:
1) mgr/volume/nfs module [4]
- It was introduced in Octopus.
- It automates the pool and cluster creation with "ceph nfs cluster create" command.
- It currently uses 'cephadm' as the backend; in the future 'rook' will also be supported.
- Default exports can be created with
'ceph nfs export create cephfs <fsname> <clusterid> <binding> [--readonly] [--path=/path/in/cephfs]'.
Otherwise, the `ceph nfs cluster config set <clusterid> -i <config_file>` command
can be used to create user-defined exports, or even to modify the ganesha configuration.
- RGW exports are not supported [5]. We need someone to help with it.
- Exports can be listed, fetched and deleted, but cannot currently be modified [6].
- Only NFSv4 is supported. It provides better cache management, parallelism,
compound operations, and lease-based locks than previous versions.
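The config file passed to `ceph nfs cluster config set` is a ganesha configuration
fragment defining user-defined exports. A minimal sketch of such a file follows; the
Export_Id, paths, filesystem name and cephx user are illustrative values, not taken
from this issue:

```
EXPORT {
    Export_Id = 100;
    Protocols = 4;
    Transports = TCP;
    Path = /;
    Pseudo = /cephfs;
    Access_Type = RW;
    Squash = None;
    FSAL {
        Name = CEPH;
        Filesystem = "a";
        User_Id = "nfs.mynfs.1";
    }
}
```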
2) dashboard [7]
- The pool and the NFS cluster need to be created explicitly.
- It also requires the
"ceph dashboard set-ganesha-clusters-rados-pool-namespace <pool_name>[/<namespace>]"
command to be run before exports can be created, and the following options need to
be specified: cluster id, daemons, path, pseudo path, access type, squash,
security label, protocols [3, 4], transport [UDP, TCP], cephfs user id, cephfs name.
- It supports both cephfs and rgw exports.
- Exports can be modified, listed, fetched and deleted.
- Available since Nautilus.
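The option list above suggests the shape of an export specification as the dashboard
collects it. The sketch below is an illustration only (field names are assumptions
based on that list, not the dashboard's actual API), showing a simple completeness
check over such a spec:

```python
# Illustrative export spec fields, assumed from the option list above
# (not the dashboard's actual internal model).
REQUIRED_FIELDS = {
    "cluster_id", "daemons", "path", "pseudo", "access_type",
    "squash", "security_label", "protocols", "transports",
    "fsal",  # cephfs user id and cephfs name live under the FSAL section
}

def validate_export(export: dict) -> list:
    """Return the sorted list of required fields missing from an export spec."""
    return sorted(REQUIRED_FIELDS - export.keys())

example = {
    "cluster_id": "mynfs",
    "daemons": ["nfs.mynfs.a"],
    "path": "/",
    "pseudo": "/cephfs",
    "access_type": "RW",
    "squash": "no_root_squash",
    "security_label": False,
    "protocols": [4],
    "transports": ["TCP"],
    "fsal": {"name": "CEPH", "user_id": "admin", "fs_name": "a"},
}
print(validate_export(example))  # -> []
```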
We would like to create a common code base for it and eventually go in a
direction where the dashboard may use the volumes/nfs plugin for configuring
NFS clusters.
These are the issues we identified in our meeting:
- Difference in user workflow between volume/nfs and dashboard.
- rgw exports need to be supported in volume/nfs module.
- Dashboard does not want to depend on the orchestrator in the future for fetching
the cluster pool and namespace.
- Dashboard creates a config object per daemon containing the export object rados URL.
- In cephadm, all daemons within the cluster watch a single config object. This
config object contains the rados URLs of the export objects.
rados://$pool/$namespace/export-$i       rados://$pool/$namespace/userconf-nfs.$svc
         (export config)                          (user defined config)

  +----------+  +----------+  +----------+      +---------+
  |          |  |          |  |          |      |         |
  | export-1 |  | export-2 |  | export-3 |      | export  |
  |          |  |          |  |          |      |         |
  +----+-----+  +----+-----+  +-----+----+      +----+----+
       ^             ^              ^                ^
       |             |              |                |
       +-------------+--------------+----------------+
                          %url |
                               |
                      +--------+--------+
                      |                 |  rados://$pool/$namespace/conf-nfs.$svc
                      |  conf+nfs.$svc  |           (common config)
                      |                 |
                      +--------+--------+
                               ^
                    watch_url  |
            +------------------+------------------+
            |                  |                  |
            |                  |                  |  RADOS
  +------------------------------------------------------------------+
            |                  |                  |  CONTAINER
  watch_url |       watch_url  |       watch_url  |
            |                  |                  |
   +--------+-------+ +--------+-------+ +--------+-------+
   |                | |                | |                |  /etc/ganesha/ganesha.conf
   |   nfs.$svc.a   | |   nfs.$svc.b   | |   nfs.$svc.c   |  (bootstrap config)
   |                | |                | |                |
   +----------------+ +----------------+ +----------------+
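The indirection described above (each daemon watches the common config object, which
references the per-export objects via %url directives) can be sketched as a small
helper. The pool/namespace names below are hypothetical, and this assumes the common
object simply contains one %url line per referenced object:

```python
import re

# Hypothetical content of the common config object conf-nfs.$svc;
# pool and namespace names are illustrative.
COMMON_CONF = """\
%url "rados://nfs-ganesha/mynfs/export-1"
%url "rados://nfs-ganesha/mynfs/export-2"
%url "rados://nfs-ganesha/mynfs/userconf-nfs.mynfs"
"""

def export_urls(conf: str) -> list:
    """Extract the rados:// URLs referenced by %url directives."""
    return re.findall(r'%url\s+"(rados://[^"]+)"', conf)

print(export_urls(COMMON_CONF))
# -> ['rados://nfs-ganesha/mynfs/export-1',
#     'rados://nfs-ganesha/mynfs/export-2',
#     'rados://nfs-ganesha/mynfs/userconf-nfs.mynfs']
```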
[1] https://github.com/ceph/ceph/blob/master/src/pybind/mgr/volumes/fs/nfs.py
[2] https://github.com/ceph/ceph/blob/master/src/pybind/mgr/dashboard/services/ganesha.py
[3] https://pad.ceph.com/p/dashboard-nfs
[4] https://docs.ceph.com/docs/master/cephfs/fs-nfs-exports
[5] https://tracker.ceph.com/issues/47172
[6] https://tracker.ceph.com/issues/45746
[7] https://docs.ceph.com/docs/master/mgr/dashboard/#nfs-ganesha-management