Bug #57711

Exports not updated correctly when using ceph_argparse

Added by Victoria Martinez de la Cruz 2 months ago. Updated 2 months ago.

Status: Rejected
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

When the command nfs export apply is issued through the ceph_argparse.py library and an export already exists for the subvolume, the execution fails with the following error:

ERROR oslo_messaging.rpc.server [None req-8acab01d-0820-4e27-99cd-e56e0cfb8dbf demo None] Exception during message handling: manila.exception.ShareBackendException: json_command failed - prefix=nfs export apply, argdict={'nfs_cluster_id': 'cephfs', 'format': 'json'} - exception message: [errno -22] No daemons exist under service name "nfs.None". View currently running services using "ceph orch ls".
ERROR oslo_messaging.rpc.server Traceback (most recent call last):
ERROR oslo_messaging.rpc.server File "/opt/stack/manila/manila/share/drivers/cephfs/driver.py", line 203, in rados_command
ERROR oslo_messaging.rpc.server raise rados.Error(outs, ret)
ERROR oslo_messaging.rpc.server rados.Error: [errno -22] No daemons exist under service name "nfs.None". View currently running services using "ceph orch ls"
ERROR oslo_messaging.rpc.server
ERROR oslo_messaging.rpc.server During handling of the above exception, another exception occurred:
ERROR oslo_messaging.rpc.server
ERROR oslo_messaging.rpc.server Traceback (most recent call last):
ERROR oslo_messaging.rpc.server File "/usr/local/lib/python3.8/dist-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
ERROR oslo_messaging.rpc.server File "/usr/local/lib/python3.8/dist-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
ERROR oslo_messaging.rpc.server File "/usr/local/lib/python3.8/dist-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
ERROR oslo_messaging.rpc.server File "/opt/stack/manila/manila/share/manager.py", line 220, in wrapped
ERROR oslo_messaging.rpc.server return f(self, *args, **kwargs)
ERROR oslo_messaging.rpc.server File "/opt/stack/manila/manila/utils.py", line 579, in wrapper
ERROR oslo_messaging.rpc.server return func(self, *args, **kwargs)
ERROR oslo_messaging.rpc.server File "/opt/stack/manila/manila/share/manager.py", line 4004, in update_access
ERROR oslo_messaging.rpc.server self.update_access_for_instances(context, [share_instance_id],
ERROR oslo_messaging.rpc.server File "/opt/stack/manila/manila/share/manager.py", line 4018, in update_access_for_instances
ERROR oslo_messaging.rpc.server self.access_helper.update_access_rules(
ERROR oslo_messaging.rpc.server File "/opt/stack/manila/manila/share/access.py", line 299, in update_access_rules
ERROR oslo_messaging.rpc.server self._update_access_rules(context, share_instance_id,
ERROR oslo_messaging.rpc.server File "/opt/stack/manila/manila/share/access.py", line 336, in _update_access_rules
ERROR oslo_messaging.rpc.server driver_rule_updates = self._update_rules_through_share_driver(
ERROR oslo_messaging.rpc.server File "/opt/stack/manila/manila/share/access.py", line 401, in _update_rules_through_share_driver
ERROR oslo_messaging.rpc.server driver_rule_updates = self.driver.update_access(
ERROR oslo_messaging.rpc.server File "/opt/stack/manila/manila/share/drivers/cephfs/driver.py", line 538, in update_access
ERROR oslo_messaging.rpc.server return self.protocol_helper.update_access(
ERROR oslo_messaging.rpc.server File "/opt/stack/manila/manila/share/drivers/cephfs/driver.py", line 1293, in update_access
ERROR oslo_messaging.rpc.server self._allow_access(share, clients)
ERROR oslo_messaging.rpc.server File "/opt/stack/manila/manila/share/drivers/cephfs/driver.py", line 1235, in _allow_access
ERROR oslo_messaging.rpc.server output = rados_command(self.rados_client,
ERROR oslo_messaging.rpc.server File "/opt/stack/manila/manila/share/drivers/cephfs/driver.py", line 215, in rados_command
ERROR oslo_messaging.rpc.server raise exception.ShareBackendException(msg)
ERROR oslo_messaging.rpc.server manila.exception.ShareBackendException: json_command failed - prefix=nfs export apply, argdict={'nfs_cluster_id': 'cephfs', 'format': 'json'} - exception message: [errno -22] No daemons exist under service name "nfs.None". View currently running services using "ceph orch ls".
ERROR oslo_messaging.rpc.server

Context
-------

ceph_argparse is used from within the CephFS driver in OpenStack Manila. This command requires passing both an argdict specifying the NFS cluster ID and a bytes object (a JSON-encoded export specification) with the export information.

DEBUG manila.share.drivers.cephfs.driver [None req-8acab01d-0820-4e27-99cd-e56e0cfb8dbf demo None] Invoking ceph_argparse.json_command - rados_client=<rados.Rados object at 0x7fd6f4d1b5e0>, target=('mon-mgr',), prefix='nfs export apply', argdict={'nfs_cluster_id': 'cephfs', 'format': 'json'}, inbuf=b'{"path": "/volumes/_nogroup/ba15fa20-80de-410b-8a3a-8b7bf095ce54/70cf4ce4-ad79-44ca-be95-35dcd948dcc8", "nfs_cluster_id": "cephfs", "pseudo": "/volumes/_nogroup/ba15fa20-80de-410b-8a3a-8b7bf095ce54/70cf4ce4-ad79-44ca-be95-35dcd948dcc8", "squash": "none", "security_label": true, "protocols": [4], "fsal": {"name": "CEPH", "fs_name": "cephfs"}, "clients": [{"access_type": "rw", "addresses": ["10.0.0.10", "10.0.0.15"], "squash": "none"}]}', timeout=10. {{(pid=3769271) rados_command /opt/stack/manila/manila/share/drivers/cephfs/driver.py:189}}
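For reference, a minimal sketch of how this invocation is assembled (variable names are illustrative; the export spec and argdict mirror the debug log above):

```python
import json

# Export specification sent as the inbuf payload (JSON-encoded bytes),
# mirroring the debug log above. The command arguments travel separately
# in argdict.
export_spec = {
    "path": "/volumes/_nogroup/ba15fa20-80de-410b-8a3a-8b7bf095ce54/70cf4ce4-ad79-44ca-be95-35dcd948dcc8",
    "nfs_cluster_id": "cephfs",
    "pseudo": "/volumes/_nogroup/ba15fa20-80de-410b-8a3a-8b7bf095ce54/70cf4ce4-ad79-44ca-be95-35dcd948dcc8",
    "squash": "none",
    "security_label": True,
    "protocols": [4],
    "fsal": {"name": "CEPH", "fs_name": "cephfs"},
    "clients": [{"access_type": "rw",
                 "addresses": ["10.0.0.10", "10.0.0.15"],
                 "squash": "none"}],
}
inbuf = json.dumps(export_spec).encode("utf-8")
argdict = {"nfs_cluster_id": "cephfs", "format": "json"}

# With a connected rados client, the call as logged would be:
# from ceph_argparse import json_command
# ret, outbuf, outs = json_command(rados_client, target=('mon-mgr',),
#                                  prefix='nfs export apply',
#                                  argdict=argdict, inbuf=inbuf, timeout=10)
```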

The output of this operation is the trace above (ERROR oslo_messaging.rpc.server manila.exception.ShareBackendException: json_command failed - prefix=nfs export apply, argdict={'nfs_cluster_id': 'cephfs', 'format': 'json'} - exception message: [errno -22] No daemons exist under service name "nfs.None". View currently running services using "ceph orch ls".)

Manually checking the cluster shows the following:

stack@quincy:~/devstack$ sudo ceph --id=manila nfs export ls cephfs
[
"/volumes/_nogroup/ba15fa20-80de-410b-8a3a-8b7bf095ce54/70cf4ce4-ad79-44ca-be95-35dcd948dcc8"
]

This shows that the NFS service is up and running and that the cluster exists.

Ceph and NFS have been deployed with cephadm.

Executing the same command from the CLI (ceph --id=manila nfs export apply cephfs -i export.json) works.

Specifically, updating the export with a modified JSON file:

stack@quincy:~/devstack$ sudo ceph --id=manila nfs export apply cephfs -i export-test-5-updated.json
Updated export /volumes/_nogroup/ba15fa20-80de-410b-8a3a-8b7bf095ce54/70cf4ce4-ad79-44ca-be95-35dcd948dcc8

stack@quincy:~/devstack$ sudo ceph --id=manila nfs export info cephfs /volumes/_nogroup/ba15fa20-80de-410b-8a3a-8b7bf095ce54/70cf4ce4-ad79-44ca-be95-35dcd948dcc8
{
  "export_id": 1,
  "path": "/volumes/_nogroup/ba15fa20-80de-410b-8a3a-8b7bf095ce54/70cf4ce4-ad79-44ca-be95-35dcd948dcc8",
  "cluster_id": "cephfs",
  "pseudo": "/volumes/_nogroup/ba15fa20-80de-410b-8a3a-8b7bf095ce54/70cf4ce4-ad79-44ca-be95-35dcd948dcc8",
  "access_type": "RO",
  "squash": "none",
  "security_label": true,
  "protocols": [
    4
  ],
  "transports": [
    "TCP"
  ],
  "fsal": {
    "name": "CEPH",
    "user_id": "nfs.cephfs.1",
    "fs_name": "cephfs"
  },
  "clients": [
    {
      "addresses": [
        "10.0.0.10",
        "10.0.0.15"
      ],
      "access_type": "rw",
      "squash": "none"
    }
  ]
}

Expected result
---------------

We should be able to use ceph_argparse to update exports of subvolumes.

Actual result
-------------

The "ceph nfs export apply <nfs_cluster_id> -i export.json" command issued through ceph_argparse fails with an exception instead of updating the desired export.

History

#1 Updated by Ramana Raja 2 months ago

  • Status changed from New to Need More Info

Can you try replacing 'nfs_cluster_id' with 'cluster_id' in the argdict, as mentioned in the docs:
https://docs.ceph.com/en/quincy/mgr/nfs/#create-or-update-export-via-json-specification
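A minimal sketch of the corrected argdict (the JSON export spec in inbuf is unchanged; variable names are illustrative). Because the module never received a 'cluster_id' value, it presumably fell back to None, which would explain the "nfs.None" service name in the error:

```python
# Corrected argdict: the mgr NFS module expects the key 'cluster_id',
# not 'nfs_cluster_id'. The export spec itself still travels in inbuf.
argdict = {"cluster_id": "cephfs", "format": "json"}

# Hypothetical corrected call from the driver's rados_command helper:
# ret, outbuf, outs = json_command(rados_client, target=('mon-mgr',),
#                                  prefix='nfs export apply',
#                                  argdict=argdict, inbuf=inbuf, timeout=10)
```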

#2 Updated by Victoria Martinez de la Cruz 2 months ago

Thanks Ramana, that was the issue; I fixed it in the driver. We can close this tracker.

#3 Updated by Ramana Raja 2 months ago

  • Status changed from Need More Info to Rejected
