Bug #58514
ceph nfs export commands broken with rook orchestrator
Status:
New
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
Source:
Community (user)
Tags:
Backport:
Regression:
Yes
Severity:
1 - critical
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
The following shell script fragment was working under ceph 16.2.10:
ceph mgr module enable rook
ceph mgr module enable nfs
ceph orch set backend rook
ceph nfs export rm jhome /jhome
ceph nfs export create cephfs jhome /jhome jhome
ceph nfs export rm lsstdata /lsstdata
ceph nfs export create cephfs lsstdata /lsstdata lsstdata
ceph nfs export rm project /project
ceph nfs export create cephfs project /project project
ceph nfs export rm scratch /scratch
ceph nfs export create cephfs scratch /scratch scratch
All ceph nfs export commands appear to be broken under 17.2.3 (I did not test 17.2.[012]). This result has been replicated on two different k8s clusters.
$ ceph version
ceph version 17.2.3 (dff484dfc9e19a9819f375586300b3b79d80034d) quincy (stable)
$ ceph nfs export create cephfs jhome /jhome jhome
Error ENOENT: Cluster does not exist
command terminated with exit code 2
$ ceph nfs export rm jhome /jhome
Error ENOENT: Cluster does not exist
command terminated with exit code 2
Additionally, since the update to 17.2.x, the dashboard tab for "NFS" now shows that there are no NFS exports (it was populated under 16.2.10).
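The positional invocations in the script above can probably be rewritten using the explicit flags (`--cluster-id`, `--pseudo-path`, `--fsname`) that the working 17.2.3 command later in this report uses. A minimal sketch, assuming the old positional arguments map to cluster-id, pseudo-path, and fsname in that order (the `make_export_cmd` helper is hypothetical and only builds the command string; it does not run against a cluster):

```shell
# Hypothetical helper: translate an old positional invocation into the
# flag-based form seen working under 17.2.3. The argument mapping is an
# assumption and may need adjusting for a given deployment.
make_export_cmd() {
    # $1 = cluster-id, $2 = pseudo path, $3 = fsname (assumed mapping)
    printf 'ceph nfs export create cephfs --cluster-id %s --pseudo-path %s --fsname %s\n' \
        "$1" "$2" "$3"
}

make_export_cmd jhome /jhome jhome
```

Note this also presumes the NFS cluster named by `--cluster-id` already exists; the "Cluster does not exist" errors above suggest it does not under the rook backend.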
Updated by Joshua Hoblitt about 1 year ago
Is this PR related? https://github.com/ceph/ceph/pull/50057
Updated by Deepika Upadhyay 12 months ago
bash-4.4$ ceph mgr module enable nfs
module 'nfs' is already enabled
bash-4.4$ ceph orch set backend rook
bash-4.4$ ceph mgr module enable rook
module 'rook' is already enabled
bash-4.4$ ceph mgr module enable nfs
module 'nfs' is already enabled
bash-4.4$ ceph orch set backend rook^C
bash-4.4$ ceph nfs cluster create nfs-cephfs
NFS Cluster Created Successfully
bash-4.4$ ceph fs subvolume create filesystem primary-subvolume --size 1024
Error ENOENT: FS 'filesystem' not found
bash-4.4$ ceph fs subvolume create my-fs primary-subvolume --size 1024
Error ENOENT: FS 'my-fs' not found
bash-4.4$ ceph fs ls
name: myfs, metadata pool: myfs-metadata, data pools: [myfs-replicated ]
bash-4.4$ ceph fs subvolume create myfs primary-subvolume --size 1024
bash-4.4$ ceph fs subvolume getpath myfs primary-subvolume
/volumes/_nogroup/primary-subvolume/2dafd538-fe5e-4618-a51f-496ae1c5da0e
bash-4.4$ ceph nfs export create cephfs --cluster-id nfs-cephfs --pseudo-path /ceph/secondary --fsname myfs --path=/volumes/_nogroup/primary-subvolume/2dafd538-fe5e-4618-a51f-496ae1c5da0e
{
  "bind": "/ceph/secondary",
  "fs": "myfs",
  "path": "/volumes/_nogroup/primary-subvolume/2dafd538-fe5e-4618-a51f-496ae1c5da0e",
  "cluster": "nfs-cephfs",
  "mode": "RW"
}
bash-4.4$ ceph nfs export info nfs-cephfs /ceph/secondary
{
  "export_id": 1,
  "path": "/volumes/_nogroup/primary-subvolume/2dafd538-fe5e-4618-a51f-496ae1c5da0e",
  "cluster_id": "nfs-cephfs",
  "pseudo": "/ceph/secondary",
  "access_type": "RW",
  "squash": "none",
  "security_label": true,
  "protocols": [4],
  "transports": ["TCP"],
  "fsal": {
    "name": "CEPH",
    "user_id": "nfs.nfs-cephfs.1",
    "fs_name": "myfs"
  },
  "clients": []
}
I am not sure, but this alternative invocation works. Is this how it is supposed to work now? Was there a PR that changed the command syntax?
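For reference, the working sequence from the session above distills to the following fragment (the names `nfs-cephfs`, `myfs`, `primary-subvolume`, and `/ceph/secondary` are taken from that session; capturing the subvolume path into a variable is an added convenience, and a live cluster is required, so this is illustrative only):

```shell
# Distilled from the 17.2.3 session above; requires a running cluster.
ceph nfs cluster create nfs-cephfs
ceph fs subvolume create myfs primary-subvolume --size 1024
path=$(ceph fs subvolume getpath myfs primary-subvolume)
ceph nfs export create cephfs \
    --cluster-id nfs-cephfs \
    --pseudo-path /ceph/secondary \
    --fsname myfs \
    --path="$path"
```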