Bug #42089
mgr/dashboard: No export is created for nfs-ganesha
Status:
Closed
% Done:
0%
Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
With a minikube Rook deployment using cluster-testing.yaml for the Ceph and NFS clusters, no export is created when one is requested through the dashboard API.
minikube version: v1.4.0
commit: 7969c25a98a018b94ea87d949350f3271e9d64b6
rook master branch
commit: 91b8a96fb6f6b2e8f15c48630f726880f91d1c6a
[varsha@localhost ~]$ ./run-backend-rook-api-request.sh POST /api/nfs-ganesha/export "$(cat ~/export.json)"
METHOD: POST
URL: https://192.168.39.115:30405/api/nfs-ganesha/export
DATA: {
    "cluster_id": "mynfs",
    "path": "/volumes/_nogroup/myfs-subvol",
    "fsal": {"name": "CEPH", "user_id": "admin", "fs_name": "myfs", "sec_label_xattr": null},
    "pseudo": "/cephfs-sub",
    "tag": null,
    "access_type": "RW",
    "squash": "no_root_squash",
    "protocols": [4],
    "transports": ["TCP"],
    "security_label": true,
    "daemons": ["mynfs.a", "mynfs.b"],
    "clients": []
}

[varsha@localhost ~]$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                            READY   STATUS      RESTARTS   AGE
kube-system   coredns-5644d7b6d9-6x7fz                        1/1     Running     0          33m
kube-system   coredns-5644d7b6d9-c24x7                        1/1     Running     0          33m
kube-system   etcd-minikube                                   1/1     Running     0          31m
kube-system   kube-addon-manager-minikube                     1/1     Running     0          32m
kube-system   kube-apiserver-minikube                         1/1     Running     0          32m
kube-system   kube-controller-manager-minikube                1/1     Running     0          32m
kube-system   kube-proxy-k5rf9                                1/1     Running     0          33m
kube-system   kube-scheduler-minikube                         1/1     Running     0          32m
kube-system   storage-provisioner                             1/1     Running     0          33m
rook-ceph     csi-cephfsplugin-bt5sw                          3/3     Running     0          18m
rook-ceph     csi-cephfsplugin-provisioner-75c965db4f-bpbvr   4/4     Running     0          18m
rook-ceph     csi-cephfsplugin-provisioner-75c965db4f-xxlnc   4/4     Running     0          18m
rook-ceph     csi-rbdplugin-provisioner-56cbc4d585-tjsxc      5/5     Running     0          18m
rook-ceph     csi-rbdplugin-provisioner-56cbc4d585-wg9wz      5/5     Running     0          18m
rook-ceph     csi-rbdplugin-pwpb6                             3/3     Running     0          18m
rook-ceph     rook-ceph-mds-myfs-a-f76fb8c89-rb6k5            1/1     Running     0          3m50s
rook-ceph     rook-ceph-mds-myfs-b-549fb947c-f8rrf            1/1     Running     0          3m50s
rook-ceph     rook-ceph-mgr-a-85457df48d-7tb5c                1/1     Running     0          15m
rook-ceph     rook-ceph-mon-a-59c5765d4d-xrczr                1/1     Running     0          15m
rook-ceph     rook-ceph-mon-b-c7587d4c7-l52p2                 1/1     Running     0          5m30s
rook-ceph     rook-ceph-mon-c-7974885c94-hckrp                1/1     Running     0          5m15s
rook-ceph     rook-ceph-nfs-mynfs-a-554f84698-r8h9v           2/2     Running     0          7m4s
rook-ceph     rook-ceph-nfs-mynfs-b-55f7cb77cc-j67st          2/2     Running     0          6m57s
rook-ceph     rook-ceph-operator-fdfbcc5c5-vkkj8              1/1     Running     0          31m
rook-ceph     rook-ceph-osd-0-67dbfdbb5b-8657c                1/1     Running     0          15m
rook-ceph     rook-ceph-osd-prepare-minikube-zvmtw            0/1     Completed   0          4m43s
rook-ceph     rook-ceph-tools-856c5bc6b4-nffvc                1/1     Running     0          9m48s
rook-ceph     rook-discover-zv8k9                             1/1     Running     0          23m

[root@minikube /]# ceph -s
  cluster:
    id:     047ef89f-693d-46e9-aba6-7ca471ea6799
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 16m)
    mgr: a(active, since 16m)
    mds: myfs:1 {0=myfs-a=up:active} 1 up:standby-replay
    osd: 1 osds: 1 up (since 26m), 1 in (since 26m)

  data:
    pools:   3 pools, 28 pgs
    objects: 27 objects, 2.2 KiB
    usage:   5.1 GiB used, 11 GiB / 16 GiB avail
    pgs:     28 active+clean

  io:
    client: 1.4 KiB/s rd, 2 op/s rd, 0 op/s wr
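As a side note, the export payload sent above can be sanity-checked locally before posting it. This is a minimal sketch, not part of the report: the required-field list is an assumption inferred from the payload shown above, not the dashboard's authoritative schema.

```python
import json

# Fields inferred from the POST payload above; this is an assumption,
# not the dashboard's official export schema.
REQUIRED_FIELDS = {
    "cluster_id", "path", "fsal", "pseudo", "access_type",
    "squash", "protocols", "transports", "daemons",
}

def missing_export_fields(payload: str) -> set:
    """Return the required fields absent from an export JSON payload."""
    data = json.loads(payload)
    return REQUIRED_FIELDS - data.keys()

export_json = """{
  "cluster_id": "mynfs",
  "path": "/volumes/_nogroup/myfs-subvol",
  "fsal": {"name": "CEPH", "user_id": "admin", "fs_name": "myfs", "sec_label_xattr": null},
  "pseudo": "/cephfs-sub",
  "tag": null,
  "access_type": "RW",
  "squash": "no_root_squash",
  "protocols": [4],
  "transports": ["TCP"],
  "security_label": true,
  "daemons": ["mynfs.a", "mynfs.b"],
  "clients": []
}"""

print(missing_export_fields(export_json))  # set() -- no missing fields
```

Since the payload parses cleanly and carries every field the request above uses, the failure is likely server-side rather than a malformed request.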