Bug #42089


mgr/dashboard: No export is created for nfs-ganesha

Added by Varsha Rao over 4 years ago. Updated about 3 years ago.

Status:
Resolved
Priority:
Normal
Assignee:
-
Category:
Component - NFS
Target version:
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

When deploying the Ceph and NFS clusters with Rook on minikube using cluster-testing.yaml, no export is created.

minikube version: v1.4.0
commit: 7969c25a98a018b94ea87d949350f3271e9d64b6

rook master branch
commit: 91b8a96fb6f6b2e8f15c48630f726880f91d1c6a

[varsha@localhost ~]$ ./run-backend-rook-api-request.sh POST /api/nfs-ganesha/export "$(cat ~/export.json)" 
METHOD: POST
URL: https://192.168.39.115:30405/api/nfs-ganesha/export
DATA: {
      "cluster_id": "mynfs",
      "path": "/volumes/_nogroup/myfs-subvol",
      "fsal": {"name": "CEPH", "user_id":"admin", "fs_name": "myfs", "sec_label_xattr": null},
      "pseudo": "/cephfs-sub",
      "tag": null,
      "access_type": "RW",
      "squash": "no_root_squash",
      "protocols": [4],
      "transports": ["TCP"],
      "security_label": true,
      "daemons": ["mynfs.a", "mynfs.b"],
      "clients": []
 }
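As an aside, the DATA body above is plain JSON; rebuilt as a Python dict (field names copied verbatim from the request, nothing added), it round-trips through serialization, which makes it easy to tweak fields like pseudo or protocols between retries:

```python
import json

# The export request body from the report, as a dict.
# JSON null maps to Python None, true to True.
export = {
    "cluster_id": "mynfs",
    "path": "/volumes/_nogroup/myfs-subvol",
    "fsal": {"name": "CEPH", "user_id": "admin",
             "fs_name": "myfs", "sec_label_xattr": None},
    "pseudo": "/cephfs-sub",
    "tag": None,
    "access_type": "RW",
    "squash": "no_root_squash",
    "protocols": [4],
    "transports": ["TCP"],
    "security_label": True,
    "daemons": ["mynfs.a", "mynfs.b"],
    "clients": [],
}

body = json.dumps(export)  # serializes back to the DATA string posted above
assert json.loads(body)["fsal"]["sec_label_xattr"] is None
```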

[varsha@localhost ~]$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                            READY   STATUS      RESTARTS   AGE
kube-system   coredns-5644d7b6d9-6x7fz                        1/1     Running     0          33m
kube-system   coredns-5644d7b6d9-c24x7                        1/1     Running     0          33m
kube-system   etcd-minikube                                   1/1     Running     0          31m
kube-system   kube-addon-manager-minikube                     1/1     Running     0          32m
kube-system   kube-apiserver-minikube                         1/1     Running     0          32m
kube-system   kube-controller-manager-minikube                1/1     Running     0          32m
kube-system   kube-proxy-k5rf9                                1/1     Running     0          33m
kube-system   kube-scheduler-minikube                         1/1     Running     0          32m
kube-system   storage-provisioner                             1/1     Running     0          33m
rook-ceph     csi-cephfsplugin-bt5sw                          3/3     Running     0          18m
rook-ceph     csi-cephfsplugin-provisioner-75c965db4f-bpbvr   4/4     Running     0          18m
rook-ceph     csi-cephfsplugin-provisioner-75c965db4f-xxlnc   4/4     Running     0          18m
rook-ceph     csi-rbdplugin-provisioner-56cbc4d585-tjsxc      5/5     Running     0          18m
rook-ceph     csi-rbdplugin-provisioner-56cbc4d585-wg9wz      5/5     Running     0          18m
rook-ceph     csi-rbdplugin-pwpb6                             3/3     Running     0          18m
rook-ceph     rook-ceph-mds-myfs-a-f76fb8c89-rb6k5            1/1     Running     0          3m50s
rook-ceph     rook-ceph-mds-myfs-b-549fb947c-f8rrf            1/1     Running     0          3m50s
rook-ceph     rook-ceph-mgr-a-85457df48d-7tb5c                1/1     Running     0          15m
rook-ceph     rook-ceph-mon-a-59c5765d4d-xrczr                1/1     Running     0          15m
rook-ceph     rook-ceph-mon-b-c7587d4c7-l52p2                 1/1     Running     0          5m30s
rook-ceph     rook-ceph-mon-c-7974885c94-hckrp                1/1     Running     0          5m15s
rook-ceph     rook-ceph-nfs-mynfs-a-554f84698-r8h9v           2/2     Running     0          7m4s
rook-ceph     rook-ceph-nfs-mynfs-b-55f7cb77cc-j67st          2/2     Running     0          6m57s
rook-ceph     rook-ceph-operator-fdfbcc5c5-vkkj8              1/1     Running     0          31m
rook-ceph     rook-ceph-osd-0-67dbfdbb5b-8657c                1/1     Running     0          15m
rook-ceph     rook-ceph-osd-prepare-minikube-zvmtw            0/1     Completed   0          4m43s
rook-ceph     rook-ceph-tools-856c5bc6b4-nffvc                1/1     Running     0          9m48s
rook-ceph     rook-discover-zv8k9                             1/1     Running     0          23m

[root@minikube /]# ceph -s
  cluster:
    id:     047ef89f-693d-46e9-aba6-7ca471ea6799
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 16m)
    mgr: a(active, since 16m)
    mds: myfs:1 {0=myfs-a=up:active} 1 up:standby-replay
    osd: 1 osds: 1 up (since 26m), 1 in (since 26m)

  data:
    pools:   3 pools, 28 pgs
    objects: 27 objects, 2.2 KiB
    usage:   5.1 GiB used, 11 GiB / 16 GiB avail
    pgs:     28 active+clean

  io:
    client:   1.4 KiB/s rd, 2 op/s rd, 0 op/s wr

Files

mgr_log.txt (294 KB) mgr_log.txt Varsha Rao, 09/28/2019 08:40 AM
Actions #1

Updated by Jeff Layton over 4 years ago

In my testing, it looks like rook is setting up the mgr dashboard service to use http instead of https:

++ curl --insecure -H 'Content-Type: application/json' -X POST -d '{"username":"admin","password":"4i9sRVSoZC"}' https://kube.poochiereds.net:31436/api/auth
++ sed -e 's/"//g'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number

The "wrong version number" error means curl sent a TLS handshake to a port that answered with plaintext HTTP. If I connect to the port with unencrypted HTTP instead, it works, but I don't think we want to be exposing the dashboard via unencrypted HTTP.
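This failure mode is easy to reproduce without a cluster: speak TLS to any port that serves plaintext HTTP. A local sketch (the port number is arbitrary):

```shell
# Serve plain HTTP on an arbitrary local port.
python3 -m http.server 8099 >/dev/null 2>&1 &
SRV=$!
sleep 1

# A TLS handshake against the plaintext server fails, just like the
# dashboard call above (curl exit code 35, "wrong version number").
curl --silent --insecure https://127.0.0.1:8099/ || echo "https handshake failed"

# Plain HTTP on the same port works.
curl --silent --output /dev/null http://127.0.0.1:8099/ && echo "http ok"

kill $SRV
```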

Actions #2

Updated by Jeff Layton over 4 years ago

Aha! I think we need to set this under the dashboard: section of cluster-minimal.yaml:

ssl: true

...it looks like cluster.yaml has this directive set, but cluster-minimal.yaml doesn't. Personally, it seems like the default ought to be "true" there, but for now I think we'll need this as a workaround. I'll queue up a PR for rook too.
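For reference, the relevant fragment of the CephCluster spec would look roughly like this (a sketch; the placement follows the dashboard: block Jeff mentions, and the enabled field is included only for context):

```yaml
spec:
  dashboard:
    enabled: true
    ssl: true
```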

Actions #4

Updated by Varsha Rao over 4 years ago

Based on Jeff's PR above, I've opened a new PR to fix cluster-test.yaml: https://github.com/rook/rook/pull/4026

Actions #5

Updated by Varsha Rao over 4 years ago

  • Status changed from New to Resolved
Actions #6

Updated by Ernesto Puerta about 3 years ago

  • Project changed from mgr to Dashboard
  • Category changed from 144 to Component - NFS