OpenStack Manila and Ceph

Summary

Make CephFS easy to use and consume via OpenStack Manila (file as a service)

Owners

  • Sage Weil (Red Hat)

Interested Parties

  • Danny Al-Gaaf (Deutsche Telekom)

Current Status

There is a generic Manila driver that takes a volume mapped into the Manila service VM and exports it via NFS. This can be used in combination with a standard RBD volume provisioned via Cinder.
There is also a Ganesha driver that uses NFS-Ganesha to re-export a shared file system via NFS. Depending on how Ganesha is configured, it can use either the Gluster FSAL or the Ceph FSAL.
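
For reference, a minimal NFS-Ganesha export block using the Ceph FSAL might look roughly like this (the export ID and paths are illustrative, and the available options vary by Ganesha version):

  EXPORT
  {
      Export_Id = 1;
      Path = "/";              # path within CephFS to re-export
      Pseudo = "/cephfs";      # NFSv4 pseudo-filesystem path
      Access_Type = RW;
      FSAL {
          Name = CEPH;         # use the Ceph FSAL; GLUSTER would go here instead
      }
  }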

Detailed Description

I see four current options:
  1. default driver: RBD mapped to a VM that exports NFS
  2. ganesha driver: CephFS mounted via the Ceph FSAL by Ganesha running in a VM, exporting NFS
  3. ceph native driver: set up shares/auth keys so that guest VMs can mount CephFS directly (see the CLI sketch after this list)
    1. this driver needs to
      1. create a ceph auth key
      2. create a directory in cephfs
    2. there are several security and multitenancy gaps
      1. cephfs doesn't let you restrict a key to a specific subdirectory
      2. cephfs currently only restricts the data path at the granularity of a rados pool
        1. we could create a rados pool per user, but that scales poorly
        2. we could add a namespace field to the ceph_file_layout policy and lock each user into a namespace within a shared data pool
    3. guest VMs need to talk to ceph, which means they need to be on the same network or on routable networks; this makes network people freak out in multitenant environments, but it may not be an issue in single-tenant ones
  4. virtfs to host-mounted kernel cephfs: mount CephFS on the host with the kernel client, then use 9p/virtfs to share it with the guest (see the sketch after this list)
    1. is QEMU virtfs still maintained? I heard that IBM had abandoned this effort
    2. can we use NFS in place of virtfs here with equivalent functionality?
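
To make the native driver's per-share work concrete, the steps might look like the following CLI sketch. The client name, pool, and paths are hypothetical, and because the subdirectory and namespace restrictions discussed above don't exist yet, the key grants access to the entire file system and data pool:

  # create a cephx key for the share (cannot yet be limited to a subdirectory)
  ceph auth get-or-create client.manila-share-1 \
      mon 'allow r' mds 'allow' osd 'allow rw pool=cephfs_data'

  # create the share's directory via a cephfs mount on the manila host
  mkdir -p /mnt/cephfs/volumes/share-1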
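
For option 4, the moving parts are a host-side kernel mount plus a libvirt filesystem passthrough device; a sketch, with a made-up monitor address, mount point, and mount tag:

  # on the hypervisor: mount cephfs with the kernel client
  mount -t ceph mon1:6789:/ /mnt/cephfs

  # guest XML: expose a share subdirectory over 9p/virtfs
  <filesystem type='mount' accessmode='passthrough'>
    <source dir='/mnt/cephfs/volumes/share-1'/>
    <target dir='share-1'/>
  </filesystem>

  # in the guest: mount the tag via 9p
  mount -t 9p -o trans=virtio share-1 /mnt/share-1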

Work items

Coding tasks

  1. ceph native driver (see the skeleton sketch below)
  2. test Ganesha + the Ceph FSAL in the Sepia community lab
  3. ?
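
As a starting point for the first task, here is a minimal sketch of what a native CephFS driver could look like against Manila's ShareDriver interface. The class name, config option, pool name, and mount-point convention are all assumptions for illustration, not an existing implementation:

  import subprocess

  from manila.share import driver


  class CephFSNativeDriver(driver.ShareDriver):
      """Hypothetical sketch of a native CephFS share driver.

      Creates one directory per share in CephFS and one cephx key per
      access rule, and hands guests a native ceph mount target.
      Assumes the manila host has CephFS mounted at /mnt/cephfs.
      """

      def create_share(self, context, share, share_server=None):
          path = '/volumes/%s' % share['id']
          subprocess.check_call(['mkdir', '-p', '/mnt/cephfs' + path])
          # Export location the guest uses to mount CephFS directly,
          # e.g. "mon1,mon2,mon3:/volumes/<id>"; cephfs_mon_hosts is a
          # made-up config option.
          return '%s:%s' % (self.configuration.cephfs_mon_hosts, path)

      def allow_access(self, context, share, access, share_server=None):
          # Until per-subdirectory MDS caps or layout namespaces exist,
          # this key can see the whole file system (the gap noted in the
          # detailed description).
          subprocess.check_call([
              'ceph', 'auth', 'get-or-create',
              'client.manila-%s' % access['access_to'],
              'mon', 'allow r', 'mds', 'allow',
              'osd', 'allow rw pool=cephfs_data'])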