1G - Client Security for CephFS

Live Pad

The live pad can be found here: [pad]

Summit Snapshot

Coding tasks

  1. Add support for adding/showing/... the new capabilities to the authentication layer (?); see the sketch below this list.
  2. Add checks against the new capabilities in the MDS layer.
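
To make coding task 1 a bit more concrete, here is a minimal sketch of what a path-scoped MDS cap might look like once parsed in the auth layer. The "allow rw path=/..." grant syntax and the names (MDSCapGrant, parse_mds_caps) are assumptions for illustration, not an agreed design:

```python
import re
from dataclasses import dataclass

@dataclass
class MDSCapGrant:
    read: bool
    write: bool
    path: str  # subtree this grant applies to; "/" means the whole filesystem

# Hypothetical grant syntax: "allow r|w|rw|* [path=<prefix>]", comma-separated.
_GRANT_RE = re.compile(r"allow\s+(?P<spec>rw|r|w|\*)(\s+path=(?P<path>\S+))?")

def parse_mds_caps(cap_string: str) -> list[MDSCapGrant]:
    """Parse a cap string such as 'allow rw path=/volumes/alice, allow r path=/shared'."""
    grants = []
    for part in cap_string.split(","):
        m = _GRANT_RE.fullmatch(part.strip())
        if m is None:
            raise ValueError(f"unparseable MDS cap grant: {part!r}")
        spec = m.group("spec")
        grants.append(MDSCapGrant(
            read=spec in ("r", "rw", "*"),
            write=spec in ("w", "rw", "*"),
            path=m.group("path") or "/",
        ))
    return grants

if __name__ == "__main__":
    for g in parse_mds_caps("allow rw path=/volumes/alice, allow r path=/shared"):
        print(g)
```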

Build / release tasks

  1. ???

Documentation tasks

  1. Document any new capabilities.
  2. Document any added 'ceph' or 'cephfs' sub-commands.
  3. ???

Deprecation tasks

  1. n/a

Notes from the discussion:

- Short term, use Pools to segregate different authenticated parties, for both file and metadata storage
- Namespaces may be worth considering in the future, but not at this time (they don't exist yet)
- Performance shouldn't be worse than NFS, or the world will be angry, but no details at this time
- There is a limit to the number of pools that are viable to use
-- this limits the scope and scale of securing separate clients
-- namespaces may end up with permissions across them, separate from pools and object prefixes; under discussion
-- namespaces would avoid pool limits, but renames across clients become a problem
-- Permissions per TB of data would scale well, permissions per MB of data would be excessive
-- more per-client pools means more OSDs, since the placement groups per OSD have to stay manageable
- Next steps
-- Not an Inktank priority; it would be done by a third party
-- (see coding tasks above)
-- insert a check_access_permissions() call in the obvious, appropriate place (see the sketch after these notes)
--- parameters: either a list of paths or a message
--- results: yes/no for all operations, or per read/write operation
--- path plus mask in, yes/no out; pattern match against the capabilities
--- model the permission checking on the OSD caps class already used in the OSD
-- Not a large project
-- mostly testing
-- MDS handles an open inodes list, for open files that have been unlinked.
--- stray directory
--- Give everyone permission to the stray directory in the short term, to avoid tracking who deleted inodes
-- get the permission granularity correct
-- test everything
--- regression in other areas is not hard to test
--- we have no tests of this yet
--- boundary conditions around hard links
--- creating as well as using hard links
--- permissions for following and creating symlinks
--- hardlinks to data stored in another pool
- Opinions: "We need finer-grained permissions than per-pool". Use case follows:
-- using openstack nova
-- large number of projects
-- allocated shared storage per project
-- glusterfs currently: not scaling, due to finer-grained permissions not being available
-- they want one pool with different permissions for directories within the pool
-- Think about the NFS shared export model
-- This use case is not top of the list, because RADOS doesn't have a good mechanism for this yet
-- Namespaces might be an answer for this eventually, but Ceph/RADOS isn't there yet
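
A minimal sketch of the check_access_permissions() idea from the next-steps notes above, assuming the path-scoped grants sketched earlier: the MDS passes in the affected path(s) plus a read/write mask, and the function pattern-matches them against the client's capabilities and answers yes/no, loosely in the spirit of the OSD caps check. The names, mask bits, and signature here are assumptions, not the actual MDS interface:

```python
from dataclasses import dataclass

# Access mask bits, loosely following the usual MAY_READ/MAY_WRITE convention.
MAY_READ = 0x1
MAY_WRITE = 0x2

@dataclass
class MDSCapGrant:
    read: bool
    write: bool
    path: str  # subtree prefix; "/" covers the whole tree

def _covers(grant_path: str, path: str) -> bool:
    """True if `path` lies inside the subtree rooted at `grant_path`."""
    if grant_path in ("", "/"):
        return True
    grant_path = grant_path.rstrip("/")
    return path == grant_path or path.startswith(grant_path + "/")

def check_access_permissions(paths: list[str], mask: int, grants: list[MDSCapGrant]) -> bool:
    """Return True only if every affected path is allowed for `mask`.

    `paths` is the list of paths an operation touches (two for rename or link),
    `mask` is MAY_READ and/or MAY_WRITE, `grants` are the client's capabilities.
    """
    for path in paths:
        allowed = False
        for g in grants:
            if not _covers(g.path, path):
                continue
            if mask & MAY_READ and not g.read:
                continue
            if mask & MAY_WRITE and not g.write:
                continue
            allowed = True
            break
        if not allowed:
            return False
    return True

if __name__ == "__main__":
    caps = [MDSCapGrant(read=True, write=True, path="/volumes/alice"),
            MDSCapGrant(read=True, write=False, path="/shared")]
    print(check_access_permissions(["/volumes/alice/db"], MAY_WRITE, caps))    # True
    print(check_access_permissions(["/volumes/bob/secrets"], MAY_READ, caps))  # False
    # a rename across subtrees touches both paths, so both must pass:
    print(check_access_permissions(["/shared/a", "/volumes/alice/a"], MAY_WRITE, caps))  # False
```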

Can we talk about use cases?

--- please

The primary use case that I know of is allowing multiple untrusted virtual machines to be CephFS clients, without giving them the ability to step on each others' toes.
It is already possible to segregate the file data into separate object pools, but the MDS has a single permission covering all clients.
This means that an untrusted client can add, remove, rename, etc. files that belong to any other untrusted client anywhere under the filesystem root, even if it only has access to the object store for a sub-path of that filesystem.
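
To make the gap concrete, here is a toy model (not Ceph code) of the current situation: OSD caps can be scoped to a per-tenant data pool, but the MDS permission is effectively all-or-nothing, so any authenticated CephFS client passes the metadata check for any path. The pool names and helper functions are invented for illustration:

```python
# Toy model of the current gap: data access is scoped per pool via OSD caps,
# but metadata access is a single yes/no for the whole filesystem.

OSD_CAPS = {
    "client.alice": {"alice-data"},  # pools this client may read/write
    "client.bob":   {"bob-data"},
}

MDS_CAPS = {
    "client.alice": True,  # today: either you can use the MDS or you can't
    "client.bob":   True,
}

def osd_allows(client: str, pool: str) -> bool:
    return pool in OSD_CAPS.get(client, set())

def mds_allows(client: str, path: str) -> bool:
    # The path argument is never consulted today -- hence the problem.
    return MDS_CAPS.get(client, False)

# alice cannot touch bob's file *data* (it lives in a pool she has no caps for) ...
print(osd_allows("client.alice", "bob-data"))           # False
# ... but she can still rename or unlink bob's files, because the metadata
# check has no notion of a sub-path:
print(mds_allows("client.alice", "/volumes/bob/file"))  # True
```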
