h1. 1G - Client Security for CephFS

h3. Live Pad

The live pad can be found here: "[pad]":http://pad.ceph.com/p/Client_Security_for_CephFS

h3. Summit Snapshot

Coding tasks

# Add support for adding/showing/... the new capabilities to the authentication layer (?)
# Add checks against the new capabilities in the MDS layer.
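
As a rough illustration of the first coding task, here is a minimal sketch of how a path-restricted MDS capability might be represented once parsed, assuming a path= qualifier analogous to the existing pool= qualifier in OSD caps. The capability string and all type names below (MDSCapSpec, MDSCapGrant, MDSAuthCaps) are hypothetical, not an agreed design.

<pre><code class="cpp">
// Illustrative only: a parsed form of a hypothetical MDS capability such as
//   mds 'allow rw path=/volumes/projectA'
// mirroring the way OSD caps are parsed into per-grant structures.
#include <string>
#include <vector>

struct MDSCapSpec {
  bool read = false;   // may look up / read metadata under the path
  bool write = false;  // may create / rename / unlink under the path
};

struct MDSCapGrant {
  MDSCapSpec spec;
  std::string path_prefix;  // "" or "/" would mean the whole namespace
};

struct MDSAuthCaps {
  std::vector<MDSCapGrant> grants;  // a client key may carry several grants
};
</code></pre>

The authentication layer would then need to parse, store, and display strings of this form through the existing 'ceph auth' machinery; the exact grammar is an open question.
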

Build / release tasks

# ???

Documentation tasks

# Document any new capabilities.
# Document any added 'ceph' or 'cephfs' sub-commands.
# ???

Deprecation tasks

# n/a

Notes from the discussion:

p(. - Short term, use pools to segregate different authenticated parties, for both file and metadata storage
    - Namespaces may be worth considering in the future, but not at this time (they don't exist yet)
    - Performance shouldn't be worse than NFS, or the world will be angry, but no details at this time
    - There is a limit to the number of pools that are viable to use
    -- this limits the scope and scale of securing separate clients
    -- namespaces may end up with permissions across them, separate from pools and object prefixes. Under discussion.
    -- namespaces would avoid pool limits, but renames across clients become a problem
    -- Permissions per TB of data would scale well, permissions per MB of data would be excessive
    -- with more pools you need more OSDs, and you have to manage your clients' placement groups per OSD
    - Next steps
    -- Not an Inktank priority; it would be done by third-party dispatchers
    -- (see coding tasks above)
    -- insert a check_access_permissions() call in the obvious, appropriate place (see the sketch after these notes)
    --- parameters: either a list of paths or a message
    --- results: yes/no for all operations, or per read/write operation
    --- path, mask, yes/no; pattern match against the capabilities
    --- model the permission checking on the OSD caps class used in the OSD
    -- Not a large project
    -- mostly testing
    -- MDS handles an open inodes list, for open files that have been unlinked.
    --- stray directory
    --- Give everyone permission to the stray directory in the short term, to avoid tracking who deleted inodes
    -- get the permission granularity correct
    -- test everything
    --- regression in other areas is not hard to test
    --- we have no tests of this yet
    --- boundary conditions around hard links
    --- creating as well as using hard links
    --- permissions for following and creating symlinks
    --- hardlinks to data stored in another pool
    - Opinions: "We need more fine-grained permissions than per-pool". Use case follows:
    -- using OpenStack Nova
    -- large number of projects
    -- allocated shared storage per project
    -- currently on GlusterFS: not scaling, because finer-grained permissions are not available
    -- they want one pool with different permissions for directories within the pool
    -- Think about the NFS shared export model
    -- This use case is not top of the list, because RADOS doesn't have a good mechanism for this yet
    -- Namespaces might be an answer for this eventually, but Ceph/RADOS isn't there yet
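
As referenced in the notes above, here is a minimal sketch of what the check_access_permissions() hook could look like, assuming the hypothetical path-restricted grants from the earlier sketch. The prefix-matching rule and the MAY_READ/MAY_WRITE mask bits are assumptions, loosely modeled on how the OSD checks its own caps, not a final design.

<pre><code class="cpp">
// Illustrative only: the shape of a check_access_permissions() hook in the
// MDS, using the hypothetical grant structures from the sketch above.
#include <string>
#include <vector>

struct MDSCapSpec { bool read = false; bool write = false; };
struct MDSCapGrant { MDSCapSpec spec; std::string path_prefix; };
struct MDSAuthCaps { std::vector<MDSCapGrant> grants; };

enum { MAY_READ = 1, MAY_WRITE = 2 };  // assumed access mask bits

// True if 'path' lies under 'prefix' on a whole-component boundary,
// so a grant on /foo covers /foo and /foo/bar but not /foobar.
static bool path_covered(const std::string &prefix, const std::string &path) {
  if (prefix.empty() || prefix == "/")
    return true;                                   // unrestricted grant
  if (path.compare(0, prefix.size(), prefix) != 0)
    return false;
  return path.size() == prefix.size() || path[prefix.size()] == '/';
}

// Returns yes/no for the requested operation, as discussed in the notes:
// the request is allowed if any grant held by the client covers the path
// with the required read/write bits.
bool check_access_permissions(const MDSAuthCaps &caps,
                              const std::string &path, unsigned mask) {
  for (const auto &g : caps.grants) {
    if ((mask & MAY_READ) && !g.spec.read)
      continue;
    if ((mask & MAY_WRITE) && !g.spec.write)
      continue;
    if (path_covered(g.path_prefix, path))
      return true;
  }
  return false;
}
</code></pre>

Operations that touch more than one path (rename, link) would presumably need to run every affected path through this check, which is where the hard-link and cross-pool boundary conditions listed above come from.
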

Can we talk about use cases?

p(. --- please

The primary use case that I know of is allowing multiple untrusted virtual machines to be CephFS clients, without giving them the ability to step on each other's toes.
It is already possible to segregate the filesystem's data into separate object pools, but the MDS has a single permission for all clients.
This means that an untrusted client can add, remove, rename, etc., files that belong to any other untrusted client anywhere under the filesystem root (I'm sure I have the terminology wrong), even if it only has access to the object store for a sub-path of that filesystem.
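
To tie this use case back to the sketches above: with per-client path grants, an untrusted VM's metadata operations outside its own subtree would simply fail the check. A hypothetical usage, reusing the definitions from the sketch after the discussion notes (the paths and cap contents are made up for illustration):

<pre><code class="cpp">
// Hypothetical example reusing MDSAuthCaps / check_access_permissions()
// from the sketch above. Each untrusted VM would hold a grant limited to
// its own subtree (alongside OSD caps limited to its own data pool).
int main() {
  MDSAuthCaps vm1_caps;
  vm1_caps.grants.push_back({ {true, true}, "/volumes/vm1" });  // rw on its own subtree only

  bool own   = check_access_permissions(vm1_caps, "/volumes/vm1/disk.img",
                                        MAY_READ | MAY_WRITE);  // allowed
  bool other = check_access_permissions(vm1_caps, "/volumes/vm2/disk.img",
                                        MAY_WRITE);             // denied
  return (own && !other) ? 0 : 1;
}
</code></pre>
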