h1. -1G - Client Security for CephFS
<pre>
rturk	don't worry, nobody is in track 2 either :)  I think our lunch break was a bit short	12:29
rturk	most of the inktank devs in LA are still in the lunch room	12:29
rturk	I'll go get them	12:29
scuttlemonkey	hehe	12:29
scuttlemonkey	gregaf: we'll definitely want you for this one	12:31
rturk	he is on the way	12:31
gregaf	yeah, sorry, we were all shoveling food into our faces	12:31
gregaf	same hangout url?	12:31
scuttlemonkey	nah, ended up having to kill it...sent new	12:31
gregaf	ah, got it	12:31
*** sjustlaptop has joined #ceph-summit1	12:33
saras	boo	12:35
gregaf	yes	12:35
saras	at least I did not mess anything up	12:37
saras	someone please define pg and osd	12:40
*** paravoid has joined #ceph-summit1	12:40
dmick	pg: placement group.  Sort of the first level of sharding objects across servers: grossly, an object hashes into a pg, and then the pg is placed	12:40
dmick	OSD: object storage daemon.  The intelligent storage entity that maintains replication and fault tolerance	12:41
dmick	and does most of the I/O	12:41
saras	dmick: thanks very much	12:41
rturk	(glossary coming in the docs very soon, I saw a sneak preview)	12:41
*** cdl has quit IRC	12:42
*** cdl has joined #ceph-summit1	12:43
paravoid	our use case won't be covered by this	12:46
paravoid	the "perms per TB of data"	12:46
paravoid	that's a shame :/	12:46
saras	paravoid: what would you need	12:46
paravoid	so basically we have a multi-tenant openstack nova setup and we'd like to give a shared filesystem per tenant, but over multiple VMs	12:47
Ryan_Lane	a similar use case to sharing directories via nfs	12:48
paravoid	hundreds of tenants (we call them "projects"), so a pool per project is excessive	12:48
paravoid	oh Ryan_Lane you're here	12:48
saras	paravoid: could you join the hangout please	12:48
Ryan_Lane	where x set of instances can read, and another set can write, etc	12:49
Ryan_Lane	just think NFS exports :)	12:49
paravoid	Ryan_Lane: want to do that? I think you'll be much better at it	12:49
Ryan_Lane	hm	12:49
Ryan_Lane	I'm in a coworking space. I think I can, though	12:49
*** loicd has joined #ceph-summit1	12:50
Ryan_Lane	how do I join?	12:50
paravoid	I think rturk or scuttlemonkey have to invite you	12:50
paravoid	or maybe saras?	12:50
rturk	scuttlemonkey has a link	12:50
scuttlemonkey	yeah, sec	12:50
saras	not me here	12:50
scuttlemonkey	pm'd Ryan	12:51
saras	anyone in the hangout could copy the link into the chat	12:51
saras	that works	12:51
scuttlemonkey	yep	12:51
Ryan_Lane	joined	12:51
paravoid	the discussion seems to be very technical now, maybe they'll leave some space for comments at the end	12:52
* Ryan_Lane nods	12:52
Ryan_Lane	so, I missed part of this	12:52
Ryan_Lane	what was their current idea on how to handle it?	12:53
paravoid	I missed half of it too, but there's this at the pad:     -- Permissions per TB of data would scale well, permissions per MB of data would be excessive	12:53
saras	sounds like they're looking at disk drive size	12:54
Ryan_Lane	I don't think our use case is abnormal	12:54
paravoid	Ryan_Lane is a colleague of mine	12:56
scuttlemonkey	are there specific questions now that we're in the last 5 mins?	12:56
scuttlemonkey	ahh!	12:56
paravoid	we both work at wikimedia	12:56
scuttlemonkey	cool	12:56
gregaf	we have 4 minutes left, any questions we can address?	12:56
paravoid	and while I've been doing most of the ceph work so far, he's much better suited to talk about our cephfs plans	12:56
Ryan_Lane	I'm working with openstack nova, which is why we need the multi-tenancy :)	12:56
paravoid	we're expanding our use cases basically :)	12:56
AaronSchulz	paravoid: we have cephfs plans?	12:57
paravoid	AaronSchulz: labs	12:57
AaronSchulz	news to me :)	12:57
paravoid	replacement for gluster basically	12:57
paravoid	well, not formalized plans, it'd just be nice to reuse our expertise	12:58
saras	please	12:58
</pre>
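
For readers skimming the log, the mapping dmick describes above (an object hashes into a placement group, and the placement group is then placed on a set of OSDs) can be inspected directly on a running cluster with the standard CLI. A minimal sketch, assuming an existing cluster; "mypool" and "myobject" are placeholder names, not anything from this session:

<pre>
# Show the object -> PG -> OSD mapping: the object name is hashed into a
# placement group within the pool, and that PG is then placed on a set of
# OSDs. "mypool" and "myobject" are placeholders for an existing pool/object.
ceph osd map mypool myobject

# List the OSDs (the daemons that hold the data and do most of the I/O).
ceph osd tree
</pre>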
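The per-directory, NFS-export-style access that paravoid and Ryan_Lane ask for was still an open design question in this session. Purely for orientation, later Ceph releases added path-scoped cephx capabilities along the lines of the sketch below; the filesystem, client, pool, and path names are illustrative placeholders and the exact syntax varies by release:

<pre>
# Restrict a tenant's client key to its own subtree of the filesystem
# (capability style from later releases; all names below are placeholders).
ceph fs authorize cephfs client.projectA /projects/projectA rw

# Roughly equivalent explicit capability grant.
ceph auth get-or-create client.projectA \
    mon 'allow r' \
    mds 'allow rw path=/projects/projectA' \
    osd 'allow rw pool=cephfs_data'
</pre>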