Client Security for CephFS
rturk don't worry, nobody is in track 2 either :) I think our lunch break was a bit short 12:29
rturk most of the inktank devs in LA are still in the lunch room 12:29
rturk I'll go get them 12:29
scuttlemonkey hehe 12:29
scuttlemonkey gregaf: we'll definitely want you for this one 12:31
rturk he is on the way 12:31
gregaf yeah, sorry, we were all shoveling food into our faces 12:31
gregaf same hangout url? 12:31
scuttlemonkey nah, ended up having to kill it...sent new 12:31
gregaf ah, got it 12:31
*** sjustlaptop has joined #ceph-summit1 12:33
saras boo 12:35
gregaf yes 12:35
saras at least I did not mess anything up 12:37
saras someone please define pg and osd 12:40
*** paravoid has joined #ceph-summit1 12:40
dmick pg: placement group. Sort of the first level of sharding objects across servers: grossly, an object hashes into a pg, and then the pg is placed 12:40
dmick OSD: object storage daemon. The intelligent storage entity that maintains replication and fault tolerance 12:41
dmick and does most of the I/O 12:41
saras dmick: thanks very much 12:41
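For readers following along, here is a minimal Python sketch of the two-level mapping dmick describes: an object hashes into a placement group, and the PG is then placed on a set of OSDs. The hash and the round-robin placement are simplified stand-ins, not Ceph's actual rjenkins hash or CRUSH algorithm.

```python
# Illustrative sketch (not Ceph's real code): object -> PG -> OSDs.
import hashlib

PG_NUM = 64      # PGs in the pool (a power of two in practice)
NUM_OSDS = 12    # OSDs in the cluster
REPLICAS = 3     # replication factor

def object_to_pg(object_name: str) -> int:
    """First level of sharding: hash the object name into a PG."""
    h = int(hashlib.md5(object_name.encode()).hexdigest(), 16)
    return h % PG_NUM

def pg_to_osds(pg: int) -> list[int]:
    """Second level: place the PG on REPLICAS distinct OSDs.
    Real Ceph uses CRUSH here; a round-robin stand-in suffices
    to show that placement is a function of the PG, not the object."""
    return [(pg + i) % NUM_OSDS for i in range(REPLICAS)]

pg = object_to_pg("myfile.dat")
print(f"object -> pg {pg} -> osds {pg_to_osds(pg)}")
```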
rturk (glossary coming in the docs very soon, I saw a sneak preview) 12:41
*** cdl has quit IRC 12:42
*** cdl has joined #ceph-summit1 12:43
paravoid our use case won't be covered by this 12:46
paravoid the "perms per TB of data" 12:46
paravoid that's a shame :/ 12:46
saras paravoid: what would you need 12:46
paravoid so basically we have a multi-tenant openstack nova setup and we'd like to give a shared filesystem per tenant, but over multiple VMs 12:47
Ryan_Lane a similar use case to sharing directories via nfs 12:48
paravoid hundreds of tenants (we call them "projects"), so a pool per project is excessive 12:48
paravoid oh Ryan_Lane you're here 12:48
saras paravoid: could you join the hangout please 12:48
Ryan_Lane where x set of instances can read, and another set can write, etc 12:49
Ryan_Lane just think NFS exports :) 12:49
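A hypothetical sketch of what the NFS-export-style setup Ryan_Lane describes could look like with path-restricted CephFS caps. The `path=` MDS cap syntax shown here landed in later Ceph releases; at the time of this session it was exactly the kind of granularity under discussion. The tenant names, the `/tenants` directory layout, and the `cephfs_data` pool name are illustrative assumptions.

```python
# Hypothetical per-tenant, NFS-export-style access using path-restricted
# CephFS caps (syntax from later Ceph releases; names are assumptions).
TENANTS = ["wikimedia-labs", "analytics", "tools"]

def auth_command(tenant: str, readonly: bool = False) -> str:
    """Build a 'ceph auth get-or-create' command that confines the
    client key to one tenant's subtree, read-write or read-only."""
    mode = "r" if readonly else "rw"
    return (
        f"ceph auth get-or-create client.{tenant} "
        f"mon 'allow r' "
        f"mds 'allow {mode} path=/tenants/{tenant}' "
        f"osd 'allow {mode} pool=cephfs_data'"
    )

for tenant in TENANTS:
    print(auth_command(tenant))        # read-write key for the tenant's VMs
    print(auth_command(tenant, True))  # read-only key, like an ro NFS export
```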
paravoid Ryan_Lane: want to do that? I think you'll be much better at it 12:49
Ryan_Lane hm 12:49
Ryan_Lane I'm in a coworking space. I think I can, though 12:49
*** loicd has joined #ceph-summit1 12:50
Ryan_Lane how do I join? 12:50
paravoid I think rturk or scuttlemonkey have to invite you 12:50
paravoid or maybe saras? 12:50
rturk scuttlemonkey has a link 12:50
scuttlemonkey yeah, sec 12:50
saras not me here 12:50
scuttlemonkey pm'd Ryan 12:51
saras anyone in the hangout could copy the link into the chat 12:51
saras that works 12:51
scuttlemonkey yep 12:51
Ryan_Lane joined 12:51
paravoid the discussion seems to be very technical now, maybe they'll leave some space for comments at the end 12:52
* Ryan_Lane nods 12:52
Ryan_Lane so, I missed part of this 12:52
Ryan_Lane what was their current idea on how to handle it? 12:53
paravoid I missed half of it too, but there's this at the pad: -- Permissions per TB of data would scale well, permissions per MB of data would be excessive 12:53
saras sounds like they're looking at disk drive sizes 12:54
Ryan_Lane I don't think our use case is abnormal 12:54
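paravoid's earlier point that "a pool per project is excessive" comes down to PG arithmetic: every pool carries its own placement groups, and it is the per-OSD PG count that has to stay bounded. A rough calculation with assumed numbers:

```python
# Rough arithmetic (illustrative assumptions) behind "a pool per project
# is excessive" for hundreds of tenants.
TENANTS = 500        # "hundreds of tenants"
PGS_PER_POOL = 128   # a modest pg_num per pool
REPLICAS = 3
NUM_OSDS = 100

pg_instances = TENANTS * PGS_PER_POOL * REPLICAS
print(f"PG instances per OSD: {pg_instances / NUM_OSDS:.0f}")
# ~1920 PG instances per OSD, far above the low hundreds usually
# recommended, which is why per-directory caps scale better than
# per-pool isolation for this use case.
```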
paravoid Ryan_Lane is a colleague of mine 12:56
scuttlemonkey are there specific questions now that we're in the last 5 mins 12:56
scuttlemonkey ahh! 12:56
paravoid we both work at wikimedia 12:56
scuttlemonkey cool 12:56
gregaf we have 4 minutes left, any questions we can address? 12:56
paravoid and while I've been doing most of the ceph work so far, he's much better suited to talk about our cephfs plans 12:56
Ryan_Lane I'm working with openstack nova, which is why we need the multi-tenancy :) 12:56
paravoid we're expanding our use cases basically :) 12:56
AaronSchulz paravoid: we have cephfs plans? 12:57
paravoid AaronSchulz: labs 12:57
AaronSchulz news to me :) 12:57
paravoid replacement for gluster basically 12:57
paravoid well, not formalized plans, it'd just be nice to reuse our expertise 12:58
saras please 12:58