Feature #1401


Support mutually untrusting clients using the same Ceph cluster

Added by Anonymous over 12 years ago. Updated over 12 years ago.

Status: Closed
Priority: Low
Assignee: -
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Reviewed:
Affected Versions:
Pull request ID:

Description

QA & document this feature -- all the necessary code might already be there.

From the mailing list:

2011/8/16 Maciej Gałkiewicz <maciejgalkiewicz@ragnarson.com>:
> I have a storage cluster based on glusterfs. Each client has its own
> volume and is not able to mount any other. Is it possible to
> implement such separation with ceph? I have read about authentication
> and mounting subdirectories. I am not sure if I can configure
> different login/pass for selected directories stored in ceph. All
> clients are running on Xen domUs (virtual machines). Maybe there is
> some other way to achieve this?

This seems to be possible with Ceph as it is now, but it is definitely
not the normal setup. As in, test carefully and understand you're
going off the beaten path.

To avoid confusion with the word "client", I'll call these
mutually-untrusting domains "customers". Each customer can have
multiple clients.

It builds up something like this (a command sketch follows the list):

1. Each customer naturally needs separate keys (really, every client
should have a separate key).
2. Because clients talk to OSDs directly, you need to have different
trust domains use different pools (otherwise they can just read/write
each other's raw objects); use "cauthtool --cap osd 'allow rwx
pool=something'" on the client/customer key to specify who can write
where.
3. You probably want the root of your ceph filesystem stored in
pool=data, but give most clients just read-only access to it.
4. Use "cephfs /your/ceph/mountpoint/customerA set_location --pool
customerA" to tell the MDS what pool subtrees of your ceph filesystem
are stored in.
5. Tell clients to mount their part of the filesystem directly, using
your-ceph-mon-here:/customerA as the mount device.
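
Pulling the steps together, here is a minimal, untested command sketch.
The names (client.customerA, customerA.keyring, /mnt/ceph, the mount
point, the PG count) are illustrative, and exact tool flags may differ
between Ceph versions; treat this as a sketch of the steps above, not a
recipe.

  # 1-2. Generate a key for customerA, limited to its own pool
  #      (mon read access is needed just to talk to the cluster).
  cauthtool --create-keyring --gen-key -n client.customerA \
      --cap mon 'allow r' \
      --cap mds 'allow' \
      --cap osd 'allow rwx pool=customerA' \
      customerA.keyring
  ceph auth add client.customerA -i customerA.keyring

  # 3. Create the per-customer pool.
  ceph osd pool create customerA 64

  # 4. Point the customerA subtree at that pool (subcommand as quoted
  #    in step 4; check the cephfs tool's help for your version).
  mkdir /mnt/ceph/customerA
  cephfs /mnt/ceph/customerA set_location --pool customerA

  # 5. On the customer's machine, mount only that subtree.
  mount -t ceph your-ceph-mon-here:/customerA /mnt/customerA \
      -o name=customerA,secretfile=/etc/ceph/customerA.secret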

Disclaimer: not tested, not security audited, customers can still DoS
each other and do other nasty things, your mileage may vary, if you
break it we will probably help you fix at least one of the halves,
warning: this sign has sharp edges.

#1

Updated by Sage Weil over 12 years ago

The last piece is to restrict a client to only be allowed to mount a subtree on the mds. This means extending the mds caps syntax and enforcing the restriction on the mds.
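
For reference, a hedged sketch of what this subtree restriction could
look like; the syntax below is taken from much later Ceph releases (it
did not exist when this ticket was filed), and the filesystem and
client names are illustrative.

  # Path-restricted MDS cap, roughly:
  #   mds 'allow rw path=/customerA'
  # which the "ceph fs authorize" helper can generate in one step:
  ceph fs authorize cephfs client.customerA /customerA rw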

#2

Updated by Sage Weil over 12 years ago

  • Target version set to 12

#3

Updated by Anonymous over 12 years ago

Copy-pasting IRC conversation for the record:


Tv: sagewk: regarding #1401, soo if the mds enforcement isn't there, but you can't access the underlying pool... that means you can still run find / and see other customers' data?
sagewk: you could still mount / and see the file names, but not read them
Tv: ok
sagewk: and if the uids aren't distinct you could delete them, create new files, rename, etc.
sagewk: or you're root
gregaf: sagewk: aren't normal *nix perms enforced on the MDS?
sagewk: gregaf: nope
gregaf: or do our MDS caps not support only allowing specific user IDs right now
gregaf: ah
Tv: err durr so root@customerA gets to rm -rf /customerB
Tv: that's sad
sagewk: right.  that's why we need the mds caps piece to lock a client into a subdir
Tv: so what's the ceph-level "uid" thing that i've seen somewhere?
sagewk: the auid in the cephx caps?
Tv: yeah auid
gregaf: that's a rados-level authenticated user id
gregaf: it's how we can do the pool caps, but it's not related to *nix users
sagewk: just an id on the cephx user.  i think it's just used for pool ownership so far.
Tv: i'm not clear on the difference between having a different auid vs having a different key with different caps
sagewk: you can write the cap in terms of the auid, to say 'read and write any pool with owner=my_auid'
Tv: ahhh
sagewk: instead of explicitly listing the pools you own/created
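
To make the auid distinction concrete: the explicit form names each
pool in the cap, while the owner-based form grants access to whatever
the key's auid owns. A hedged sketch; the explicit-pool line matches
the caps used earlier in this ticket, but the auid line is only an
approximation of the old grammar (auid support was later removed from
Ceph) and the auid value is made up.

  # Explicit grant: the key lists the pools it may touch.
  osd 'allow rwx pool=customerA'

  # Owner-based grant (approximate syntax): any pool owned by this
  # key's auid, i.e. pools it created, without naming them one by one.
  osd 'allow rwx auid=4100'
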
#4

Updated by Anonymous over 12 years ago

Depends on #1402.

#5

Updated by Anonymous over 12 years ago

Tommi Virtanen wrote:

> Depends on #1402.

Make that #1237.

#6

Updated by Sage Weil over 12 years ago

  • Status changed from New to Closed

Don't think there's anything in here not covered by #1237.

#7

Updated by Sage Weil over 12 years ago

  • Target version deleted (12)