Bug #17937

file deletion permitted from pool Y from client mount with pool=X capabilities

Added by David Disseldorp over 7 years ago. Updated over 7 years ago.

Status:
Won't Fix
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
kcephfs
Component(FS):
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

I'm not sure whether this is a bug per se, but IMO it certainly falls into the
strange behaviour basket.

If I have a filesystem with two pools assigned to it, say X and Y, and limit a
client to pool X (osd 'allow rw pool=X'), then reads from files within pool Y
fail with EPERM. However, the same client can successfully delete such files,
and the underlying RADOS objects are removed as well.

MDS path restrictions are not used, so perhaps pool/namespace restrictions
are not supported without a corresponding MDS path restriction?

Test procedure:
- start a cluster with an MDS and provision a filesystem
+ e.g. OSD=3 MON=1 RGW=0 MDS=2 ./vstart.sh -i 192.168.155.1 --new
- create a new pool and add it to the FS
+ rados mkpool new_pool
+ ceph fs add_data_pool cephfs_a new_pool
+ ceph fs ls
- mount the FS using the admin keyring
+ mount -t ceph 192.168.155.1:6789:/ /mnt/cephfs -o name=admin,secret=...
- create a file and assign it to new_pool
+ touch /mnt/cephfs/admin_only
+ setfattr -n ceph.file.layout.pool -v "new_pool" /mnt/cephfs/admin_only
- add some data to the file
+ dd if=/dev/zero of=/mnt/cephfs/admin_only bs=1M count=10 && sync
- create a "non-admin" client key that doesn't have access to "new_pool"
+ ceph auth get-or-create client.non-admin mds 'allow rw' mon 'allow rw' osd 'allow rw pool=cephfs_data_a'
- mount the filesystem using the "non-admin" creds
+ mount -t ceph 192.168.155.1:6789:/ /mnt/cephfs -o name=non-admin,secret=...
- attempt to read the admin-only file
+ cat /mnt/cephfs/admin_only
  cat: /mnt/cephfs/admin_only: Operation not permitted
- attempt to delete the same file
+ rm /mnt/cephfs/admin_only
- check whether the RADOS objects backing the file are still there
+ rados ls -p new_pool
  (no objects)

I would have expected the MDS to allow the file removal to succeed (given that there are no MDS path restrictions), but I would never have thought that the objects backing the file would also be removed.

#1

Updated by John Spray over 7 years ago

  • Status changed from New to Won't Fix

MDS path restrictions are not used, so perhaps pool/namespace restrictions are not supported without a corresponding MDS path restriction?

That's correct -- pool caps restrict what the client can do on the OSDs (i.e. read/write objects), and path caps restrict what they can do on the MDS (e.g. unlinking files). The ultimate removal of objects from unlinked inodes is done by the MDS, so the client's OSD auth caps do not affect that.
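For comparison, a rough sketch of caps that also carry an MDS path restriction, so that the unlink itself would be denied outside the permitted tree; the client name "restricted" and the path "/restricted" below are made up for illustration and are not taken from this report:

+ ceph auth get-or-create client.restricted mds 'allow rw path=/restricted' mon 'allow r' osd 'allow rw pool=cephfs_data_a'
+ mount -t ceph 192.168.155.1:6789:/restricted /mnt/cephfs -o name=restricted,secret=...

With caps like these, an rm on a file outside /restricted should fail at the MDS with EPERM, rather than the unlink succeeding while only the OSD reads/writes are blocked.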

#2

Updated by David Disseldorp over 7 years ago

John Spray wrote:

MDS path restrictions are not used, so perhaps pool/namespace restrictions are not supported without a corresponding MDS path restriction?

That's correct -- pool caps restrict what the client can do on the OSDs (i.e. read/write objects), and path caps restrict what they can do on the MDS (e.g. unlinking files). The ultimate removal of objects from unlinked inodes is done by the MDS, so the client's OSD auth caps do not affect that.

Thanks for the clarification, John. In that case I'll submit an update for http://docs.ceph.com/docs/master/cephfs/client-auth/#osd-restriction that includes a note about the delete behaviour.
