Bug #2753
Status: Closed
Writes to mounted Ceph FS fail silently if client has no write capability on data pool
Description
Originally reported in http://marc.info/?l=ceph-devel&m=134151023912148&w=2:
How to reproduce (this is on a 3.2.0 kernel):
1. Create a client, mine is named "test", with the following capabilities:
client.test
        key: <key>
        caps: [mds] allow
        caps: [mon] allow r
        caps: [osd] allow rw pool=testpool
Note the client only has access to a single pool, "testpool".
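For reference, a client with these capabilities can be created with ceph auth get-or-create (a sketch; the client name, pool, and keyring path match the ones used in this report):

```shell
# Create a restricted client keyring: mds allow, mon read-only,
# osd read/write limited to the "testpool" pool only.
ceph auth get-or-create client.test \
    mds 'allow' \
    mon 'allow r' \
    osd 'allow rw pool=testpool' \
    -o /etc/ceph/test.keyring
```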
2. Export the client's secret and mount a Ceph FS.
mount -t ceph -o name=test,secretfile=/etc/ceph/test.secret daisy,eric,frank:/ /mnt
This succeeds, despite us not even having read access to the "data" pool.
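The missing access can be confirmed directly with the rados tool, bypassing the filesystem layer entirely (a sketch, assuming the keyring exported above):

```shell
# Listing objects in "testpool" works with the restricted key...
rados --id test --keyring /etc/ceph/test.keyring -p testpool ls

# ...but the same operation against the "data" pool should be
# refused, since client.test holds no caps on that pool.
rados --id test --keyring /etc/ceph/test.keyring -p data ls
```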
3. Write something to a file.
root@alice:/mnt# echo "hello world" > hello.txt
root@alice:/mnt# cat hello.txt
This too succeeds.
4. Sync and clear caches.
root@alice:/mnt# sync
root@alice:/mnt# echo 3 > /proc/sys/vm/drop_caches
5. Check file size and contents.
root@alice:/mnt# ls -la
total 5
drwxr-xr-x  1 root root    0 Jul  5 17:15 .
drwxr-xr-x 21 root root 4096 Jun 11 09:03 ..
-rw-r--r--  1 root root   12 Jul  5 17:15 hello.txt
root@alice:/mnt# cat hello.txt
root@alice:/mnt#
Note that the reported file size is unchanged, but the file is empty.
Checking the "data" pool with client.admin credentials shows that the
pool is empty, so the objects were never written. Interestingly,
"cephfs hello.txt show_location" does list an object_name, identifying
an object which doesn't exist.
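That mismatch can be demonstrated with rados as well: stat-ing the object that show_location names fails, because the object was never written (a sketch; <object_name> stands for whatever object_name cephfs reports, and the commands assume client.admin credentials):

```shell
# Ask the kernel client which object backs the file's first stripe.
cephfs /mnt/hello.txt show_location

# With admin credentials, try to stat that object in the "data" pool.
# This reports an error rather than an object size, confirming the
# write never reached the OSDs.
rados -p data stat <object_name>
```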
Is there any way to make the client fail with -EIO, -EPERM,
-EOPNOTSUPP or whatever else is appropriate, rather than pretending to
write when it can't?