Bug #57154
kernel/fuse client using ceph ID with uid restricted MDS caps cannot update caps
Description
A kclient sends cap updates as caller_uid:caller_gid 0:0. A FUSE client sends cap updates as caller_uid:caller_gid -1:-1. If either client uses a ceph ID with uid restricted MDS caps, the cap updates are dropped by the MDS.
I restricted the kclient's MDS caps to a specific uid, and the cap updates were sent as uid=0 and dropped by the MDS.
$ ./bin/ceph auth get-or-create client.uid_1000 mon 'allow r' mds 'allow r, allow rw path=/testdir00 uid=1000' osd 'allow rw'
$ sudo ./bin/mount.ceph uid_1000@.a=/ /mnt/cephfs1/
$ cd /mnt/cephfs1/
$ cd testdir00
$ date > testfile.txt
$ cat testfile.txt
Fri Aug 5 04:35:34 PM EDT 2022
$ stat testfile.txt
  File: testfile.txt
  Size: 32    Blocks: 1    IO Block: 4194304    regular file
Device: 3eh/62d    Inode: 1099511628778    Links: 1
Access: (0664/-rw-rw-r--)  Uid: ( 1000/ rraja)   Gid: ( 1000/ rraja)
$ su -c 'echo 3 > /proc/sys/vm/drop_caches'
$ cat testfile.txt
$ # file is empty
In the MDS logs, I see that the cap updates were dropped
2022-08-05T16:35:36.447-0400 7fbbb7791640 1 -- [v2:10.0.0.148:6810/1949999487,v1:10.0.0.148:6811/1949999487] <== client.4155 10.0.0.148:0/779216324 422 ==== client_caps(update ino 0x100000003ea 1 seq 1 tid 3 caps=pAsxLsXsxFsxcrwb dirty=Fw wanted=pAsxXsxFxwb follows 1 size 32/0 mtime 2022-08-05T16:35:34.681028-0400 xattrs(v=18446630786367633444 l=0)) v10 ==== 236+0+0 (crc 0 0 0) 0x56078e6afc00 con 0x56078e60a400
2022-08-05T16:35:36.447-0400 7fbbb7791640 7 mds.0.locker handle_client_caps on 0x100000003ea tid 3 follows 1 op update flags 0x1
2022-08-05T16:35:36.447-0400 7fbbb7791640 20 mds.0.3 get_session have 0x56078e5d8000 client.4155 10.0.0.148:0/779216324 state open
2022-08-05T16:35:36.447-0400 7fbbb7791640 10 mds.0.locker head inode [inode 0x100000003ea [2,head] /testdir00/testfile.txt auth v6 s=0 n(v0 1=1+0) (iauth excl) (ifile excl) (ixattr excl) (iversion lock) cr={4155=0-4194304@1} caps={4155=pAsxLsXsxFsxcrwb/pAsxXsxFxwb@1},l=4155 | request=1 caps=1 0x56078e5a1700]
2022-08-05T16:35:36.447-0400 7fbbb7791640 10 mds.0.cache pick_inode_snap follows 1 on [inode 0x100000003ea [2,head] /testdir00/testfile.txt auth v6 s=0 n(v0 1=1+0) (iauth excl) (ifile excl) (ixattr excl) (iversion lock) cr={4155=0-4194304@1} caps={4155=pAsxLsXsxFsxcrwb/pAsxXsxFxwb@1},l=4155 | request=1 caps=1 0x56078e5a1700]
2022-08-05T16:35:36.447-0400 7fbbb7791640 10 mds.0.locker follows 1 retains pAsxLsXsxFsxcrwb dirty Fw on [inode 0x100000003ea [2,head] /testdir00/testfile.txt auth v6 s=0 n(v0 1=1+0) (iauth excl) (ifile excl) (ixattr excl) (iversion lock) cr={4155=0-4194304@1} caps={4155=pAsxLsXsxFsxcrwb/pAsxXsxFxwb@1},l=4155 | request=1 caps=1 0x56078e5a1700]
2022-08-05T16:35:36.447-0400 7fbbb7791640 7 mds.0.locker flush client.4155 dirty Fw seq 1 on [inode 0x100000003ea [2,head] /testdir00/testfile.txt auth v6 s=0 n(v0 1=1+0) (iauth excl) (ifile excl) (ixattr excl) (iversion lock) cr={4155=0-4194304@1} caps={4155=pAsxLsXsxFsxcrwb/pAsxXsxFxwb@1},l=4155 | request=1 caps=1 0x56078e5a1700]
2022-08-05T16:35:36.447-0400 7fbbb7791640 10 mds.0.locker _do_cap_update dirty Fw issued pAsxLsXsxFsxcrwb wanted pAsxXsxFxwb on [inode 0x100000003ea [2,head] /testdir00/testfile.txt auth v6 s=0 n(v0 1=1+0) (iauth excl) (ifile excl) (ixattr excl) (iversion lock) cr={4155=0-4194304@1} caps={4155=pAsxLsXsxFsxcrwb/pAsxXsxFxwb@1},l=4155 | request=1 caps=1 0x56078e5a1700]
2022-08-05T16:35:36.447-0400 7fbbb7791640 20 mds.0.locker inode is file
2022-08-05T16:35:36.447-0400 7fbbb7791640 20 mds.0.locker client has write caps; m->get_max_size=0; old_max=4194304
2022-08-05T16:35:36.447-0400 7fbbb7791640 20 mds.0.3 get_session have 0x56078e5d8000 client.4155 10.0.0.148:0/779216324 state open
2022-08-05T16:35:36.447-0400 7fbbb7791640 20 Session check_access path /testdir00/testfile.txt
2022-08-05T16:35:36.447-0400 7fbbb7791640 10 MDSAuthCap is_capable inode(path /testdir00/testfile.txt owner 1000:1000 mode 0100664) by caller 0:0 mask 2 new 0:0 cap: MDSAuthCaps[allow r, allow rw path="/testdir00" uid=1000]
2022-08-05T16:35:36.447-0400 7fbbb7791640 10 mds.0.locker check_access failed, dropping cap update on [inode 0x100000003ea [2,head] /testdir00/testfile.txt auth v6 s=0 n(v0 1=1+0) (iauth excl) (ifile excl) (ixattr excl) (iversion lock) cr={4155=0-4194304@1} caps={4155=pAsxLsXsxFsxcrwb/pAsxXsxFxwb@1},l=4155 | request=1 caps=1 0x56078e5a1700]
2022-08-05T16:35:36.447-0400 7fbbb7791640 10 mds.0.3 send_message_client_counted client.4155 seq 13 client_caps(flush_ack ino 0x100000003ea 1 seq 1 tid 3 caps=pAsxLsXsxFsxcrwb dirty=Fw wanted=- follows 0 size 0/0 mtime 0.000000) v12
2022-08-05T16:35:36.447-0400 7fbbb7791640 1 -- [v2:10.0.0.148:6810/1949999487,v1:10.0.0.148:6811/1949999487] --> 10.0.0.148:0/779216324 -- client_caps(flush_ack ino 0x100000003ea 1 seq 1 tid 3 caps=pAsxLsXsxFsxcrwb dirty=Fw wanted=- follows 0 size 0/0 mtime 0.000000) v12 -- 0x56078e6ae000 con 0x56078e60a400
2022-08-05T16:35:36.447-0400 7fbbb7791640 10 mds.0.locker eval 3648 [inode 0x100000003ea [2,head] /testdir00/testfile.txt auth v6 s=0 n(v0 1=1+0) (iauth excl) (ifile excl) (ixattr excl) (iversion lock) cr={4155=0-4194304@1} caps={4155=pAsxLsXsxFsxcrwb/pAsxXsxFxwb@1},l=4155 | request=1 caps=1 0x56078e5a1700]
2022-08-05T16:35:36.447-0400 7fbbb7791640 7 mds.0.locker file_eval wanted=xwb loner_wanted=xwb other_wanted= filelock=(ifile excl) on [inode 0x100000003ea [2,head] /testdir00/testfile.txt auth v6 s=0 n(v0 1=1+0) (iauth excl) (ifile excl) (ixattr excl) (iversion lock) cr={4155=0-4194304@1} caps={4155=pAsxLsXsxFsxcrwb/pAsxXsxFxwb@1},l=4155 | request=1 caps=1 0x56078e5a1700]
2022-08-05T16:35:36.447-0400 7fbbb7791640 20 mds.0.locker is excl
2022-08-05T16:35:36.447-0400 7fbbb7791640 7 mds.0.locker file_eval loner_issued=sxcrwb other_issued= xlocker_issued=
2022-08-05T16:35:36.447-0400 7fbbb7791640 10 mds.0.locker simple_eval (iauth excl) on [inode 0x100000003ea [2,head] /testdir00/testfile.txt auth v6 s=0 n(v0 1=1+0) (iauth excl) (ifile excl) (ixattr excl) (iversion lock) cr={4155=0-4194304@1} caps={4155=pAsxLsXsxFsxcrwb/pAsxXsxFxwb@1},l=4155 | request=1 caps=1 0x56078e5a1700]
2022-08-05T16:35:36.448-0400 7fbbb7791640 10 mds.0.locker simple_eval (ilink sync) on [inode 0x100000003ea [2,head] /testdir00/testfile.txt auth v6 s=0 n(v0 1=1+0) (iauth excl) (ifile excl) (ixattr excl) (iversion lock) cr={4155=0-4194304@1} caps={4155=pAsxLsXsxFsxcrwb/pAsxXsxFxwb@1},l=4155 | request=1 caps=1 0x56078e5a1700]
2022-08-05T16:35:36.448-0400 7fbbb7791640 10 mds.0.locker simple_eval (ixattr excl) on [inode 0x100000003ea [2,head] /testdir00/testfile.txt auth v6 s=0 n(v0 1=1+0) (iauth excl) (ifile excl) (ixattr excl) (iversion lock) cr={4155=0-4194304@1} caps={4155=pAsxLsXsxFsxcrwb/pAsxXsxFxwb@1},l=4155 | request=1 caps=1 0x56078e5a1700]
2022-08-05T16:35:36.448-0400 7fbbb7791640 10 mds.0.locker eval done
2022-08-05T16:35:36.452-0400 7fbbbaf98640 1 -- [v2:10.0.0.148:6810/1949999487,v1:10.0.0.148:6811/1949999487] <== osd.0 v2:10.0.0.148:6802/245705 60 ==== osd_op_reply(60 200.00000001 [write 18875~3552 [fadvise_dontneed]] v32'52 uv52 ondisk = 0) v8 ==== 156+0+0 (crc 0 0 0) 0x56078e46d200 con 0x56078e552400
2022-08-05T16:35:36.452-0400 7fbbb1785640 10 MDSIOContextBase::complete: 18C_MDS_openc_finish
2022-08-05T16:35:36.452-0400 7fbbb1785640 10 MDSContext::complete: 18C_MDS_openc_finish
The same issue is hit with a FUSE client using uid restricted MDS caps, where the cap updates were sent as uid -1 and dropped by the MDS.
$ ./bin/ceph auth get client.uid_1000
[client.uid_1000]
	key = AQBkfO1iLX+bFRAARQOVtwR7/fnvgOlvNS5qkw==
	caps mds = "allow r, allow rw path=/testdir00 uid=1000"
	caps mon = "allow r"
	caps osd = "allow rw"
$ sudo ./bin/ceph-fuse --id=uid_1000 /mnt/ceph-fuse/
$ cd /mnt/ceph-fuse/testdir00/
$ date > testfuse01.txt
$ cat testfuse01.txt
Fri Aug 5 05:05:21 PM EDT 2022
$ cd /mnt
$ sudo umount /mnt/ceph-fuse/
$ sudo ./bin/ceph-fuse --id=uid_1000 /mnt/ceph-fuse/
$ cd /mnt/ceph-fuse/testdir00/
$ cat testfuse01.txt
$ # file is empty
In the MDS logs, I see the caps being dropped
2022-08-05T17:06:15.223-0400 7fbbb7791640 10 mds.0.locker head inode [inode 0x10000000004 [2,head] /testdir00/testfuse01.txt auth v20 DIRTYPARENT s=0 n(v0 1=1+0) (iversion lock) cr={4160=0-4194304@1} caps={4160=pAsLsXsFscr/pFscr@4},l=4160 | request=0 lock=0 caps=1 dirtyparent=1 dirty=1 authpin=0 0x56078e661b80]
2022-08-05T17:06:15.223-0400 7fbbb7791640 10 mds.0.locker follows 0 retains pAsLsXsFsc dirty - on [inode 0x10000000004 [2,head] /testdir00/testfuse01.txt auth v20 DIRTYPARENT s=0 n(v0 1=1+0) (iversion lock) cr={4160=0-4194304@1} caps={4160=pAsLsXsFsc/pFscr@4},l=4160 | request=0 lock=0 caps=1 dirtyparent=1 dirty=1 authpin=0 0x56078e661b80]
2022-08-05T17:06:15.223-0400 7fbbb7791640 10 mds.0.locker wanted pFscr -> -
2022-08-05T17:06:15.223-0400 7fbbb7791640 10 mds.0.locker _do_cap_update dirty - issued pAsLsXsFsc wanted - on [inode 0x10000000004 [2,head] /testdir00/testfuse01.txt auth v20 DIRTYPARENT s=0 n(v0 1=1+0) (iversion lock) cr={4160=0-4194304@1} caps={4160=pAsLsXsFsc/-@4},l=4160 | request=0 lock=0 caps=1 dirtyparent=1 dirty=1 authpin=0 0x56078e661b80]
2022-08-05T17:06:15.223-0400 7fbbb7791640 20 mds.0.locker inode is file
2022-08-05T17:06:15.223-0400 7fbbb7791640 10 mds.0.locker i want to change file_max, but lock won't allow it (yet)
2022-08-05T17:06:15.223-0400 7fbbb7791640 7 mds.0.locker file_excl (ifile sync) on [inode 0x10000000004 [2,head] /testdir00/testfuse01.txt auth v20 DIRTYPARENT s=0 n(v0 1=1+0) (iversion lock) cr={4160=0-4194304@1} caps={4160=pAsLsXsFsc/-@4},l=4160 | request=0 lock=0 caps=1 dirtyparent=1 dirty=1 authpin=0 0x56078e661b80]
2022-08-05T17:06:15.223-0400 7fbbb7791640 7 mds.0.locker get_allowed_caps loner client.4160 allowed=pAsLsXsFsxcrwb, xlocker allowed=pAsLsXsFsxcrwb, others allowed=pAsLsXs on [inode 0x10000000004 [2,head] /testdir00/testfuse01.txt auth v20 DIRTYPARENT s=0 n(v0 1=1+0) (ifile excl) (iversion lock) cr={4160=0-4194304@1} caps={4160=pAsLsXsFsc/-@4},l=4160 | request=0 lock=0 caps=1 dirtyparent=1 dirty=1 authpin=0 0x56078e661b80]
2022-08-05T17:06:15.223-0400 7fbbb7791640 20 mds.0.locker client.4160 pending pAsLsXsFsc allowed pAsLsXsFsxcrwb wanted -
2022-08-05T17:06:15.223-0400 7fbbb7791640 20 mds.0.locker !revoke and new|suppressed|stale, skipping client.4160
2022-08-05T17:06:15.223-0400 7fbbb7791640 20 mds.0.3 get_session have 0x56078e5d8000 client.4160 10.0.0.148:0/972546133 state open
2022-08-05T17:06:15.223-0400 7fbbb7791640 20 Session check_access path /testdir00/testfuse01.txt
2022-08-05T17:06:15.223-0400 7fbbb7791640 10 MDSAuthCap is_capable inode(path /testdir00/testfuse01.txt owner 1000:1000 mode 0100664) by caller 4294967295:4294967295 mask 2 new 0:0 cap: MDSAuthCaps[allow r, allow rw path="/testdir00" uid=1000]
2022-08-05T17:06:15.223-0400 7fbbb7791640 10 mds.0.locker check_access failed, dropping cap update on [inode 0x10000000004 [2,head] /testdir00/testfuse01.txt auth v20 DIRTYPARENT s=0 n(v0 1=1+0) (ifile excl) (iversion lock) cr={4160=0-4194304@1} caps={4160=pAsLsXsFsc/-@4},l=4160 | request=0 lock=0 caps=1 dirtyparent=1 dirty=1 authpin=0 0x56078e661b80]
2022-08-05T17:06:15.223-0400 7fbbb7791640 10 mds.0.locker eval 3648 [inode 0x10000000004 [2,head] /testdir00/testfuse01.txt auth v20 DIRTYPARENT s=0 n(v0 1=1+0) (ifile excl) (iversion lock) cr={4160=0-4194304@1} caps={4160=pAsLsXsFsc/-@4},l=4160 | request=0 lock=0 caps=1 dirtyparent=1 dirty=1 authpin=0 0x56078e661b80]
2022-08-05T17:06:15.223-0400 7fbbb7791640 10 mds.0.locker eval want loner: client.-1 but failed to set it
2022-08-05T17:06:15.223-0400 7fbbb7791640 7 mds.0.locker file_eval wanted= loner_wanted= other_wanted= filelock=(ifile excl) on [inode 0x10000000004 [2,head] /testdir00/testfuse01.txt auth v20 DIRTYPARENT s=0 n(v0 1=1+0) (ifile excl) (iversion lock) cr={4160=0-4194304@1} caps={4160=pAsLsXsFsc/-@4},l=4160(-1) | request=0 lock=0 caps=1 dirtyparent=1 dirty=1 authpin=0 0x56078e661b80]
2022-08-05T17:06:15.223-0400 7fbbb7791640 20 mds.0.locker is excl
2022-08-05T17:06:15.223-0400 7fbbb7791640 7 mds.0.locker file_eval loner_issued=sc other_issued= xlocker_issued=
2022-08-05T17:06:15.223-0400 7fbbb7791640 20 mds.0.locker should lose it
Updated by Ramana Raja over 1 year ago
- Subject changed from kernel/fuse client using ceph ID with uid restricted MDS caps cannot do cap updates to kernel/fuse client using ceph ID with uid restricted MDS caps cannot update caps
Updated by Ramana Raja over 1 year ago
This issue was first described in https://tracker.ceph.com/issues/56067#note-15
Updated by Venky Shankar over 1 year ago
- Category set to Correctness/Safety
- Assignee set to Xiubo Li
- Target version set to v18.0.0
- Backport set to pacific,quincy
Updated by Ramana Raja over 1 year ago
I think we need to look at the session->check_access() call in Locker::_do_cap_update(). During a cap update, does the MDS need to check the caller_uid and caller_gid as done by session->check_access()? We know that the kclient, for example, doesn't keep track of caller_uid and caller_gid when updating caps [1]. The security implications of any changes around this need to be understood. If we confirm that there are no security issues around dropping the caller_uid/caller_gid check when doing a cap update, that'd fix this tracker ticket as well as https://tracker.ceph.com/issues/56067
[1] https://github.com/ceph/ceph-client/blob/for-linus/fs/ceph/caps.c#L1292 within static void encode_cap_msg(struct ceph_msg *msg, struct cap_msg_args *arg)
/*
 * caller_uid/caller_gid (version 7)
 *
 * Currently, we don't properly track which caller dirtied the caps
 * last, and force a flush of them when there is a conflict. For now,
 * just set this to 0:0, to emulate how the MDS has worked up to now.
 */
ceph_encode_32(&p, 0);
ceph_encode_32(&p, 0);
Updated by Patrick Donnelly over 1 year ago
- Related to Bug #56067: Cephfs data loss with root_squash enabled added
Updated by Xiubo Li over 1 year ago
- Status changed from In Progress to Fix Under Review
- Pull request ID set to 48027
Updated by Xiubo Li 12 months ago
- Copied to Bug #61333: kernel/fuse client using ceph ID with uid restricted MDS caps cannot update caps added
Updated by Patrick Donnelly 8 months ago
- Target version changed from v18.0.0 to v19.0.0
- Backport changed from pacific,quincy to reef,quincy
Updated by Rishabh Dave 7 months ago
- Status changed from Fix Under Review to Pending Backport
Updated by Backport Bot 7 months ago
- Copied to Backport #62951: quincy: kernel/fuse client using ceph ID with uid restricted MDS caps cannot update caps added
Updated by Backport Bot 7 months ago
- Copied to Backport #62952: reef: kernel/fuse client using ceph ID with uid restricted MDS caps cannot update caps added
Updated by Xiubo Li 7 months ago
We also need to backport https://github.com/ceph/ceph/pull/53887 along with this.
Updated by Xiubo Li 5 months ago
- Copied to Backport #63832: pacific: kernel/fuse client using ceph ID with uid restricted MDS caps cannot update caps added
Updated by Patrick Donnelly about 13 hours ago
- Related to Bug #65733: mds: upgrade to MDS enforcing CEPHFS_FEATURE_MDS_AUTH_CAPS_CHECK with client having root_squash in any MDS cap causes eviction for all file systems the client has caps for added