Bug #58048


«EPERM: error calling ceph_mount» when trying to use subvolume commands

Added by Jérôme Poulin over 1 year ago. Updated 12 months ago.

Status: New
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

When issuing commands such as «ceph fs subvolume ls cephfsv2», the command returns «Error EPERM: error calling ceph_mount» and the MDS logs:

2022-11-18T09:17:37.876-0500 7fca6fc3f700  1 mds.sg1vosrv44-2 parse_caps: cannot decode auth caps buffer of length 0
2022-11-18T09:17:37.876-0500 7fca6d43a700  1 mds.sg1vosrv44-2 parse_caps: cannot decode auth caps buffer of length 0
2022-11-18T09:17:37.876-0500 7fca6d43a700  0 mds.0.server  dropping message not allowed for this fs_name: client_session(request_open) v5

The issue appears to be the same on all MDS daemons and all CephFS volumes, including the default one.
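
For reference, the failing invocation and the error it returns (as described above):

ceph fs subvolume ls cephfsv2
Error EPERM: error calling ceph_mount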

The filesystem status below shows an active MDS, and ceph auth list shows the right permissions for the MDS:

cephfsv2 - 0 clients
====================
RANK  STATE   MDS           ACTIVITY     DNS  INOS  DIRS  CAPS
 0    active  sg1vosrv44-2  Reqs: 0 /s    12    13    12     0
        POOL            TYPE      USED   AVAIL
cephfs.cephfsv2.meta    metadata  31.7M  4119G
cephfs.cephfsv2.data    data          0  4119G


[mds.sg1vosrv44-2]
key = AQC1YXZjPNDzIBAA85gIpirKeEgAWw1iMzi2Ow==
caps mds = "allow *"
caps mgr = "allow profile mds"
caps mon = "allow profile mds"
caps osd = "allow rwx pool=cephfs.cephfsv2.meta, allow rwx pool=cephfs.cephfsv2.data"
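
For reference, the entry above for a single daemon can also be fetched on its own with ceph auth get, for example:

ceph auth get mds.sg1vosrv44-2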

---

Filesystem 'cephfsv2' (4)
fs_name    cephfsv2
epoch    13077
flags    12
created    2022-11-17T11:47:13.955551-0500
modified    2022-11-17T17:57:01.479376-0500
tableserver    0
root    0
session_timeout    60
session_autoclose    300
max_file_size    1099511627776
required_client_features    {}
last_failure    0
last_failure_osd_epoch    215989
compat    compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds    1
in    0
up    {0=122409863}
failed    
damaged    
stopped    
data_pools    [57]
metadata_pool    56
inline_data    disabled
balancer    
standby_count_wanted    1
[mds.sg1vosrv44-2{0:122409863} state up:active seq 5 join_fscid=4 addr [v2:10.10.0.40:6834/3783754805,v1:10.10.0.40:6835/3783754805] compat {c=[1],r=[1],i=[7ff]}]
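
A map in this per-filesystem format can be printed for a single volume with ceph fs get, for example:

ceph fs get cephfsv2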


Files

ceph-mds.log.xz (52.9 KB) - Jérôme Poulin, 11/18/2022 02:25 PM
Actions #1

Updated by Jérôme Poulin over 1 year ago

Here is a log of the issue, captured while running ceph-mds manually with the following commands:
ceph-mds --setuser ceph --setgroup ceph -i sg1vosrv44-2 -d --debug-ms=20 --debug-mds=20 --debug-auth=20
ceph fs subvolume list cephfsv2

See the log entries at 2022-11-18T09:24:04.179-0500.

Actions #2

Updated by Jérôme Poulin over 1 year ago

The project for this issue should be CephFS, but I don't seem to be able to change it myself.

Actions #3

Updated by Jérôme Poulin 12 months ago

This problem happens when the MGR isn't given the mds 'allow *' capability.

The ACL should look like this:

mgr.sg1vosrv32
key: <key>
caps: [mds] allow *
caps: [mon] allow profile mgr
caps: [osd] allow *
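
As a sketch of how to apply this with ceph auth caps, assuming the MGR id is sg1vosrv32 as above (note that ceph auth caps replaces the entity's entire cap set, so every cap must be restated):

ceph auth caps mgr.sg1vosrv32 mds 'allow *' mon 'allow profile mgr' osd 'allow *'

The MGR may then need to be restarted or failed over (ceph mgr fail) so it reconnects to the MDS with the new caps.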
