Bug #47921
Bad auth caps for orchestrated mds daemon
Status: Closed
Description
mds daemons created by "ceph orch apply mds 1" get auth caps of:

    mds.test_fs.******.gpqtol
    key: ***********************==
    caps: [mds] allow
    caps: [mon] profile mds
    caps: [osd] allow rw tag cephfs *=*

During any mds failure while the fs is mounted and in operation, these caps lead to:

    [ERR] MDS_DAMAGE: 1 mds daemon damaged
    fs test_fs mds.0 is damaged

A working mds has caps of:

    mds.FlexCephAF1b
    key: AQCCW69e1ZY1NRAAwjSB6zG6DMzM9wusyRPgQA==
    caps: [mds] allow *
    caps: [mgr] profile mds
    caps: [mon] profile mds
    caps: [osd] allow *
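For anyone hitting this, a workaround sketch using standard ceph CLI commands: inspect the caps cephadm generated and broaden them to match the working entity quoted above. The entity name `mds.test_fs.HOST.gpqtol` is a placeholder for the masked daemon name; whether broader caps actually fix the damage is not confirmed in this ticket.

```shell
# Show the caps currently assigned to the orchestrated daemon
# (HOST is a placeholder; substitute the real daemon name)
ceph auth get mds.test_fs.HOST.gpqtol

# Broaden the caps to match the known-good mds entity above
ceph auth caps mds.test_fs.HOST.gpqtol \
  mds 'allow *' \
  mgr 'profile mds' \
  mon 'profile mds' \
  osd 'allow *'

# Restart the daemon so it picks up the updated caps,
# then clear the damaged state for rank 0 of the fs
ceph orch daemon restart mds.test_fs.HOST.gpqtol
ceph mds repaired test_fs:0
```

These commands require a running cluster with admin keyring access; run `ceph fs status test_fs` afterwards to confirm the rank comes back active.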
Updated by Yanshuo Li over 3 years ago
Updated by Sebastian Wagner over 3 years ago
- Project changed from Ceph to Orchestrator
- Category set to cephadm
Updated by Sebastian Wagner about 3 years ago
- Priority changed from Normal to Low
The problem is probably not related to the missing caps. Are you really sure this is the correct solution?
Updated by Sebastian Wagner about 3 years ago
- Status changed from New to Can't reproduce