Bug #19940
Closed
client.admin not granted mgr caps on upgrade (was: mgr access denied on pg dump)
% Done:
0%
Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
This is a newly deployed master test cluster running ceph-12.0.2-1185.g3155335.el7.x86_64.
Once per minute our mgr is getting access denied on a pg dump:
2017-05-16 13:53:02.059795 client.156173 [INF] from='client.194556 128.142.215.68:0/3128562804' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mgr", ""], "format": "json"}]: access denied
2017-05-16 13:54:02.794116 client.156173 [INF] from='client.204392 128.142.215.68:0/2029671957' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mgr", ""], "format": "json"}]: access denied
128.142.215.68 is the active mgr p06636710a37514.
  cluster f502e0e8-63e1-42c8-b38b-5b4f8daba3f8
   health HEALTH_WARN
          too few PGs per OSD (1 < min 30)
   monmap e4: 3 mons at {p06636710a37514=128.142.215.68:6789/0,p06636710a59202=128.142.215.77:6789/0,p06636710a82299=128.142.215.81:6789/0}
          election epoch 34, quorum 0,1,2 p06636710a37514,p06636710a59202,p06636710a82299
      mgr active: p06636710a37514 standbys: p06636710a82299, p06636710a59202
   osdmap e898: 144 osds: 144 up, 144 in
    pgmap v3379: 64 pgs, 1 pools, 0 bytes data, 0 objects
          161 GB used, 785 TB / 785 TB avail
               64 active+clean
The mgr log says:
2017-05-16 13:57:02.313376 7f4a6918c700 1 mgr.server handle_command access denied
2017-05-16 13:57:02.313390 7f4a6918c700 0 log_channel(audit) log [INF] : from='client.204422 128.142.215.68:0/1044875011' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mgr", ""], "format": "json"}]: access denied
2017-05-16 13:57:02.313397 7f4a6918c700 1 mgr.server reply do_command r=-13 access denied
Could there be something wrong with our deployment, or is this a bug?
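For anyone hitting the same symptom: as the updated title suggests, a likely cause is that the client.admin key lacks the mgr capability (clusters keyed before the mgr daemon existed would not have it). A hedged sketch of how to check and, if missing, add it; the `allow *` cap strings below are illustrative and should match your site's policy:

```shell
# Inspect the current capabilities of client.admin. On an affected
# cluster the output shows mon/osd/mds caps but no "caps mgr" line.
ceph auth get client.admin

# Grant the mgr capability. NOTE: "ceph auth caps" replaces the entire
# cap set, so restate the existing mon/osd/mds caps alongside mgr.
ceph auth caps client.admin \
    mon 'allow *' \
    osd 'allow *' \
    mds 'allow *' \
    mgr 'allow *'
```

After updating the caps, the once-per-minute "pg dump ... access denied" audit entries should stop without restarting the mgr.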