Bug #20296
admin keyrings should warn if they lack ceph-mgr permissions
Status: Closed
% Done: 0%
Regression: Yes
Severity: 3 - minor
Description
After deploying a Ceph cluster with DeepSea on SLE12SP3, I have been unable to run `ceph pg dump`; all other commands work as expected.
pn-ceph-10:/etc/ceph # ceph pg dump
Error EACCES: access denied
pn-ceph-10:/etc/ceph # ceph auth list
installed auth entries:

mds.pn-ceph-10
    key: AQARP0FZAAAAABAAlQ3carT6b7eATLZp+nrd9g==
    caps: [mds] allow *
    caps: [mon] allow profile mds
    caps: [osd] allow rwx
mds.pn-ceph-11
    key: AQARP0FZAAAAABAA22Z7TfrRfW/mrXNqikSE0A==
    caps: [mds] allow *
    caps: [mon] allow profile mds
    caps: [osd] allow rwx
osd.0
    key: AQDXP0FZzOEiMBAATcZGFm0u6lcfKKYiMn9WKQ==
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.1
    key: AQDYP0FZV0yKMBAAI/ISQVHqnmzjNzUZoTT4tw==
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.2
    key: AQDYP0FZtFQKNBAASga4RF4kPAx5H5LlqceDQA==
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.3
    key: AQDcP0FZmomSNhAA3W18637gwOhQuyVkkuz5Bw==
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.4
    key: AQDeP0FZb+NpCRAAcuGuRP/T0qv0iHdVCUk7Dw==
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.5
    key: AQDeP0FZslTjABAA0URdNeaOrU7Bk3HH2qvwwA==
    caps: [mon] allow profile osd
    caps: [osd] allow *
client.admin
    key: AQAOP0FZAAAAABAAaDSbZoyoYxXNNpCmHsRi0Q==
    caps: [mds] allow *
    caps: [mon] allow *
    caps: [osd] allow *
client.bootstrap-osd
    key: AQAPP0FZAAAAABAAz4PNDovndbmBX4ec0Y793Q==
    caps: [mon] allow profile bootstrap-osd
client.igw.pn-ceph-12
    key: AQAQP0FZAAAAABAAxK8QVJc2lt4LsvDJoYjK+w==
    caps: [mon] allow *
    caps: [osd] allow *
client.rgw.pn-ceph-11
    key: AQASP0FZAAAAABAAik+1BzYOFtO+ZoGva2nnlw==
    caps: [mon] allow rwx
    caps: [osd] allow rwx
mgr.pn-ceph-10
    key: AQDOP0FZUBDsHRAAFb7el6b7FkgDKV6ByEPhqA==
    caps: [mds] allow *
    caps: [mon] allow profile mgr
    caps: [osd] allow *
mgr.pn-ceph-11
    key: AQDOP0FZ0yfmEBAAHEc+sTXv7usHwhLrKsY6lQ==
    caps: [mds] allow *
    caps: [mon] allow profile mgr
    caps: [osd] allow *
mgr.pn-ceph-12
    key: AQDOP0FZ/+I/FRAAXeCVfCiBgSE2pkVRKSxRBQ==
    caps: [mds] allow *
    caps: [mon] allow profile mgr
    caps: [osd] allow *
pn-ceph-10:~ # ceph osd dump
epoch 27
fsid 3899cc28-31c8-353d-a7db-e0de70dac22f
created 2017-06-14 15:53:18.553849
modified 2017-06-14 15:54:28.841371
flags sortbitwise
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client jewel
min_compat_client jewel
pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 15 flags hashpspool stripe_width 0
    removed_snaps [1~3]
pool 1 'cephfs_data' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 16 flags hashpspool stripe_width 0
pool 2 'cephfs_metadata' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 17 flags hashpspool stripe_width 0
pool 3 '.rgw.root' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 20 flags hashpspool stripe_width 0
pool 4 'default.rgw.control' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 22 owner 18446744073709551615 flags hashpspool stripe_width 0
pool 5 'default.rgw.meta' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 24 owner 18446744073709551615 flags hashpspool stripe_width 0
pool 6 'default.rgw.log' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 25 owner 18446744073709551615 flags hashpspool stripe_width 0
max_osd 6
osd.0 up in weight 1 up_from 4 up_thru 24 down_at 0 last_clean_interval [0,0) 10.162.220.101:6801/15253 10.162.220.101:6802/15253 10.162.220.101:6803/15253 10.162.220.101:6804/15253 exists,up 0c9a009e-f627-4a87-a43c-2d659c46ce35
osd.1 up in weight 1 up_from 5 up_thru 25 down_at 0 last_clean_interval [0,0) 10.162.220.103:6800/14123 10.162.220.103:6801/14123 10.162.220.103:6802/14123 10.162.220.103:6803/14123 exists,up 42bdd708-a0d8-493b-b3d6-7f9ad38e7aba
osd.2 up in weight 1 up_from 5 up_thru 25 down_at 0 last_clean_interval [0,0) 10.162.220.99:6800/20007 10.162.220.99:6801/20007 10.162.220.99:6802/20007 10.162.220.99:6803/20007 exists,up 6dceb991-8390-4ad5-b565-1ae4c7070657
osd.3 up in weight 1 up_from 9 up_thru 25 down_at 0 last_clean_interval [0,0) 10.162.220.101:6805/15753 10.162.220.101:6806/15753 10.162.220.101:6807/15753 10.162.220.101:6808/15753 exists,up 863923a2-ef37-427b-8cac-5e58c93e4f6a
osd.4 up in weight 1 up_from 11 up_thru 25 down_at 0 last_clean_interval [0,0) 10.162.220.99:6804/20491 10.162.220.99:6805/20491 10.162.220.99:6806/20491 10.162.220.99:6807/20491 exists,up 08104452-da45-4a53-9f70-9091055bdae7
osd.5 up in weight 1 up_from 10 up_thru 25 down_at 0 last_clean_interval [0,0) 10.162.220.103:6804/14613 10.162.220.103:6805/14613 10.162.220.103:6806/14613 10.162.220.103:6807/14613 exists,up 86ad2214-eaaf-49e5-959b-b8e160660f0d
pn-ceph-10:~ # ceph osd tree
ID WEIGHT  TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.02939 root default
-2 0.00980     host pn-ceph-11
 0 0.00490         osd.0            up  1.00000          1.00000
 3 0.00490         osd.3            up  1.00000          1.00000
-3 0.00980     host pn-ceph-12
 1 0.00490         osd.1            up  1.00000          1.00000
 5 0.00490         osd.5            up  1.00000          1.00000
-4 0.00980     host pn-ceph-10
 2 0.00490         osd.2            up  1.00000          1.00000
 4 0.00490         osd.4            up  1.00000          1.00000
pn-ceph-10:~ # ceph version
ceph version v12.0.3-1380-g6984d41b5d (6984d41b5d142ce157216b6e757bcb547da2c7d2) luminous (dev)
pn-ceph-10:~ # ceph --version
ceph version 12.0.3-1380-g6984d41b5d (6984d41b5d142ce157216b6e757bcb547da2c7d2) luminous (dev)
pn-ceph-10:~ # ceph pg dump --debug-ms=1
2017-06-14 16:57:51.241197 7fdfe4e99700 1 Processor -- start
2017-06-14 16:57:51.241333 7fdfe4e99700 1 -- - start start
2017-06-14 16:57:51.241621 7fdfe4e99700 1 -- - --> 10.162.220.101:6789/0 -- auth(proto 0 30 bytes epoch 0) v1 -- 0x7fdfe011ae00 con 0
2017-06-14 16:57:51.241641 7fdfe4e99700 1 -- - --> 10.162.220.103:6789/0 -- auth(proto 0 30 bytes epoch 0) v1 -- 0x7fdfe011f1b0 con 0
2017-06-14 16:57:51.242129 7fdfde57b700 1 -- 10.162.220.99:0/360154925 learned_addr learned my addr 10.162.220.99:0/360154925
2017-06-14 16:57:51.242558 7fdfdcd78700 1 -- 10.162.220.99:0/360154925 <== mon.1 10.162.220.101:6789/0 1 ==== mon_map magic: 0 v1 ==== 436+0+0 (3508171897 0 0) 0x7fdfd0001370 con 0x7fdfe01214e0
2017-06-14 16:57:51.242640 7fdfdcd78700 1 -- 10.162.220.99:0/360154925 <== mon.1 10.162.220.101:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 33+0+0 (224495034 0 0) 0x7fdfd0001150 con 0x7fdfe01214e0
2017-06-14 16:57:51.242727 7fdfdcd78700 1 -- 10.162.220.99:0/360154925 --> 10.162.220.101:6789/0 -- auth(proto 2 32 bytes epoch 0) v1 -- 0x7fdfc0001780 con 0
2017-06-14 16:57:51.242750 7fdfdcd78700 1 -- 10.162.220.99:0/360154925 <== mon.2 10.162.220.103:6789/0 1 ==== mon_map magic: 0 v1 ==== 436+0+0 (3508171897 0 0) 0x7fdfd40012a0 con 0x7fdfe0124b30
2017-06-14 16:57:51.242920 7fdfdcd78700 1 -- 10.162.220.99:0/360154925 <== mon.2 10.162.220.103:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 33+0+0 (3472353743 0 0) 0x7fdfd4000c10 con 0x7fdfe0124b30
2017-06-14 16:57:51.242971 7fdfdcd78700 1 -- 10.162.220.99:0/360154925 --> 10.162.220.103:6789/0 -- auth(proto 2 32 bytes epoch 0) v1 -- 0x7fdfc0001bc0 con 0
2017-06-14 16:57:51.243149 7fdfdcd78700 1 -- 10.162.220.99:0/360154925 <== mon.1 10.162.220.101:6789/0 3 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 206+0+0 (4166467635 0 0) 0x7fdfd0000ff0 con 0x7fdfe01214e0
2017-06-14 16:57:51.243208 7fdfdcd78700 1 -- 10.162.220.99:0/360154925 --> 10.162.220.101:6789/0 -- auth(proto 2 165 bytes epoch 0) v1 -- 0x7fdfc0002370 con 0
2017-06-14 16:57:51.243408 7fdfdcd78700 1 -- 10.162.220.99:0/360154925 <== mon.2 10.162.220.103:6789/0 3 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 206+0+0 (968200535 0 0) 0x7fdfd4002200 con 0x7fdfe0124b30
2017-06-14 16:57:51.243460 7fdfdcd78700 1 -- 10.162.220.99:0/360154925 --> 10.162.220.103:6789/0 -- auth(proto 2 165 bytes epoch 0) v1 -- 0x7fdfc0005ea0 con 0
2017-06-14 16:57:51.243644 7fdfdcd78700 1 -- 10.162.220.99:0/360154925 <== mon.1 10.162.220.101:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 564+0+0 (3076846454 0 0) 0x7fdfd00017f0 con 0x7fdfe01214e0
2017-06-14 16:57:51.243716 7fdfdcd78700 1 -- 10.162.220.99:0/360154925 >> 10.162.220.103:6789/0 conn(0x7fdfe0124b30 :-1 s=STATE_OPEN pgs=71 cs=1 l=1).mark_down
2017-06-14 16:57:51.243747 7fdfdcd78700 1 -- 10.162.220.99:0/360154925 --> 10.162.220.101:6789/0 -- mon_subscribe({monmap=0+}) v2 -- 0x7fdfe011f730 con 0
2017-06-14 16:57:51.243773 7fdfe4e99700 1 -- 10.162.220.99:0/360154925 --> 10.162.220.101:6789/0 -- mon_subscribe({mgrmap=0+}) v2 -- 0x7fdfe00b3280 con 0
2017-06-14 16:57:51.243821 7fdfe4e99700 1 -- 10.162.220.99:0/360154925 --> 10.162.220.101:6789/0 -- mon_subscribe({osdmap=0}) v2 -- 0x7fdfe0129190 con 0
2017-06-14 16:57:51.244245 7fdfdcd78700 1 -- 10.162.220.99:0/360154925 <== mon.1 10.162.220.101:6789/0 5 ==== mon_map magic: 0 v1 ==== 436+0+0 (3508171897 0 0) 0x7fdfd0000ff0 con 0x7fdfe01214e0
2017-06-14 16:57:51.244286 7fdfdcd78700 1 -- 10.162.220.99:0/360154925 <== mon.1 10.162.220.101:6789/0 6 ==== mgrmap(e 4) v1 ==== 144+0+0 (1397159075 0 0) 0x7fdfd0001930 con 0x7fdfe01214e0
2017-06-14 16:57:51.244350 7fdfdcd78700 1 -- 10.162.220.99:0/360154925 <== mon.1 10.162.220.101:6789/0 7 ==== osd_map(27..27 src has 1..27) v3 ==== 4018+0+0 (1184608376 0 0) 0x7fdfd0000ff0 con 0x7fdfe01214e0
2017-06-14 16:57:51.248641 7fdfe4e99700 1 -- 10.162.220.99:0/360154925 --> 10.162.220.101:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- 0x7fdfe007c450 con 0
2017-06-14 16:57:51.251126 7fdfdcd78700 1 -- 10.162.220.99:0/360154925 <== mon.1 10.162.220.101:6789/0 8 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) v1 ==== 72+0+48386 (1092875540 0 2584894726) 0x7fdfd0000960 con 0x7fdfe01214e0
2017-06-14 16:57:51.336027 7fdfe4e99700 1 -- 10.162.220.99:0/360154925 --> 10.162.220.101:6800/14796 -- command(tid 0: {"prefix": "pg dump", "target": ["mgr", ""]}) v1 -- 0x7fdfe00b3280 con 0
2017-06-14 16:57:51.336617 7fdfdcd78700 1 -- 10.162.220.99:0/360154925 <== mgr.4102 10.162.220.101:6800/14796 1 ==== command_reply(tid 0: -13 access denied) v1 ==== 21+0+0 (1574247751 0 0) 0x7fdfd4000d50 con 0x7fdfc000bee0
Error EACCES: access denied
2017-06-14 16:57:51.338014 7fdfe4e99700 1 -- 10.162.220.99:0/360154925 >> 10.162.220.101:6800/14796 conn(0x7fdfc000bee0 :-1 s=STATE_OPEN pgs=86 cs=1 l=1).mark_down
2017-06-14 16:57:51.338077 7fdfe4e99700 1 -- 10.162.220.99:0/360154925 >> 10.162.220.101:6789/0 conn(0x7fdfe01214e0 :-1 s=STATE_OPEN pgs=65 cs=1 l=1).mark_down
2017-06-14 16:57:51.338197 7fdfe4e99700 1 -- 10.162.220.99:0/360154925 shutdown_connections
2017-06-14 16:57:51.338300 7fdfe4e99700 1 -- 10.162.220.99:0/360154925 shutdown_connections
2017-06-14 16:57:51.338336 7fdfe4e99700 1 -- 10.162.220.99:0/360154925 wait complete.
2017-06-14 16:57:51.338342 7fdfe4e99700 1 -- 10.162.220.99:0/360154925 >> 10.162.220.99:0/360154925 conn(0x7fdfe010d5f0 :-1 s=STATE_NONE pgs=0 cs=0 l=0).mark_down
pn-ceph-10:~ #
Updated by Greg Farnum almost 7 years ago
- Subject changed from "`ceph pg dump` returns Error EACCES: access denied" to "admin keyrings should warn if they lack ceph-mgr permissions"
- Assignee set to Greg Farnum
You need to add a `caps mgr = "allow *"` stanza to your admin's keyring (as stored on the monitors, not the local file — check the docs for updating cephx capabilities). This is probably an issue with DeepSea not being updated for the new Luminous way of life, but I'll look and see if we can get a clearer warning generated — upgrading users should get their keys fixed automatically, but you won't be the only one hitting this issue.
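For anyone hitting the same error: a minimal sketch of updating the monitor-side caps with `ceph auth caps`, assuming you still have a key with `mon 'allow *'` to run it with. Note that `ceph auth caps` replaces the entire cap set, so every existing cap must be restated along with the new mgr cap:

```shell
# Restate all existing client.admin caps plus the new mgr stanza;
# omitting any of them here would silently drop it.
ceph auth caps client.admin mds 'allow *' mgr 'allow *' mon 'allow *' osd 'allow *'

# Verify that the key stored on the monitors now shows "caps mgr".
ceph auth get client.admin
```

This changes only the capabilities stored in the monitors' auth database; the secret key itself is unchanged, so no local keyring files need to be redistributed.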
Updated by Greg Farnum almost 7 years ago
- Status changed from New to Fix Under Review
Updated by Greg Farnum almost 7 years ago
- Project changed from Ceph to mgr
- Category deleted (110)
Updated by Sage Weil almost 7 years ago
- Status changed from Fix Under Review to Resolved