Bug #23202 (closed)

radosgw-admin user create failing with return code 5

Added by Vasu Kulkarni about 6 years ago. Updated about 6 years ago.

Status:
Resolved
Priority:
Normal
Assignee:
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Tried with master and luminous; the cluster is set up with ceph-ansible.

ubuntu@ovh059:~$ sudo ceph --version
ceph version 12.2.4-10-gb033ed8 (b033ed823f76d0e69472f5e1f29b2108779852c7) luminous (stable)


ubuntu@ovh059:~$ sudo radosgw-admin user create --uid s3a --display-name=s3a cephtests --access-key=EGAQRD2ULOIFKFSKCT4F --secret-key=zi816w1vZKfaSM85Cl0BxXTwSLyN7zB4RbTswrGb --email=s3a@ceph.com
2018-03-02 19:46:38.592596 7fac3a98fcc0  1  Processor -- start
2018-03-02 19:46:38.592829 7fac3a98fcc0  1 -- - start start
2018-03-02 19:46:38.593426 7fac3a98fcc0  1 -- - --> 158.69.74.31:6789/0 -- auth(proto 0 34 bytes epoch 0) v1 -- 0x55fcbf2aafb0 con 0
2018-03-02 19:46:38.593443 7fac3a98fcc0  1 -- - --> 158.69.74.57:6789/0 -- auth(proto 0 34 bytes epoch 0) v1 -- 0x55fcbf2ab3f0 con 0
2018-03-02 19:46:38.594332 7fac27c88700  1 -- 158.69.74.57:0/1619633724 learned_addr learned my addr 158.69.74.57:0/1619633724
2018-03-02 19:46:38.595526 7fac26c86700  1 -- 158.69.74.57:0/1619633724 <== mon.1 158.69.74.57:6789/0 1 ==== mon_map magic: 0 v1 ==== 298+0+0 (2997352541 0 0) 0x7fac14001160 con 0x55fcbf2b0810
2018-03-02 19:46:38.595701 7fac26c86700  1 -- 158.69.74.57:0/1619633724 <== mon.1 158.69.74.57:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 33+0+0 (1294858751 0 0) 0x7fac14001490 con 0x55fcbf2b0810
2018-03-02 19:46:38.595978 7fac26c86700  1 -- 158.69.74.57:0/1619633724 --> 158.69.74.57:6789/0 -- auth(proto 2 32 bytes epoch 0) v1 -- 0x7fac10001570 con 0
2018-03-02 19:46:38.596963 7fac26c86700  1 -- 158.69.74.57:0/1619633724 <== mon.1 158.69.74.57:6789/0 3 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 206+0+0 (4086983138 0 0) 0x7fac14001080 con 0x55fcbf2b0810
2018-03-02 19:46:38.597075 7fac26c86700  1 -- 158.69.74.57:0/1619633724 --> 158.69.74.57:6789/0 -- auth(proto 2 165 bytes epoch 0) v1 -- 0x7fac10001ee0 con 0
2018-03-02 19:46:38.598354 7fac26c86700  1 -- 158.69.74.57:0/1619633724 <== mon.1 158.69.74.57:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) v1 

...

ubuntu@ovh059:~$ echo $?
5

Some additional logs: http://pulpito.ceph.com/vasu-2018-03-02_17:29:37-rgw-luminous-distro-basic-mira/2243818
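For reference, an exit status of 5 maps to errno EIO ("Input/output error"), the generic error RADOS surfaces when an underlying operation such as pool creation fails. A quick way to confirm the mapping (illustrative only, not from the original run):

```python
import errno
import os

# Exit code 5 corresponds to errno 5 (EIO), the generic
# "Input/output error" returned when a RADOS operation fails.
print(errno.errorcode[5])   # symbolic name of errno 5 -> EIO
print(os.strerror(5))       # human-readable description
```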

Actions #1

Updated by Vasu Kulkarni about 6 years ago

  • Assignee set to Casey Bodley

Casey,

Is this also related to http://tracker.ceph.com/issues/22351?

I think a better error message would help in that case.

Thanks

Actions #2

Updated by Vasu Kulkarni about 6 years ago

  • Status changed from New to Resolved

Related to pg_num or pgp_num being too high for the default mon_max_pg_per_osd; we need a better error message from radosgw that says so directly instead of this cryptic one. :(

Tried with pg_num: 8 and pgp_num: 8 and it all works fine:
http://pulpito.ceph.com/vasu-2018-03-10_01:29:12-rgw-luminous-distro-basic-mira/
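In ceph-ansible terms, the workaround amounts to capping the default PG counts so the pools radosgw auto-creates stay under the per-OSD PG budget. A sketch of such a group_vars override, assuming the standard ceph_conf_overrides mechanism (values illustrative, not copied from the job above):

```yaml
# group_vars/all.yml (ceph-ansible) -- sketch; keeps pools that
# radosgw creates on first use small enough that the total PG count
# does not exceed mon_max_pg_per_osd
ceph_conf_overrides:
  global:
    osd_pool_default_pg_num: 8
    osd_pool_default_pgp_num: 8
```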

Actions #3

Updated by Matt Benjamin about 6 years ago

Does RGWRados have an opportunity to know the cause of this error?

Matt

Actions #4

Updated by Brad Hubbard about 6 years ago

To me this looks like a different code path from #22351, since the "pool or placement group misconfiguration" message does not appear in the rgw log.

