
Bug #22632

radosgw - s3 keystone integration doesn't work while using civetweb as frontend

Added by Mateusz Los over 3 years ago. Updated about 3 years ago.

Status:
Need More Info
Priority:
Normal
Target version:
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

I am using radosgw 12.2.2 with keystone integration enabled and serving s3 api through civetweb.
It works with users/keys generated with the radosgw-admin CLI, but with an access/secret key generated from Horizon or with the OpenStack CLI I see this error:

Traceback (most recent call last):
  File "s3.py", line 18, in <module>
    bucket = conn.get_all_buckets()
  File "/usr/lib/python2.7/dist-packages/boto/s3/connection.py", line 440, in get_all_buckets
    response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request
<?xml version="1.0" encoding="UTF-8"?><Error><Code>InvalidArgument</Code><RequestId>tx0000000000000000016d9-005a53b147-261f62a-default</RequestId><HostId>261f62a-default-default</HostId></Error>

cat s3.py

import boto
import boto.s3.connection

access_key = {{ access key }}
secret_key = {{ secret }}

conn = boto.connect_s3(
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    host='s3.example.com', port=80,
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.get_all_buckets()
print(bucket)
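For context, boto signs these requests with AWS signature v2, i.e. a base64-encoded HMAC-SHA1 of a canonical string-to-sign, and it is this signature that radosgw forwards to Keystone's /v3/s3tokens endpoint for validation. A minimal sketch of that computation (the secret and string-to-sign below are hypothetical values, for illustration only):

```python
import base64
import hashlib
import hmac


def sign_v2(secret_key, string_to_sign):
    """Compute an AWS signature v2: base64(HMAC-SHA1(secret, string))."""
    digest = hmac.new(secret_key.encode("utf-8"),
                      string_to_sign.encode("utf-8"),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode("ascii")


# Hypothetical example: a GET on "/" with only a Date header set.
string_to_sign = "GET\n\n\nSat, 25 Mar 2018 05:27:22 +0000\n/"
signature = sign_v2("example-secret", string_to_sign)
print(signature)  # a 28-character base64 string
```

If Keystone cannot validate this signature against the EC2 credential, radosgw's EC2Engine rejects the request, which is consistent with the denial seen in the logs below.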

My radosgw configuration

with civetweb enabled:

[client.rgw.cmn01]
host = cmn01
keyring = /etc/ceph/ceph.client.rgw.cmn01.keyring
rgw socket path = /tmp/radosgw-cmn01.sock
log file = /var/log/ceph/ceph-rgw-cmn01.log
rgw data = /var/lib/ceph/radosgw/ceph-rgw.cmn01
rgw frontends = civetweb port=8080 num_threads=50
rgw dns name = example.com

rgw keystone api version = 3
rgw keystone url = 192.168.104.10:5000
rgw keystone accepted roles =  _member_, Member, admin, swiftoperator
rgw keystone revocation interval = 1000000
rgw keystone implicit tenants = false
rgw s3 auth use keystone = true
rgw keystone admin user = admin
rgw keystone admin password = password
rgw keystone verify ssl = False
rgw keystone admin project = admin
rgw keystone admin domain = default
rgw swift enforce content length = true

with socket enabled:

[client.rgw.ceph-mon01]
rgw keystone api version = 3
rgw keystone url = 192.168.104.10:5000
rgw keystone accepted roles =  _member_, Member, admin, swiftoperator
rgw keystone token cache size = 50
rgw keystone implicit tenants = true
rgw s3 auth use keystone = true
rgw keystone admin user = admin
rgw keystone admin password = password
rgw keystone verify ssl = False
rgw keystone admin domain = default
rgw keystone admin project = admin

host = ceph-mon01
keyring = /var/lib/ceph/radosgw/ceph-rgw.ceph-mon01/keyring
rgw socket path = /tmp/radosgw-ceph-mon01.sock
log file = /var/log/ceph/ceph-rgw-ceph-mon01.log
rgw data = /var/lib/ceph/radosgw/ceph-rgw.ceph-mon01

rgw content length compat = True
rgw dns name = example.com

On my other environment, running Ceph Jewel, I switched from civetweb to a unix socket + Apache and was able to list my buckets using keys generated with 'openstack ec2 credentials create'.
Unfortunately, the same configuration doesn't work in Luminous: the socket is not created during radosgw startup.

History

#1 Updated by Abhishek Lekshmanan over 3 years ago

Does the error reproduce when `rgw dns name` is commented out from ceph.conf? What happens when `rgw dns name` is set to s3.example.com (the host you're using in boto)?

#2 Updated by Abhishek Lekshmanan over 3 years ago

  • Status changed from New to Need More Info

#3 Updated by Abhishek Lekshmanan over 3 years ago

  • Assignee set to Abhishek Lekshmanan

#4 Updated by Mateusz Los over 3 years ago

Abhishek Lekshmanan wrote:

Does the error reproduce when `rgw dns name` is commented out from ceph.conf? What happens when `rgw dns name` is set to s3.example.com (the host you're using in boto)?

Hello Abhishek,
The result is the same with `rgw dns name` commented out, and also the same with s3.example.com as the host in the boto script.

As I said before, I can use S3 with keys synced from Keystone only when I set the radosgw frontend to socket+apache on Jewel; for Luminous I still don't have a workaround.

#5 Updated by Abhishek Lekshmanan over 3 years ago

Can you provide us the rgw logs with debug rgw = 20 and debug ms = 1 set in ceph.conf?
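For reference, those debug settings go in the rgw client section of ceph.conf; a sketch, assuming the section name from the configuration quoted above:

```ini
[client.rgw.cmn01]
debug rgw = 20
debug ms = 1
```

After restarting the radosgw service, the verbose auth trace (like the one quoted in a later comment) appears in the configured log file, here /var/log/ceph/ceph-rgw-cmn01.log.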

#6 Updated by Abhishek Lekshmanan over 3 years ago

What is the version of openstack keystone that is running?

#7 Updated by Eugene Nikanorov over 3 years ago

The issue was found via openstack ocata.

Relevant rgw logs:
2018-03-25 05:27:22.176125 7f864665c700 20 found cached admin token
2018-03-25 05:27:22.176159 7f864665c700 20 sending request to x.x.x.x:5000/v3/s3tokens
2018-03-25 05:27:22.176165 7f864665c700 20 ssl verification is set to off
2018-03-25 05:27:22.202620 7f864665c700 2 s3 keystone: token parsing failed, ret=0-22
2018-03-25 05:27:22.202660 7f864665c700 20 rgw::auth::keystone::EC2Engine denied with reason=-22
2018-03-25 05:27:22.202662 7f864665c700 20 rgw::auth::s3::AWSv2ExternalAuthStrategy denied with reason=-22
2018-03-25 05:27:22.202663 7f864665c700 20 rgw::auth::s3::AWSAuthStrategy: trying rgw::auth::s3::LocalEngine
2018-03-25 05:27:22.202677 7f864665c700 10 get_canon_resource(): dest=/

If an incorrect API key is provided in the configuration, authorization fails before the "token parsing failed" stage, so no such message is produced.
Let us know if you need more information.

#8 Updated by Massimo Sgaravatto over 3 years ago

I am affected by the same problem (or at least a problem with the very same symptoms).

I am also running Ocata in the OpenStack cloud.

#9 Updated by Massimo Sgaravatto about 3 years ago

I was wondering if there is any news on this issue.

For the time being, the only workaround I found to allow OpenStack users to use S3 is to explicitly generate S3 keys for the OpenStack user accounts, e.g.:

radosgw-admin key create --key-type=s3 --gen-access-key --gen-secret --uid="a22db12575694c9e9f8650dde73ef565" --rgw-realm=cloudtest

But this means that different credentials have to be used when accessing radosgw via S3 and when accessing OpenStack compute services via EC2, which is not an ideal scenario.

Thanks, Massimo

#10 Updated by Massimo Sgaravatto about 3 years ago

Please see this post:

https://ask.openstack.org/en/question/106557/swift3s3-api-errors-when-authenticating-with-ec2-keys/

where they say there is indeed a problem with Ocata.

I tried to implement the changes in keystone reported in:

https://review.openstack.org/#/c/437012/

and now it works for me (I don't know whether it is the same issue the original reporter hit).

Cheers, Massimo
