Bug #22632

radosgw - s3 keystone integration doesn't work while using civetweb as frontend

Added by Mateusz Los over 6 years ago. Updated almost 6 years ago.

Status: Need More Info
Priority: Normal
Target version:
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

I am using radosgw 12.2.2 with Keystone integration enabled, serving the S3 API through civetweb.
It works with users/keys generated via the radosgw-admin CLI, but with an access/secret pair generated from Horizon or with the OpenStack CLI I get this error:

Traceback (most recent call last):
  File "s3.py", line 18, in <module>
    bucket = conn.get_all_buckets()
  File "/usr/lib/python2.7/dist-packages/boto/s3/connection.py", line 440, in get_all_buckets
    response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request
<?xml version="1.0" encoding="UTF-8"?><Error><Code>InvalidArgument</Code><RequestId>tx0000000000000000016d9-005a53b147-261f62a-default</RequestId><HostId>261f62a-default-default</HostId></Error>
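
For reference, the failing access/secret pair comes from the standard Keystone EC2 credential workflow, roughly:

openstack ec2 credentials create
openstack ec2 credentials list

(the keys generated from Horizon behave the same way).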

cat s3.py

import boto
import boto.s3.connection

access_key = '{{ access key }}'   # EC2 access key issued by Keystone
secret_key = '{{ secret }}'

conn = boto.connect_s3(
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    host='s3.example.com', port=80,
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.get_all_buckets()
print(bucket)
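
In case it helps with debugging, the same script can be run with boto's wire-level logging turned on; a minimal sketch (same placeholder keys as above):

import boto
import boto.s3.connection

boto.set_stream_logger('boto')   # dump boto's request/response log to stderr

conn = boto.connect_s3(
    aws_access_key_id='{{ access key }}',
    aws_secret_access_key='{{ secret }}',
    host='s3.example.com', port=80,
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    debug=2,                     # also log HTTP headers at the connection level
)
conn.get_all_buckets()

This shows the exact Authorization header boto sends, which should make it easier to see why radosgw answers InvalidArgument.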

My radosgw configuration

with civetweb enabled:

[client.rgw.cmn01]
host = cmn01
keyring = /etc/ceph/ceph.client.rgw.cmn01.keyring
rgw socket path = /tmp/radosgw-cmn01.sock
log file = /var/log/ceph/ceph-rgw-cmn01.log
rgw data = /var/lib/ceph/radosgw/ceph-rgw.cmn01
rgw frontends = civetweb port=8080 num_threads=50
rgw dns name = example.com

rgw keystone api version = 3
rgw keystone url = 192.168.104.10:5000
rgw keystone accepted roles =  _member_, Member, admin, swiftoperator
rgw keystone revocation interval = 1000000
rgw keystone implicit tenants = false
rgw s3 auth use keystone = true
rgw keystone admin user = admin
rgw keystone admin password = password
rgw keystone verify ssl = False
rgw keystone admin project = admin
rgw keystone admin domain = default
rgw swift enforce content length = true
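
To capture what happens during the failed authentication I can also raise the gateway's verbosity; a sketch of the extra settings (standard Ceph debug options, added to the same [client.rgw.cmn01] section):

debug rgw = 20   # verbose RGW logging, includes the keystone/EC2 auth path
debug ms = 1     # messenger-level request tracing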

with socket enabled:

[client.rgw.ceph-mon01]
rgw keystone api version = 3
rgw keystone url = 192.168.104.10:5000
rgw keystone accepted roles =  _member_, Member, admin, swiftoperator
rgw keystone token cache size = 50
rgw keystone implicit tenants = true
rgw s3 auth use keystone = true
rgw keystone admin user = admin
rgw keystone admin password = password
rgw keystone verify ssl = False
rgw keystone admin domain = default
rgw keystone admin project = admin

host = ceph-mon01
keyring = /var/lib/ceph/radosgw/ceph-rgw.ceph-mon01/keyring
rgw socket path = /tmp/radosgw-ceph-mon01.sock
log file = /var/log/ceph/ceph-rgw-ceph-mon01.log
rgw data = /var/lib/ceph/radosgw/ceph-rgw.ceph-mon01

rgw content length compat = True
rgw dns name = example.com

On my other environment, running Ceph Jewel, I switched from civetweb to a unix socket + Apache, and I was able to list my buckets using keys generated with 'openstack ec2 credentials create'.
Unfortunately, the same configuration doesn't work on Luminous: the socket is not created during radosgw startup.
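
My working assumption for the missing socket (untested): since Jewel the frontend has to be selected explicitly through 'rgw frontends', so 'rgw socket path' alone is presumably ignored, and the socket-based section would additionally need something like:

[client.rgw.ceph-mon01]
rgw frontends = fastcgi socket_path=/tmp/radosgw-ceph-mon01.sock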
