Support #37157


How to use "RGW_ACCESS_KEY_ID" with S3/Swift for an AD user?

Added by Benjamin Lu over 5 years ago. Updated over 5 years ago.

Status:
New
Priority:
Normal
Assignee:
-
Target version:
% Done:

0%


Description

ceph --version
ceph version 13.2.2 (02899bfda814146b021136e9d8e80eba494e1126) mimic (stable)

Cluster status is healthy.

I have a Ceph Object Gateway configured against the Ceph Storage cluster. I tested S3/Swift as "testuser" with "my_new_bucket", created by following the guide below, and everything works:
http://docs.ceph.com/docs/mimic/install/install-ceph-gateway/#using-the-gateway

Next I want to go further and have a Microsoft AD user write through the Ceph Object Gateway node to the Ceph Object Storage, following these guides:
1). http://docs.ceph.com/docs/mimic/radosgw/ldap-auth/
2). https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html-single/ceph_object_gateway_with_ldapad_guide/index#rgw-ldap-test-the-configuration-ldap

I set up an AD/DNS server and registered the Ceph Object Gateway node for services; OS logins on the gateway node with an AD domain user work. Next I followed the docs above to use an AD user for S3/Swift writes through the Object Gateway.

Issue: there are no details and no examples of how to use "RGW_ACCESS_KEY_ID" with S3/Swift for an AD user.

4.2. Export an LDAP Token (Red Hat doc):

  1. export RGW_ACCESS_KEY_ID="<username>"
  2. export RGW_SECRET_ACCESS_KEY="<password>"
  3. radosgw-token --encode --ttype=ad
  4. export RGW_ACCESS_KEY_ID="*****************************************************************"

4.3. Test the Configuration with an S3 Client (Red Hat doc) -- the secret is no longer required!

Question_1:

If I have an AD user "ceph_user" with password "ceph_user_passwd", and run the test below:

  1. export RGW_ACCESS_KEY_ID="ceph_user"
  2. export RGW_SECRET_ACCESS_KEY="ceph_user_passwd"
  3. radosgw-token --encode --ttype=ad
  4. export RGW_ACCESS_KEY_ID="*****" -- should this step use the output of radosgw-token for "*****"? Does that make "ceph_user" equal to the token? Why?
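For what it's worth, the string that radosgw-token prints looks like plain base64 over a small JSON wrapper (decoding the token pasted later in this ticket shows the same layout). A rough stdlib-only approximation for the "ceph_user" example above; the real tool's exact whitespace may differ:

```python
import base64
import json

# Approximate what `radosgw-token --encode --ttype=ad` does: wrap the
# exported username and password in a small JSON document and base64 it.
# (Field layout inferred from the token pasted later in this ticket.)
payload = {
    "RGW_TOKEN": {
        "version": 1,
        "type": "ad",
        "id": "ceph_user",          # RGW_ACCESS_KEY_ID
        "key": "ceph_user_passwd",  # RGW_SECRET_ACCESS_KEY
    }
}
token = base64.b64encode(json.dumps(payload, indent=4).encode()).decode()
print(token)

# Decoding the token recovers both the username and the password, which
# is why the token alone can stand in for the access key afterwards.
decoded = json.loads(base64.b64decode(token))
print(decoded["RGW_TOKEN"]["id"])  # ceph_user
```

So "ceph_user" does not become equal to the token; the token is a container that carries "ceph_user" plus its password inside.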

Question_2:

The S3 Python API uses the two lines below to authenticate to Ceph object storage; what should the radosgw-token output be used for here? (http://docs.ceph.com/docs/mimic/radosgw/s3/python/)

aws_access_key_id = access_key,
aws_secret_access_key = secret_key,

Question_3:

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/ceph_object_gateway_with_ldapad_guide/rgw-ldap-configure-ldap-and-ceph-object-gateway-ldap:

2.6. Add a Gateway User:

[global]
...
rgw_ldap_secret = /etc/bindpass

...

What are the contents of the file "/etc/bindpass"? Does anyone have a more detailed example?
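For reference, a minimal sketch of what such a file is generally expected to look like, assuming (as confirmed later in this thread) that it holds only the binddn account's password on a single line. The path and password below are illustrative placeholders, not values from this setup:

```python
# Hypothetical sketch: the file referenced by rgw_ldap_secret holds only
# the password of the rgw_ldap_binddn account, one line, no extra
# whitespace. Path and password here are placeholders for illustration.
from pathlib import Path

path = Path("bindpass.example")         # in production: /etc/bindpass
path.write_text("MyBindPassword123")    # single line, no trailing blanks
path.chmod(0o600)                       # keep it readable by root only
print(path.read_text())
```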

Thanks for help !

Ben

Actions #1

Updated by Benjamin Lu over 5 years ago

Adding more details on my test; I get a "403 Forbidden" error:

[testobj@obj_gtway01 ~]$ cat /etc/ceph/ceph.conf
[global]
fsid = 2cd4ded4-763a-4c65-b650-7d8234c37f00
public_network = 10.0.100.0/24
cluster_network = 10.0.10.0/24
mon_initial_members = node0015, node0016, node0017
mon_host = 10.0.100.15,10.0.100.16,10.0.100.17
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
mon_allow_pool_delete = true

# config object gateway to use AD/LDAP authentication:
rgw_ldap_secret = "/etc/bindpass"
rgw_ldap_uri = ldaps://10.0.100.18:636
rgw_ldap_binddn = "uid=ceph,cn=Users,cn=accounts,dc=gokeuslab,dc=local"
rgw_ldap_searchdn = "cn=Users,cn=accounts,dc=gokeuslab,dc=local"
rgw_ldap_dnattr = "cn"
rgw_s3_auth_use_ldap = true

rgw_override_bucket_index_max_shards = 10
#debug ms = 1
#debug rgw = 20

[client.rgw.object_gateway01]
rgw_frontends = "civetweb port=80"

[client]
mon_host = 10.0.100.15:6789,10.0.100.16:6789,10.0.100.17:6789

====================================================================

[testobj@obj_gtway01 ~]$ ldapsearch -x -D "ceph" -W -h 10.0.100.18 -b "ou=Users,dc=gokeuslab,dc=local" -s sub 'uid=ceph'
Enter LDAP Password:
# extended LDIF
#
# LDAPv3
# base <ou=Users,dc=gokeuslab,dc=local> with scope subtree
# filter: uid=ceph
# requesting: ALL
#

# search result
search: 2
result: 32 No such object
matchedDN: DC=gokeuslab,DC=local
text: 0000208D: NameErr: DSID-03100241, problem 2001 (NO_OBJECT), data 0, best
 match of:
 'DC=gokeuslab,DC=local'

# numResponses: 1
[testobj@obj_gtway01 ~]$
[testobj@obj_gtway01 ~]$ ldapsearch -x -D "testobj" -W -h 10.0.100.18 -b "ou=Users,dc=gokeuslab,dc=local" -s sub 'uid=testobj'
Enter LDAP Password:
# extended LDIF
#
# LDAPv3
# base <ou=Users,dc=gokeuslab,dc=local> with scope subtree
# filter: uid=testobj
# requesting: ALL
#

# search result
search: 2
result: 32 No such object
matchedDN: DC=gokeuslab,DC=local
text: 0000208D: NameErr: DSID-03100241, problem 2001 (NO_OBJECT), data 0, best
 match of:
 'DC=gokeuslab,DC=local'

# numResponses: 1
[testobj@obj_gtway01 ~]$

=======================================================================

[testobj@obj_gtway01 ~]$ export RGW_ACCESS_KEY_ID="testobj"
[testobj@obj_gtway01 ~]$ export RGW_SECRET_ACCESS_KEY=""
[testobj@obj_gtway01 ~]$ radosgw-token --encode --ttype=ad
ewogICAgIlJHV19UT0tFTiI6IHsKICAgICAgICAidmVyc2lvbiI6IDEsCiAgICAgICAgInR5cGUiOiAiYWQiLAogICAgICAgICJpZCI6ICJ0ZXN0b2JqIiwKICAgICAgICAia2V5IjogIiIKICAgIH0KfQo=

[testobj@obj_gtway01 ~]$ export RGW_ACCESS_KEY_ID="ewogICAgIlJHV19UT0tFTiI6IHsKICAgICAgICAidmVyc2lvbiI6IDEsCiAgICAgICAgInR5cGUiOiAiYWQiLAogICAgICAgICJpZCI6ICJ0ZXN0b2JqIiwKICAgICAgICAia2V5IjogIiIKICAgIH0KfQo="
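As a quick stdlib-only debugging check, the token exported above can be decoded to confirm what RGW will see; note that the "key" field comes out empty because RGW_SECRET_ACCESS_KEY was exported as an empty string:

```python
import base64
import json

# Token copied verbatim from the export above.
token = "ewogICAgIlJHV19UT0tFTiI6IHsKICAgICAgICAidmVyc2lvbiI6IDEsCiAgICAgICAgInR5cGUiOiAiYWQiLAogICAgICAgICJpZCI6ICJ0ZXN0b2JqIiwKICAgICAgICAia2V5IjogIiIKICAgIH0KfQo="

decoded = json.loads(base64.b64decode(token))
print(json.dumps(decoded, indent=4))
# "id" is "testobj" but "key" is "" -- the empty RGW_SECRET_ACCESS_KEY
# was baked into the token, so there is no password for RGW to bind with.
```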

[testobj@obj_gtway01 ~]$ vi create_s3_test_bucket.py
[testobj@obj_gtway01 ~]$ cat create_s3_test_bucket.py
#! /usr/bin/python
import boto
import boto.s3.connection
#access_key = 'put your access key here!'
#secret_key = 'put your secret key here!'
access_key = "ewogICAgIlJHV19UT0tFTiI6IHsKICAgICAgICAidmVyc2lvbiI6IDEsCiAgICAgICAgInR5cGUiOiAiYWQiLAogICAgICAgICJpZCI6ICJ0ZXN0b2JqIiwKICAgICAgICAia2V5IjogIiIKICAgIH0KfQo="
secret_key = ""

conn = boto.connect_s3(
    aws_access_key_id = access_key,
    aws_secret_access_key = secret_key,
    host = 'obj_gtway01', port = 7480,
    is_secure = False,  # uncomment if you are not using ssl
    calling_format = boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket('s3_bucket')
for bucket in conn.get_all_buckets():
    print "{name}\t{created}".format(
        name = bucket.name,
        created = bucket.creation_date,
    )

[testobj@obj_gtway01 ~]$ chmod 755 create_s3_test_bucket.py

[testobj@obj_gtway01 ~]$ ./create_s3_test_bucket.py
Traceback (most recent call last):
File "./create_s3_test_bucket.py", line 17, in <module>
bucket = conn.create_bucket('s3_bucket')
File "/usr/lib/python2.7/site-packages/boto/s3/connection.py", line 625, in create_bucket
response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
<Error><Code>AccessDenied</Code><RequestId>tx00000000000000000000d-005becd4ed-346704-default</RequestId><HostId>346704-default-default</HostId></Error>
[testobj@obj_gtway01 ~]$

Actions #2

Updated by Matt Benjamin over 5 years ago

Hi Benjamin,

1. Setting the user (access key) to be the token transfers the bundle unmodified to RGW -- this works when the S3 signing method is v2; you can't use v4 (** but see below).
2. The secret key can be anything -- I think it can even be blank.
3. /etc/bindpass contains the password for the binddn, i.e. the LDAP credential RGW will use to search LDAP.

  • In recent RGW it is now possible to use AWS v4 signing in combination with LDAP, by using LDAP only to perform GetSessionToken, which can return an STS::Lite temporary credential. The GetSessionToken call itself would need to use v2, but after establishing an STS token, v4 signing could be used.
Actions #3

Updated by Benjamin Lu over 5 years ago

3. /etc/bindpass contains the password for the binddn, i.e. the LDAP credential RGW will use to search LDAP.

  • In recent RGW it is now possible to use AWS v4 signing in combination with LDAP, by using LDAP only to perform GetSessionToken, which can return an STS::Lite temporary credential. The GetSessionToken call itself would need to use v2, but after establishing an STS token, v4 signing could be used.

-------------------------------
Hi Matt,

Thank you for spending time on my question.

I still have a question about the file "/etc/bindpass": does this file store all AD user passwords, the same as in AD? Or is the password for the binddn something like a "secret_key" or "token"? Do you have an example?

Best regards,
Ben

Actions #4

Updated by Benjamin Lu over 5 years ago

Hi Matt,

Here is the solution that fixed AD user access to RGW using its token with LDAP in my test environment; nothing related to v2/v4 signing was needed.

Step 1:
Put the lines below in "/etc/ceph/ceph.conf" on my RGW server "rgw.obj_gtway01":

# config object gateway to use AD/LDAP authentication:
rgw_ldap_uri = "ldap://10.0.100.18"
rgw_ldap_binddn = ""  ## "mylab.local" is my AD domain.
rgw_ldap_secret = "/etc/bindpasswd"
rgw_ldap_searchdn = "cn=users,dc=mylab,dc=local"
rgw_ldap_searchfilter = "(&(objectClass=user)(sAMAccountName=USERNAME))"
rgw_s3_auth_use_ldap = true
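A side note on the searchfilter line: the USERNAME placeholder is presumably replaced with the name of the authenticating user at search time (placeholder syntax as given in the Red Hat guide; the exact substitution behavior may vary by release). A trivial illustration:

```python
# Illustrative only: how the gateway's LDAP search filter is presumably
# formed from rgw_ldap_searchfilter, with the authenticating user's
# name substituted for the USERNAME placeholder.
template = "(&(objectClass=user)(sAMAccountName=USERNAME))"
search_filter = template.replace("USERNAME", "s3user")
print(search_filter)  # (&(objectClass=user)(sAMAccountName=s3user))
```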

Step 2:
Add AD user "s3user" in AD server.

Step 3:
echo "s3user_PASSWORD" > /etc/bindpasswd

Step 4:
systemctl restart _gtway01

Step 5 (create the s3user token on the RGW server):
  1. export RGW_ACCESS_KEY_ID=s3user
  2. export RGW_SECRET_ACCESS_KEY="*********"
  3. radosgw-token --encode --ttype=ad   ### this creates the token for s3user below:
     ewogICAgIlJHV19UT0tFTiI6IHsKICAgICAgICAidmVyc2lvbiI6IDEsCiAgICAgICAgInR5cGUiOiAiYWQi**********

Step 6:
Create an S3 API Python script to create a bucket in Ceph storage:

vi create_s3_connection_with_token.py

#! /usr/bin/python
import boto
import boto.s3.connection

access_key = "ewogICAgIlJHV19UT0tFTiI6IHsKICAgICAgICAidmVyc2lvbiI6IDEsCiAgICAgICAgInR5cGUiOiAiYWQi**********"
secret_key = ''

conn = boto.connect_s3(
    aws_access_key_id = access_key,
    aws_secret_access_key = secret_key,
    host = 'obj_gtway01', port = 7480,
    is_secure = False,  # uncomment if you are not using ssl
    calling_format = boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket('s3user-bucket4')
for bucket in conn.get_all_buckets():
    print "{name}\t{created}".format(
        name = bucket.name,
        created = bucket.creation_date,
    )

Step 7:
Test the S3 API script to create a bucket:

[root@obj_gtway01 s3user]# ./create_s3_connection_with_token.py
s3user-bucket4 2018-12-01T01:44:36.901Z

Tips:
1). Make sure the contents of ceph.conf match the setup on the AD server.
2). Make sure the user password in "/etc/bindpasswd" is a single line with no whitespace inside.
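Tip 2 can be turned into a quick stdlib check (a sketch; the helper name is mine, and the passwords below are just examples):

```python
# Quick sanity check for tip 2: the bindpasswd file should contain a
# single non-empty line with no embedded whitespace.
def bindpass_ok(text):
    password = text.rstrip("\n")
    return bool(password) and not any(ch.isspace() for ch in password)

print(bindpass_ok("s3user_PASSWORD\n"))  # True
print(bindpass_ok("bad password\n"))     # False: embedded space
print(bindpass_ok("\n"))                 # False: empty file
```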

This issue tracker can be closed now!

-Ben
