Bug #20527

closed

v2 presigned URLs don't work with radosgw.

Added by Marcus Watts almost 7 years ago. Updated over 2 years ago.

Status: Can't reproduce
Priority: Normal
Assignee: Marcus Watts
Target version: -
% Done: 0%

Regression: No
Severity: 3 - minor

Description

s3cmd only supports v2-style presigned URLs (see https://github.com/s3tools/s3cmd/issues/765).
Also, it only supports "virtual host style" access, not path-style access.

Accepting these limitations, for a command like:
s3cmd signurl s3://my-new-bucket/services `date -d 'now + 1 year' +%s`
the output will be something like:
http://my-new-bucket.hybodus.eng.arb.redhat.com/services?AWSAccessKeyId=40JV0D93WB9U33YNQ0L4&Expires=1530850939&Signature=HyCxD1QmQ5warS41aB%2B8YbPcYfE%3D

When this URL is given to ceph, the result will be:
<Error><Code>SignatureDoesNotMatch</Code><RequestId>tx000000000000000000083-00595dbb60-1092-default</RequestId><HostId>1092-default-default</HostId></Error>

It's possible to munge that URL into something that works with ceph, such as:
http://hybodus.eng.arb.redhat.com/my-new-bucket/services?AWSAccessKeyId=40JV0D93WB9U33YNQ0L4&Expires=1530850939&Signature=HyCxD1QmQ5warS41aB%2B8YbPcYfE%3D
However, there isn't any way to configure s3cmd to cause this to happen; prepending the bucket onto the hostname is hard-wired into s3cmd and happens even if the hostname is an IP address.

By appending "--debug" to the s3cmd command line, something like this will appear near the end of the output:
DEBUG: Signing plaintext: u'GET\n\n\n1530851096\n/my-new-bucket/services'

By running radosgw with "debug rgw = 20", it in turn reports:
2017-07-05 23:55:27.368575 7f74102f2700 15 string_to_sign=GET

1530849206
/services

According to http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html,
when doing "virtual hosted style" access, the path in the string-to-sign has to be prefixed with the bucket name. Clearly, ceph is not doing this.
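
As a cross-check, the v2 signature can be recomputed by hand. A minimal sketch in shell (the secret key is a hypothetical placeholder; the method, base64(HMAC-SHA1(secret, string_to_sign)), is the documented v2 scheme, and the other values come from the s3cmd output above):

secret='YOUR_SECRET_KEY'   # hypothetical placeholder for the account's secret key
# the string s3cmd signs: VERB, empty Content-MD5, empty Content-Type, Expires, resource
string_to_sign="$(printf 'GET\n\n\n1530850939\n/my-new-bucket/services')"
printf '%s' "$string_to_sign" | openssl dgst -sha1 -hmac "$secret" -binary | base64

Since radosgw rebuilds the resource as /services rather than /my-new-bucket/services, the two sides HMAC different strings, so the signatures can never match.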

#1

Updated by Marcus Watts almost 7 years ago

I should mention: awscli does v4 signatures by default, but when properly coaxed (by giving it a specially prepared "endpoints.json" file), it can be coerced into doing v2 signatures. The URL it emits is a faulty host-based URL ("bucket.s3.amazonaws.com"), but the signature is formed over /bucket/object, matching the behavior of s3cmd. And just as in the s3cmd case, the signature works with radosgw when the URL is munged into a valid path-style URL.

Giving awscli a differently prepared "endpoints.json" file (or using --endpoint-url) causes it to generate v4 signatures instead, which work fine with ceph.
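
For reference, a minimal presign invocation with an explicit endpoint looks like the following (hostname and expiry here are illustrative, reusing the endpoint from the description):

aws --endpoint-url http://hybodus.eng.arb.redhat.com s3 presign s3://my-new-bucket/services --expires-in 3600

With a stock config this takes the v4 path, which, as noted, works fine against ceph.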

#2

Updated by Robin Johnson almost 7 years ago

Either you're doing something weird, or it's broken between Jewel & master.

Here is working output on Jewel (not quite 10.2.7; 10.2.6 plus most RGW stuff backported by me before the release). It DOES work to generate vhost calling-format URLs, and you can manually re-arrange them to get working path-format access.

$ .../s3cmd -c $MYCFG signurl s3://robjoh84-congress-private-test/testfile-private `date -d 'now + 1 year' +%s`
http://robjoh84-congress-private-test.objects-us-west-1.dream.io/testfile-private?AWSAccessKeyId=TUqXKJSgG_c8wRL4222l&Expires=1530857403&Signature=lPU7VxM9hWVAlAUDSbIQDPlo2e8%3D

Feel free to check it.

And manually re-arrange it:
http://objects-us-west-1.dream.io/robjoh84-congress-private-test/testfile-private?AWSAccessKeyId=TUqXKJSgG_c8wRL4222l&Expires=1530857403&Signature=lPU7VxM9hWVAlAUDSbIQDPlo2e8%3D

Running s3cmd v1.6.1-2-gf0340a6, which is 1.6.1 with a minor packaging fix.

#3

Updated by Matt Benjamin almost 7 years ago

@mwatts, can you update with info from last night?

Matt

#4

Updated by Matt Benjamin almost 7 years ago

  • Status changed from New to In Progress
  • Assignee set to Marcus Watts
#5

Updated by Marcus Watts almost 7 years ago

I checked: jewel has the same behavior.

I don't think Robin is describing different behavior than what I saw. We agree: it works if you manually rearrange the URL to force path style access. So the only question is whether that behavior is a bug or a feature.

#6

Updated by Robin Johnson almost 7 years ago

In my case, v2 worked in both calling formats.

#7

Updated by Robin Johnson almost 7 years ago

@mwatts I can't get it to fail in my production setups; maybe your setup had a weird rgw dns name or zone config?

#8

Updated by William Schroeder almost 7 years ago

Hello!

My team is trying to update from Hammer to Jewel, and we believe we have run into this issue. We are testing with s3curl (https://github.com/rtdp/s3curl/blob/master/s3curl.pl), which uses the V2 signature. The command we use (apologies for the vagueness here):

s3curl --id the_test_user_profile -- -s http://region_url_mapped_to_rgw:7480/mytestbucket/sometest.file

This results in a 403. I turned on "debug rgw = 20" and confirmed that "calculated digest" and "auth_sign" are different values. My not-yet-upgraded RGW servers still respond successfully with my object. Note that to call them, I modify my /etc/hosts file so that "region_url_mapped_to_rgw" points to their respective IPs.
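
For anyone reproducing this without s3curl, here is a hand-rolled sketch of the same v2 header-auth request (the access key and secret are hypothetical placeholders; s3curl performs the same HMAC-SHA1 over the string-to-sign):

access_key='THE_ACCESS_KEY'; secret='THE_SECRET_KEY'   # hypothetical placeholders
resource='/mytestbucket/sometest.file'
date_hdr="$(date -u -R)"
# v2 string-to-sign: VERB, Content-MD5, Content-Type, Date, CanonicalizedResource
string_to_sign="$(printf 'GET\n\n\n%s\n%s' "$date_hdr" "$resource")"
sig="$(printf '%s' "$string_to_sign" | openssl dgst -sha1 -hmac "$secret" -binary | base64)"
curl -s -H "Date: $date_hdr" -H "Authorization: AWS $access_key:$sig" \
    "http://region_url_mapped_to_rgw:7480${resource}"

A correctly signed request should return the object; on the affected Jewel setup it returns the same 403.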

How far along are you on figuring out where the algorithm changed?

#9

Updated by William Schroeder almost 7 years ago

I was able to reproduce this by running s3curl against demo Docker containers.

  • ceph/demo:tag-build-master-hammer-ubuntu-14.04 returns successfully
  • ceph/demo:tag-build-master-jewel-ubuntu-14.04 returns the 403

$ cat test_ceph.sh
#!/bin/bash

version=$1

if [ "$version" == "" ]; then
    echo "Specify hammer or jewel as a parameter" 
    exit 1
fi

CEPH_IMAGE="ceph/demo:tag-build-master-${version}-ubuntu-14.04" 

docker rm -f ceph
docker run -p 7280:8080 -d -e RGW_CIVETWEB_PORT=8080 -e MON_IP=127.0.0.1 -e CEPH_PUBLIC_NETWORK=172.18.0.0/24 --name ceph ${CEPH_IMAGE}
docker exec ceph radosgw-admin user create --uid=uat_admin --display-name="Ceph user for testing" --access-key=CEPHUSER --secret=thisisasecret --access=full --system
$ cat ~/.s3curl
@endpoints = (
  'localhost',
);

%awsSecretAccessKeys = (
    sigtester => {
      id => 'CEPHUSER',
      key => 'thisisasecret',
    },
);
$ ./test_ceph.sh hammer
...

$ s3curl --id sigtester http://localhost:7280
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>uat_admin</ID><DisplayName>Ceph user for testing</DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
$ ./test_ceph.sh jewel
...

$ s3curl --id sigtester http://localhost:7280
<?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><RequestId>tx000000000000000000001-0059764eee-100d-default</RequestId><HostId>100d-default-default</HostId></Error>
#10

Updated by William Schroeder almost 7 years ago

Discovered that Hammer's auth_hdr debug output ends with:

/mytestbucket/sometest.file

But in Jewel, it ends with:

/region_url_mapped_to_rgw/mytestbucket/sometest.file

This is why the signatures do not match. Jewel incorrectly includes the host, probably assuming it is the bucket? (Reading above suggests folks already knew that.)

#11

Updated by William Schroeder over 6 years ago

We can ignore my comments here. The failures were caused by an incorrect configuration of "rgw dns name" in /etc/ceph/ceph.conf. RGW uses that setting to parse the Host header and distinguish the bucket name from the hostname.
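
For anyone else who hits this, the relevant setting is sketched below (the section name depends on your deployment; the value must be the hostname clients actually use):

[client.rgw.gateway]
rgw dns name = region_url_mapped_to_rgw

When "rgw dns name" matches the suffix of the request's Host header, RGW strips it and treats any remaining leading component as the bucket; when it doesn't match, the whole host ends up prepended to the canonicalized resource, which produces exactly the signature mismatch shown above.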

#12

Updated by Casey Bodley over 2 years ago

  • Status changed from In Progress to Can't reproduce