Bug #22731

AWS V4 signature issue on Jewel

Added by Yuan Zhou about 6 years ago. Updated about 6 years ago.

Status:
Duplicate
Priority:
Normal
Assignee:
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Multipart upload fails on Jewel with aws-java-sdk-bundle-1.11.199.jar and a v4 signature.

The log from the client:

18/01/18 14:49:33 INFO mapreduce.Job: Task Id : attempt_1516255624131_0003_m_000017_0, Status : FAILED
Error: java.nio.file.AccessDeniedException: benchmarks/TestDFSIO/io_data/test_io_117: Multi-part upload with id '2~3ysCY3O6rmNu9bjekXpYfyar4DDt6K3' to benchmarks/TestDFSIO/io_data/test_io_117 on benchmarks/TestDFSIO/io_data/test_io_117: com.amazonaws.services.s3.model.AmazonS3Exception: null (Service: Amazon S3; Status Code: 403; Error Code: SignatureDoesNotMatch; Request ID: tx000000000000000752570-005a6042fa-1598-default; S3 Extended Request ID: 1598-default-default), S3 Extended Request ID: 1598-default-default
              at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:174)
              at org.apache.hadoop.fs.s3a.S3AUtils.extractException(S3AUtils.java:216)
              at org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.waitForAllPartUploads(S3ABlockOutputStream.java:549)
              at org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.access$100(S3ABlockOutputStream.java:465)
              at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:355)
              at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
              at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
              at org.apache.hadoop.fs.IOMapperBase.map(IOMapperBase.java:136)
              at org.apache.hadoop.fs.IOMapperBase.map(IOMapperBase.java:37)
              at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
              at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:459)
              at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
              at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:177)
              at java.security.AccessController.doPrivileged(Native Method)
              at javax.security.auth.Subject.doAs(Subject.java:422)
              at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
              at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:171)
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: null (Service: Amazon S3; Status Code: 403; Error Code: SignatureDoesNotMatch; Request ID: tx000000000000000752570-005a6042fa-1598-default; S3 Extended Request ID: 1598-default-default), S3 Extended Request ID: 1598-default-default
              at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1638)
              at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1303)
              at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1055)
              at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
              at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
              at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
              at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
              at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
              at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
              at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4229)
              at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4176)
              at com.amazonaws.services.s3.AmazonS3Client.doUploadPart(AmazonS3Client.java:3213)
              at com.amazonaws.services.s3.AmazonS3Client.uploadPart(AmazonS3Client.java:3198)
              at org.apache.hadoop.fs.s3a.S3AFileSystem.uploadPart(S3AFileSystem.java:1301)
              at org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload$1.call(S3ABlockOutputStream.java:512)
              at org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload$1.call(S3ABlockOutputStream.java:503)
              at org.apache.hadoop.fs.s3a.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:222)
              at org.apache.hadoop.fs.s3a.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:222)
              at java.util.concurrent.FutureTask.run(FutureTask.java:266)
              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
              at java.lang.Thread.run(Thread.java:748)

More logs from RGW will be posted soon.


Related issues

Duplicates rgw - Backport #20824: jewel: rgw: AWSv4 encoding/signature problem, can happen with listobjects marker. Resolved

History

#1 Updated by Matt Benjamin about 6 years ago

  • Status changed from New to Need More Info
  • Assignee set to Matt Benjamin

Hi Yuan,

Could you send a snippet of the radosgw log at --debug-rgw=20 --debug-ms=1?

Matt

#2 Updated by Matt Benjamin about 6 years ago

I think this may be an issue fixed in more recent Jewel; could you perhaps check?

Matt

#3 Updated by Yuan Zhou about 6 years ago

Matt Benjamin wrote:

I think this may be an issue fixed in more recent Jewel; could you perhaps check?

Matt

Hi Matt, thanks for taking a look. Sure, I can provide more logs from RGW. I'm using the latest Jewel (v10.2.10), I think; do you mean a newer one on this branch? https://github.com/ceph/ceph/commits/jewel

#4 Updated by Matt Benjamin about 6 years ago

Hi Yuan,

I've been trying to reproduce against master and am not seeing any error there:

1. using the desired aws-java-sdk-bundle-1.11.199.jar (from hadoop-290)
2. using the appended Java program (basically the AWS low-level API example)

Note that there is a possible similarity to https://tracker.ceph.com/issues/22352.

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.CreateBucketRequest;
import com.amazonaws.services.s3.model.PutObjectRequest;

import com.amazonaws.ClientConfiguration;
import com.amazonaws.Protocol;
import com.amazonaws.services.s3.S3ClientOptions;

import com.amazonaws.services.s3.model.AbortMultipartUploadRequest;
import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadResult;
import com.amazonaws.services.s3.model.PartETag;
import com.amazonaws.services.s3.model.UploadPartRequest;

public class UploadObjectMultipart {

    private static String bucketName     = "buckfoo";
    private static String keyName        = "my-key-name-1111";
    private static String uploadFileName = "/home/mbenjamin/dev/rgw/javer/bigfile";
    private static String accessKey      = "SANITIZE";
    private static String secretKey      = "SANITIZE";

    public static void main(String[] args) {
        ClientConfiguration clientConfig = new ClientConfiguration();
        clientConfig.setProtocol(Protocol.HTTP);
        //AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider(), clientConfig);
        AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
        AmazonS3 s3client = new AmazonS3Client(credentials);
        // TODO: can I set an explicit port?
        s3client.setEndpoint("http://lemon.eng.arb.redhat.com");
        s3client.setS3ClientOptions(new S3ClientOptions().withPathStyleAccess(true));

        /* create bucket iff(!exist) */
        try {
            if (!(s3client.doesBucketExist(bucketName))) {
                s3client.createBucket(new CreateBucketRequest(bucketName));
            }
        } catch (AmazonServiceException ase) {
            System.out.println("AmazonServiceException in createBucket");
            System.out.println("Error Message: " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code: " + ase.getErrorCode());
            System.out.println("Error Type: " + ase.getErrorType());
            System.out.println("Request ID: " + ase.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println("AmazonClientException (e.g., network error)");
            System.out.println("Error Message: " + ace.getMessage());
        }

        /* multipart upload */
        try {
            System.out.println("init multipart upload\n");
            List<PartETag> partETags = new ArrayList<PartETag>();

            // Step 1: Initialize.
            InitiateMultipartUploadRequest initRequest =
                new InitiateMultipartUploadRequest(bucketName, keyName);
            InitiateMultipartUploadResult initResponse =
                s3client.initiateMultipartUpload(initRequest);

            File file = new File(uploadFileName);
            long contentLength = file.length();
            long partSize = 5242880; // Set part size to 5 MB.

            // Step 2: Upload parts.
            long filePosition = 0;
            for (int i = 1; filePosition < contentLength; i++) {
                // Last part can be less than 5 MB. Adjust part size.
                partSize = Math.min(partSize, (contentLength - filePosition));
                // Create request to upload a part.
                UploadPartRequest uploadRequest = new UploadPartRequest()
                    .withBucketName(bucketName).withKey(keyName)
                    .withUploadId(initResponse.getUploadId()).withPartNumber(i)
                    .withFileOffset(filePosition)
                    .withFile(file)
                    .withPartSize(partSize);
                // Upload part and add response to our list.
                partETags.add(s3client.uploadPart(uploadRequest).getPartETag());
                filePosition += partSize;
            }

            // Step 3: Complete.
            CompleteMultipartUploadRequest compRequest =
                new CompleteMultipartUploadRequest(
                    bucketName, keyName, initResponse.getUploadId(), partETags);
            s3client.completeMultipartUpload(compRequest);
        } catch (AmazonServiceException ase) {
            System.out.println("Caught an AmazonServiceException, which " +
                "means your request made it to Amazon S3, but was " +
                "rejected with an error response for some reason.");
            System.out.println("Error Message: " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code: " + ase.getErrorCode());
            System.out.println("Error Type: " + ase.getErrorType());
            System.out.println("Request ID: " + ase.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException, which " +
                "means the client encountered an internal error while " +
                "trying to communicate with S3, such as not being able " +
                "to access the network.");
            System.out.println("Error Message: " + ace.getMessage());
        }
    } /* main */
}

On master, I don't see any issue, and specifically (relative to issue 22352):
a) the client is sending encoded '~'
2018-01-23 15:16:34.351 7fe031ca8700 20 QUERY_STRING=uploadId=2%7EkZ6bFNmJkprhqdh04tQzRJ3ewcKUtJU&partNumber=2

b) RGW is decoding it
2018-01-23 15:16:34.351 7fe031ca8700 10 payload request hash = STREAMING-AWS4-HMAC-SHA256-PAYLOAD
2018-01-23 15:16:34.351 7fe031ca8700 10 canonical request = PUT
/buckfoo/my-key-name-1111
partNumber=2&uploadId=2~kZ6bFNmJkprhqdh04tQzRJ3ewcKUtJU
amz-sdk-invocation-id:81b360cf-9c1f-2ba0-bbb0-c8f423f5b26e
amz-sdk-retry:0/0/500
content-length:2726072
content-type:application/octet-stream
host:lemon.eng.arb.redhat.com
user-agent:aws-sdk-java/1.11.199 Linux/4.14.11-300.fc27.x86_64 OpenJDK_64-Bit_Server_VM/25.151-b12 java/1.8.0_151
x-amz-content-sha256:STREAMING-AWS4-HMAC-SHA256-PAYLOAD
x-amz-date:20180123T201634Z
x-amz-decoded-content-length:2724096
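
The wire/canonical contrast above is the crux of the mismatch: SigV4's canonical form leaves RFC 3986 unreserved characters, including '~', unencoded, so an uploadId such as `2~...` must appear literally in the canonical query string even when it travels on the wire as `%7E`. A minimal sketch of that uri-encode rule (illustrative only, not RGW's actual implementation):

```java
import java.nio.charset.StandardCharsets;

public class SigV4UriEncode {
    // SigV4's uri-encode rule: unreserved characters (A-Z a-z 0-9 - _ . ~)
    // must NOT be percent-encoded; every other byte becomes %XX.
    // This sketch encodes query values, so '/' is escaped too.
    static String uriEncode(String s) {
        StringBuilder out = new StringBuilder();
        for (byte b : s.getBytes(StandardCharsets.UTF_8)) {
            char c = (char) (b & 0xff);
            if ((c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z')
                    || (c >= '0' && c <= '9')
                    || c == '-' || c == '_' || c == '.' || c == '~') {
                out.append(c);
            } else {
                out.append(String.format("%%%02X", b & 0xff));
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // '~' in the multipart uploadId stays literal in the canonical query:
        System.out.println(uriEncode("2~kZ6bFNmJkprhqdh04tQzRJ3ewcKUtJU"));
        System.out.println(uriEncode("a b/c")); // prints a%20b%2Fc
    }
}
```

If one side canonicalizes to `2~...` and the other to `2%7E...`, the two canonical requests hash differently and the signatures cannot match, which is exactly the 403 SignatureDoesNotMatch seen here.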

#5 Updated by Matt Benjamin about 6 years ago

I have verified identical results (successful multipart uploads from an application using aws-java-sdk-bundle-1.11.199.jar) using our downstream version of the Jewel branch (a non-public Jewel build which I had in place and ready to test).

I'm now setting up a test environment using v10.2.10.

Matt

#6 Updated by Yuan Zhou about 6 years ago

Hi Matt,

I've also got some updates:

Changing the signature type to v2 works around this issue:

  <property>
    <name>fs.s3a.signing-algorithm</name>
    <value>S3SignerType</value>
    <description>Override the default signing algorithm so legacy
    implementations can still be used</description>
  </property>
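
For clients using the AWS SDK directly (like the test program in note #4), the equivalent of the Hadoop property above is the SDK's signer override; a hypothetical fragment (credentials and config names are placeholders, not from this cluster):

```java
// Sketch: force the legacy v2 S3 signer on an aws-java-sdk client.
// "S3SignerType" is the same value fs.s3a.signing-algorithm passes through.
ClientConfiguration clientConfig = new ClientConfiguration();
clientConfig.setProtocol(Protocol.HTTP);
clientConfig.setSignerOverride("S3SignerType"); // v2 instead of AWS4-HMAC-SHA256

AmazonS3 s3client = new AmazonS3Client(
        new BasicAWSCredentials("ACCESS", "SECRET"), clientConfig);
s3client.setS3ClientOptions(new S3ClientOptions().withPathStyleAccess(true));
```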

With v4 auth, here's the RGW log for those 403 requests:

2018-01-24 08:36:35.483619 7f6e48fd9700  1 ====== starting new request req=0x7f6e48fd3710 =====
2018-01-24 08:36:35.483631 7f6e48fd9700  2 req 27066772:0.000012::PUT /bdaas-demo-dfs/benchmarks/TestDFSIO/io_data/test_io_97::initializing for trans_id = tx0000000000000019d0194-005a67d513-1b2a-default
2018-01-24 08:36:35.483636 7f6e48fd9700 10 rgw api priority: s3=5 s3website=4
2018-01-24 08:36:35.483637 7f6e48fd9700 10 host=rgw.intel.com
2018-01-24 08:36:35.483639 7f6e48fd9700 20 subdomain= domain= in_hosted_domain=0 in_hosted_domain_s3website=0
2018-01-24 08:36:35.483640 7f6e48fd9700 20 final domain/bucket subdomain= domain= in_hosted_domain=0 in_hosted_domain_s3website=0 s->info.domain= s->info.request_uri=/bdaas-demo-dfs/benchmarks/TestDFSIO/io_data/test_io_97
2018-01-24 08:36:35.483647 7f6e48fd9700 10 meta>> HTTP_X_AMZ_CONTENT_SHA256
2018-01-24 08:36:35.483651 7f6e48fd9700 10 meta>> HTTP_X_AMZ_DATE
2018-01-24 08:36:35.483653 7f6e48fd9700 10 meta>> HTTP_X_AMZ_DECODED_CONTENT_LENGTH
2018-01-24 08:36:35.483656 7f6e48fd9700 10 x>> x-amz-content-sha256:STREAMING-AWS4-HMAC-SHA256-PAYLOAD
2018-01-24 08:36:35.483656 7f6e48fd9700 10 x>> x-amz-date:20180124T003635Z
2018-01-24 08:36:35.483657 7f6e48fd9700 10 x>> x-amz-decoded-content-length:104857600
2018-01-24 08:36:35.483669 7f6e48fd9700 20 get_handler handler=22RGWHandler_REST_Obj_S3
2018-01-24 08:36:35.483671 7f6e48fd9700 10 handler=22RGWHandler_REST_Obj_S3
2018-01-24 08:36:35.483672 7f6e48fd9700  2 req 27066772:0.000053:s3:PUT /bdaas-demo-dfs/benchmarks/TestDFSIO/io_data/test_io_97::getting op 1
2018-01-24 08:36:35.483675 7f6e48fd9700 10 op=21RGWPutObj_ObjStore_S3
2018-01-24 08:36:35.483676 7f6e48fd9700  2 req 27066772:0.000056:s3:PUT /bdaas-demo-dfs/benchmarks/TestDFSIO/io_data/test_io_97:put_obj:authorizing
2018-01-24 08:36:35.483686 7f6e48fd9700 10 v4 signature format = 2717449c0cc9a3742f12d233e90cfc1df303822ebf29a55a24ae48479935f2c6
2018-01-24 08:36:35.483689 7f6e48fd9700 10 v4 credential format = PM28VPB305FV3XV1UTN7/20180124/us-east-1/s3/aws4_request
2018-01-24 08:36:35.483690 7f6e48fd9700 10 access key id = PM28VPB305FV3XV1UTN7
2018-01-24 08:36:35.483690 7f6e48fd9700 10 credential scope = 20180124/us-east-1/s3/aws4_request
2018-01-24 08:36:35.483738 7f6e48fd9700 10 canonical headers format = amz-sdk-invocation-id:8230276f-798e-1fae-c490-0acd1923ea53
amz-sdk-retry:8/10758/435
content-length:104929686
content-type:application/octet-stream
host:rgw.intel.com:8080
user-agent:Hadoop 2.9.0, aws-sdk-java/1.11.199 Linux/3.10.0-514.21.2.el7.x86_64 Java_HotSpot(TM)_64-Bit_Server_VM/25.131-b11 java/1.8.0_131
x-amz-content-sha256:STREAMING-AWS4-HMAC-SHA256-PAYLOAD
x-amz-date:20180124T003635Z
x-amz-decoded-content-length:104857600

2018-01-24 08:36:35.483742 7f6e48fd9700 10 body content detected in multiple chunks
2018-01-24 08:36:35.483744 7f6e48fd9700 10 payload request hash = STREAMING-AWS4-HMAC-SHA256-PAYLOAD
2018-01-24 08:36:35.483771 7f6e48fd9700 10 canonical request = PUT
/bdaas-demo-dfs/benchmarks/TestDFSIO/io_data/test_io_97
partNumber=1&uploadId=2%7Eq-iUMGDcjEMkCoNNajbmvLtlnD8fGM8
amz-sdk-invocation-id:8230276f-798e-1fae-c490-0acd1923ea53
amz-sdk-retry:8/10758/435
content-length:104929686
content-type:application/octet-stream
host:rgw.intel.com:8080
user-agent:Hadoop 2.9.0, aws-sdk-java/1.11.199 Linux/3.10.0-514.21.2.el7.x86_64 Java_HotSpot(TM)_64-Bit_Server_VM/25.131-b11 java/1.8.0_131
x-amz-content-sha256:STREAMING-AWS4-HMAC-SHA256-PAYLOAD
x-amz-date:20180124T003635Z
x-amz-decoded-content-length:104857600

amz-sdk-invocation-id;amz-sdk-retry;content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length
STREAMING-AWS4-HMAC-SHA256-PAYLOAD
2018-01-24 08:36:35.483773 7f6e48fd9700 10 canonical request hash = 18826bc1c6d331f3d17e4d202b6253fde9f9097bb2b8ced020711980b3b35a88
2018-01-24 08:36:35.483775 7f6e48fd9700 10 string to sign = AWS4-HMAC-SHA256
20180124T003635Z
20180124/us-east-1/s3/aws4_request
18826bc1c6d331f3d17e4d202b6253fde9f9097bb2b8ced020711980b3b35a88
2018-01-24 08:36:35.483801 7f6e48fd9700 10 date_k        = 174790f1d2d432d80a714f64e3842b5e34754e6b56e40212cf327b1df4786df5
2018-01-24 08:36:35.483815 7f6e48fd9700 10 region_k      = 3094fc263b676703e9b46dcc5ac12e2c5c11fbaafac8474e8b00f4a993e17f7f
2018-01-24 08:36:35.483831 7f6e48fd9700 10 service_k     = 495d07f53ee5e0ac2326bc96dff24208090916b3324b406365a2d2fe4f860626
2018-01-24 08:36:35.483846 7f6e48fd9700 10 signing_k     = de546a39f51dc0bffc8f0355befe25280a0cb629cf5e8ac79e01d9ac8f0e5fc3
2018-01-24 08:36:35.483861 7f6e48fd9700 10 signature_k   = 4588cb68aa2ef48b51872a422920e20874b57a4962fc7b25bb57bcc5c95b575a
2018-01-24 08:36:35.483863 7f6e48fd9700 10 new signature = 4588cb68aa2ef48b51872a422920e20874b57a4962fc7b25bb57bcc5c95b575a
2018-01-24 08:36:35.483864 7f6e48fd9700 10 ----------------------------- Verifying signatures
2018-01-24 08:36:35.483864 7f6e48fd9700 10 Signature     = 2717449c0cc9a3742f12d233e90cfc1df303822ebf29a55a24ae48479935f2c6
2018-01-24 08:36:35.483865 7f6e48fd9700 10 New Signature = 4588cb68aa2ef48b51872a422920e20874b57a4962fc7b25bb57bcc5c95b575a
2018-01-24 08:36:35.483865 7f6e48fd9700 10 -----------------------------
2018-01-24 08:36:35.483867 7f6e48fd9700 10 ERROR: AWS4 seed signature does NOT match!
2018-01-24 08:36:35.483869 7f6e48fd9700 10 failed to authorize request
2018-01-24 08:36:35.483870 7f6e48fd9700 20 handler->ERRORHANDLER: err_no=-2027 new_err_no=-2027
2018-01-24 08:36:35.483909 7f6e48fd9700  2 req 27066772:0.000289:s3:PUT /bdaas-demo-dfs/benchmarks/TestDFSIO/io_data/test_io_97:put_obj:op status=0
2018-01-24 08:36:35.483912 7f6e48fd9700  2 req 27066772:0.000293:s3:PUT /bdaas-demo-dfs/benchmarks/TestDFSIO/io_data/test_io_97:put_obj:http status=403
2018-01-24 08:36:35.483915 7f6e48fd9700  1 ====== req done req=0x7f6e48fd3710 op status=0 http_status=403 ======
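
The date_k/region_k/service_k/signing_k lines in the log are the standard SigV4 key-derivation chain (each step an HMAC-SHA256 of the previous key). A minimal stdlib-only sketch, using AWS's published documentation example as input rather than this cluster's credentials:

```java
import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class SigV4KeyDerivation {
    static byte[] hmac(byte[] key, String data) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            return mac.doFinal(data.getBytes(StandardCharsets.UTF_8));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // date_k -> region_k -> service_k -> signing_k, as in the RGW log above.
    static byte[] signingKey(String secret, String date, String region, String service) {
        byte[] dateK    = hmac(("AWS4" + secret).getBytes(StandardCharsets.UTF_8), date);
        byte[] regionK  = hmac(dateK, region);
        byte[] serviceK = hmac(regionK, service);
        return hmac(serviceK, "aws4_request");
    }

    static String hex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) {
        // AWS's documented SigV4 example vector (not real credentials).
        byte[] k = signingKey("wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY",
                              "20150830", "us-east-1", "iam");
        System.out.println(hex(k));
    }
}
```

The final signature is then HMAC(signing_k, string-to-sign), hex-encoded; since both sides derive signing_k from the same secret and scope, the "Signature" / "New Signature" divergence in the log can only come from the string-to-sign, i.e. from the canonical request.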

#7 Updated by Matt Benjamin about 6 years ago

Hi Yuan,

Yes, I've reproduced this issue on v10.2.10, as you did. This is good news: upstream is missing a fix that was already backported to a downstream branch based on Jewel. I'll track it down tomorrow.

Thanks for all your help!

Matt

#8 Updated by Matt Benjamin about 6 years ago

The following backport PR against Jewel resolves the issue for me:

https://github.com/ceph/ceph/pull/20123

#9 Updated by Yuan Zhou about 6 years ago

Matt Benjamin wrote:

The following backport PR against Jewel resolves the issue for me:

https://github.com/ceph/ceph/pull/20123

Thanks for the fix! I'll try it and report back.

#10 Updated by Nathan Cutler about 6 years ago

So this is a duplicate of #20463.

#11 Updated by Matt Benjamin about 6 years ago

I believe so, but I'm waiting for confirmation.

Matt

#12 Updated by Yuan Zhou about 6 years ago

Matt Benjamin wrote:

The following backport PR against Jewel resolves the issue for me:

https://github.com/ceph/ceph/pull/20123

Hi Matt, the patch above does fix the signature mismatch issue. Thanks!

-yuan

#13 Updated by Matt Benjamin about 6 years ago

An older backport PR; one of these will merge shortly:
https://github.com/ceph/ceph/pull/17731

#14 Updated by Matt Benjamin about 6 years ago

  • Status changed from Need More Info to 15

#15 Updated by Nathan Cutler about 6 years ago

  • Duplicates Backport #20824: jewel: rgw: AWSv4 encoding/signature problem, can happen with listobjects marker. added

#16 Updated by Nathan Cutler about 6 years ago

  • Status changed from 15 to Duplicate
