Backport #19736

radosgw/s3 chunked transfer encodings and fast_forward_request

Added by Marcus Watts almost 7 years ago. Updated over 6 years ago.

Status:
Resolved
Priority:
Normal
Assignee:
Target version:
Release:
jewel
Crash signature (v1):
Crash signature (v2):

s3a-1.st (1.81 KB) Marcus Watts, 04/21/2017 06:28 AM

History

#1 Updated by Marcus Watts almost 7 years ago

I have a fix for this problem.
https://github.com/ceph/civetweb/pull/18

I've tested this with swiftest, s3tests, and the problematic hadoop-aws test case; it passes all three with no problems.
Note that this is just the PR for civetweb; a companion PR against ceph proper will be needed once this one has been reviewed and committed to the ceph copy of civetweb (and has a stable commit-id).

#2 Updated by Ken Dreyer almost 7 years ago

  • Status changed from New to Fix Under Review
  • Backport set to jewel

#3 Updated by Ken Dreyer almost 7 years ago

  • Target version changed from v10.2.7 to v10.2.8
  • Backport deleted (jewel)

#4 Updated by Orit Wasserman almost 7 years ago

  • Status changed from Fix Under Review to Resolved

#5 Updated by Nathan Cutler over 6 years ago

  • Tracker changed from Bug to Backport
  • Description updated (diff)

Description

fast_forward_request does not properly consume unread input when chunked transfer encoding is used. This causes a hadoop s3a test (TestS3AFastOutputStream) to fail because, for some reason, it initiates a multipart upload using chunked encoding. Since the initiate operation has no input, the REST code never consumes the terminating "0" chunk that indicates there were in fact no chunks, so the 0 is instead parsed as a separate, invalid request.
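
The failure mode is easiest to see at the wire level. Below is a minimal sketch, in C since civetweb is C, of the kind of body-draining logic the fix needs; drain_chunked_body, read_line, and read_bytes are hypothetical names for illustration, not the actual civetweb patch:

    /* A minimal sketch (not the actual civetweb patch) of draining a
     * chunked request body so the terminating "0" chunk is consumed
     * before the connection is reused. On the wire an empty chunked
     * body is just:
     *
     *     0\r\n
     *     \r\n
     *
     * If the server never reads those bytes, the "0" line is later
     * parsed as the start of a new, invalid request, which is exactly
     * the failure shown in the attached strace. read_line()/read_bytes()
     * are hypothetical stand-ins for the connection I/O primitives. */
    #include <stdlib.h>

    extern int read_line(void *conn, char *buf, size_t len);   /* one CRLF-terminated line */
    extern int read_bytes(void *conn, char *buf, size_t len);  /* raw bytes */

    /* Consume all remaining chunks, including the final zero-size
     * chunk and its trailing CRLF, discarding the data. */
    static int drain_chunked_body(void *conn)
    {
        char line[128];
        char scratch[512];

        for (;;) {
            if (read_line(conn, line, sizeof(line)) <= 0)
                return -1;
            long size = strtol(line, NULL, 16);       /* chunk-size is hex */
            if (size == 0) {
                /* last-chunk: consume the final empty line (trailers
                 * ignored in this sketch) and stop. */
                read_line(conn, line, sizeof(line));
                return 0;
            }
            while (size > 0) {                        /* discard chunk data */
                size_t want = (size_t)size < sizeof(scratch)
                                  ? (size_t)size : sizeof(scratch);
                int got = read_bytes(conn, scratch, want);
                if (got <= 0)
                    return -1;
                size -= got;
            }
            read_line(conn, line, sizeof(line));      /* CRLF after chunk data */
        }
    }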

I'm attaching a file, "s3a-1.st", a small piece of strace output from the test program showing the POST it issues and what it gets back from radosgw as a result.

This only affects jewel; kraken and master have a newer version of civetweb that does not have this problem.
