Backport #19736

Updated by Nathan Cutler almost 7 years ago

https://github.com/ceph/ceph/pull/14776 fast_forward_request does not properly consume unread input when chunked transfer encoding is used. This causes a Hadoop s3a test (TestS3AFastOutputStream) to fail because, for some reason, it initiates a multipart upload using chunked encoding. Since the initiate operation has no input, the REST code does not consume the terminating "0" chunk that indicated there were in fact no chunks, so the "0" is instead seen as a separate, invalid request.
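For context: with Transfer-Encoding: chunked, even an empty body is not zero bytes on the wire; it is the terminating zero-length chunk "0\r\n\r\n", which the server must still read. Below is a minimal sketch (plain Python, illustrative names only, not radosgw or civetweb code) of consuming a chunked body, showing how a leftover "0" would otherwise be misread as the start of the next request:

```python
import io

def consume_chunked_body(stream):
    """Read and discard a chunked-encoded body, including the
    terminating zero-length chunk and its final CRLF."""
    while True:
        # Each chunk starts with its size in hex (optionally followed
        # by extensions after ';'), then CRLF.
        size_line = stream.readline()
        size = int(size_line.split(b";")[0].strip(), 16)
        if size == 0:
            # Zero-length chunk marks end of body; consume the final
            # CRLF (optional trailers are ignored here for brevity).
            stream.readline()
            return
        stream.read(size)   # chunk data
        stream.readline()   # CRLF after chunk data

# An "empty" chunked body is b"0\r\n\r\n". If the server fails to
# consume it, the "0" is later parsed as a separate request line.
wire = io.BytesIO(b"0\r\n\r\nGET / HTTP/1.1\r\n")
consume_chunked_body(wire)
print(wire.readline())  # next request line is now correctly framed
```

This is roughly the bookkeeping the fix in the linked pull request has to get right: fast-forwarding past an unread body must include the terminator, not just the data chunks.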

I'm including a file here, "s3a-1.st", a small piece of strace output from the test program showing it issuing the POST and what it gets back from radosgw as a result.

This only affects jewel; kraken and master have a newer version of civetweb that does not have this problem.
