Bug #8766

multipart minimum size error should be EntityTooSmall

Added by Josh Durgin over 3 years ago. Updated about 3 years ago.

Status:
Resolved
Priority:
High
Assignee:
Target version:
-
Start date:
07/07/2014
Due date:
% Done:

80%

Source:
Community (user)
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Release:
Needs Doc:
No

Description

This is already in the code as ERR_TOO_SMALL, but on firefly it is reported in a way that suppresses the response entirely instead of sending the error: http://thread.gmane.org/gmane.comp.file-systems.ceph.user/11367/focus=11433

There should be an s3-tests case for this too.
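For context, the S3 API requires every part of a multipart upload except the last to be at least 5 MiB, and violating that should yield EntityTooSmall at complete time. A minimal sketch of that check (a hypothetical helper for illustration, not radosgw's actual code):

```python
MIN_PART_SIZE = 5 * 1024 * 1024  # 5 MiB, the S3 multipart minimum part size


def check_part_sizes(part_sizes):
    """Return the error code the gateway should send, or None if the
    sizes are acceptable. Only the last part may be under the minimum.
    (Hypothetical helper, not radosgw's implementation.)"""
    for size in part_sizes[:-1]:
        if size < MIN_PART_SIZE:
            return 'EntityTooSmall'
    return None


# 10 KiB parts (as in the test below) should be rejected;
# a single small part is allowed.
print(check_part_sizes([10 * 1024, 10 * 1024]))  # EntityTooSmall
print(check_part_sizes([100]))                   # None
```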

History

#1 Updated by Sage Weil over 3 years ago

  • Status changed from New to Verified
  • Priority changed from Normal to High

#2 Updated by Ian Colle about 3 years ago

  • Assignee set to Luis Pabon

#3 Updated by Luis Pabon about 3 years ago

Starting to look into this...

#4 Updated by Luis Pabon about 3 years ago

  • % Done changed from 0 to 80

I have added a test to s3-tests to check for EntityTooSmall, and it passes on the current code. According to AWS, an S3 server can return HTTP status 400 with the error code set to EntityTooSmall, which is what the current RadosGW does.

Please let me know if I am mistaken, but it seems that the code returns the correct information. I will add the new test to s3-tests.

For your information, here is the patch I am going to send to s3-tests:

diff --git a/s3tests/functional/test_s3.py b/s3tests/functional/test_s3.py
index 44db1f1..31a0f0d 100644
--- a/s3tests/functional/test_s3.py
+++ b/s3tests/functional/test_s3.py
@@ -4113,14 +4113,12 @@ def transfer_part(bucket, mp_id, mp_keyname, i, part):
     part_out = StringIO(part)
     mp.upload_part_from_file(part_out, i+1)

-def generate_random(size):
+def generate_random(size, part_size=5*1024*1024):
     """ 
-    Generate the specified number of megabytes of random data.
+    Generate the specified amount of random data.
     (actually each MB is a repetition of the first KB)
     """ 
-    mb = 1024 * 1024
     chunk = 1024
-    part_size = 5 * mb
     allowed = string.ascii_letters
     for x in range(0, size, part_size):
         strpart = ''.join([allowed[random.randint(0, len(allowed) - 1)] for _ in xrange(chunk)])
@@ -4133,14 +4131,14 @@ def generate_random(size):
         if (x == size):
             return

-def _multipart_upload(bucket, s3_key_name, size, do_list=None, headers=None, metadata=None):
+def _multipart_upload(bucket, s3_key_name, size, part_size=5*1024*1024, do_list=None, headers=None, metadata=None):
     """ 
     generate a multi-part upload for a random file of specifed size,
     if requested, generate a list of the parts
     return the upload descriptor
     """ 
     upload = bucket.initiate_multipart_upload(s3_key_name, headers=headers, metadata=metadata)
-    for i, part in enumerate(generate_random(size)):
+    for i, part in enumerate(generate_random(size, part_size)):
         transfer_part(bucket, upload.id, upload.key_name, i, part)

     if do_list is not None:
@@ -4196,6 +4194,19 @@ def test_multipart_upload_multiple_sizes():

 @attr(resource='object')
 @attr(method='put')
+@attr(operation='check failure on multi-part upload with part size too small')
+@attr(assertion='fails 400')
+def test_multipart_upload_size_too_small():
+    bucket = get_new_bucket()
+    key="mymultipart" 
+    upload = _multipart_upload(bucket, key, 100 * 1024, part_size=10*1024)
+    e = assert_raises(boto.exception.S3ResponseError, upload.complete_upload)
+    eq(e.status, 400)
+    eq(e.error_code, u'EntityTooSmall')
+
+@attr(resource='object')
+@attr(method='put')
 @attr(operation='check contents of multi-part upload')
 @attr(assertion='successful')
 def test_multipart_upload_contents():

#5 Updated by Yehuda Sadeh about 3 years ago

Maybe the problem is that we don't send the XML body with the appropriate error?

#6 Updated by Luis Pabon about 3 years ago

Here is the response from the gateway:

======================================================================
ERROR: s3tests.functional.test_s3.test_multipart_upload_size_too_small
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/lpabon/git/ceph/s3-tests/virtualenv/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/lpabon/git/ceph/s3-tests/s3tests/functional/test_s3.py", line 4203, in test_multipart_upload_size_too_small
    upload.complete_upload()
  File "/home/lpabon/git/ceph/s3-tests/virtualenv/lib/python2.7/site-packages/boto/s3/multipart.py", line 319, in complete_upload
    self.id, xml)
  File "/home/lpabon/git/ceph/s3-tests/virtualenv/lib/python2.7/site-packages/boto/s3/bucket.py", line 1806, in complete_multipart_upload
    response.status, response.reason, body)
S3ResponseError: S3ResponseError: 400 Bad Request
<?xml version="1.0" encoding="UTF-8"?><Error><Code>EntityTooSmall</Code></Error>

According to http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html, the error above seems to be fine. It can also be parsed correctly by boto.
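For what it's worth, the body in the traceback above also parses cleanly with the standard library, independent of boto (a quick sanity check, not part of the test suite):

```python
import xml.etree.ElementTree as ET

# The exact error body returned by the gateway in the traceback above.
# (Bytes, since the body carries an XML encoding declaration.)
body = (b'<?xml version="1.0" encoding="UTF-8"?>'
        b'<Error><Code>EntityTooSmall</Code></Error>')

root = ET.fromstring(body)
print(root.tag)               # Error
print(root.findtext('Code'))  # EntityTooSmall
```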

#7 Updated by Josh Durgin about 3 years ago

The issue was reported on firefly - does it have the same behavior as master, or is there something that should be backported?

#8 Updated by Luis Pabon about 3 years ago

Josh Durgin wrote:

The issue was reported on firefly - does it have the same behavior as master, or is there something that should be backported?

Good question, I'll take a look.

#9 Updated by Luis Pabon about 3 years ago

I have submitted the following patches:

Update s3-tests with the new small size multipart tests:
https://github.com/ceph/s3-tests/pull/21

Update vstart.sh to set up the users for s3-tests in the ceph master branch:
https://github.com/ceph/ceph/pull/2712

#10 Updated by Yehuda Sadeh about 3 years ago

  • Status changed from Verified to Resolved

Tested on firefly, seems to work.
