Bug #49692

open

Decompression of object file which is larger than 15M

Added by yunkai zhou about 3 years ago. Updated over 2 years ago.

Status:
Need More Info
Priority:
Normal
Assignee:
-
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
rgw
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

When using s3cmd to get a file larger than 15M, the download fails with "ERROR: S3 error: 403 (AccessDenied)".
In this case the upload is split into a multipart object, but the code does not save the value of compressor_message, so the object cannot be decompressed on download unless compressor_message is set to -15 as the default.
This bug results from the compressor_message patch, which was merged in version 16.1.0.
(Below, we use the s3cmd configuration defaults multipart_chunk_size_mb=15 and enable_multipart=true.)
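The failure mode can be illustrated with zlib alone: data compressed as raw deflate (windowBits = -15, the value carried in compressor_message) cannot be decompressed with zlib's default header handling. A minimal Python sketch, illustrative only and not rgw code:

```python
import zlib

payload = b"example object data " * 1024

# Compress as raw deflate (windowBits = -15), i.e. no zlib header/trailer.
co = zlib.compressobj(9, zlib.DEFLATED, -15)
raw = co.compress(payload) + co.flush()

# Decompressing with the default assumption of a zlib header fails:
try:
    zlib.decompress(raw)
except zlib.error as e:
    print("default decompress failed:", e)

# Supplying the recorded window-bits value (-15) succeeds:
assert zlib.decompress(raw, -15) == payload
```

This is why the -15 must be recorded at upload time (or assumed as a default): without it, the reader cannot tell raw deflate from header-framed zlib data.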

Actions #1

Updated by yunkai zhou about 3 years ago

yunkai zhou wrote:
Footnote:
My software environment is as follows:
the ceph version is 14.2.11, downloaded from https://download.ceph.com/tarballs/ceph-14.2.11.tar.gz; the object compression mode is zlib; the file named xml is larger than 15M.

Then, after building the source:
run "MON=3 OSD=1 MDS=1 RGW=1 ../src/vstart.sh -n -d --rgw_compression_zlib" to start a cluster,
run "s3cmd mb s3://test" to create a bucket,
run "s3cmd put xml s3://test" to upload xml into the bucket,
run "s3cmd get s3://test/xml xml" to download xml from the bucket,
and the error occurs at this last step.

Actions #2

Updated by Greg Farnum almost 3 years ago

  • Project changed from Ceph to rgw
Actions #3

Updated by Casey Bodley almost 3 years ago

It looks like these changes came in https://github.com/ceph/ceph/pull/34852. It doesn't look like anyone reviewed or tested them for rgw.

Actions #4

Updated by Casey Bodley over 2 years ago

  • Status changed from New to Need More Info

yunkai zhou wrote:

When using s3cmd to get a file larger than 15M, the download fails with "ERROR: S3 error: 403 (AccessDenied)".
In this case the upload is split into a multipart object, but the code does not save the value of compressor_message, so the object cannot be decompressed on download unless compressor_message is set to -15 as the default.

I'm not seeing any issue with https://github.com/ceph/ceph/pull/34852, which added the compressor_message. It does store the compressor_message field within the object's RGWCompressionInfo, and uses that stored value to decompress the object.
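The behavior described above can be sketched with a minimal Python analogue: record the window-bits value alongside the compressed blob and reuse it on decompression. The CompressionInfo class below is a simplified, hypothetical stand-in for rgw's RGWCompressionInfo, not the actual C++ structure:

```python
import zlib
from dataclasses import dataclass

@dataclass
class CompressionInfo:
    # Illustrative analogue of rgw's RGWCompressionInfo; field names
    # are simplified for this sketch, not the real C++ definition.
    compression_type: str
    orig_size: int
    compressor_message: int  # e.g. -15 for raw-deflate window bits

def compress(data: bytes):
    # Compress as raw deflate and record -15 in the metadata.
    co = zlib.compressobj(9, zlib.DEFLATED, -15)
    blob = co.compress(data) + co.flush()
    return blob, CompressionInfo("zlib", len(data), -15)

def decompress(blob: bytes, info: CompressionInfo) -> bytes:
    # Use the stored compressor_message rather than assuming a default.
    return zlib.decompress(blob, info.compressor_message)

data = b"x" * (16 << 10)
blob, info = compress(data)
assert decompress(blob, info) == data
```

If the compressor_message were dropped somewhere on the multipart upload path, decompress() here would have to guess the window bits, which is the failure the reporter describes.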
