Bug #21552


Putting an 8M multipart object generates a 0-sized tail object

Added by Honggang Yang over 6 years ago. Updated over 2 years ago.

Status:
Resolved
Priority:
Normal
Assignee:
-
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
1 - critical
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description


I put one 8M object through multipart upload.
part_size is 8M and stripe_max_size is 4M.
I expect two 4M rados objects in my data pool.
But there are three: two 4M objects and one 0-sized tail object.

[root@hehe home]# rados -p dpool --cluster xtao ls                                                     
xtao-xtao.15386.3__shadow_bigfilehaha.2~TYdtbkxyIMfEaT-Ch32te9ik2MHBX3O.1_1  <---------<<< 4M                          
xtao-xtao.15386.3__shadow_bigfilehaha.2~TYdtbkxyIMfEaT-Ch32te9ik2MHBX3O.1_2  <---------<<< 0 sized obj                         
xtao-xtao.15386.3__multipart_bigfilehaha.2~TYdtbkxyIMfEaT-Ch32te9ik2MHBX3O.1 <----------<< 4M
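
The extra object is consistent with an off-by-one in the loop that emits tail stripes when the part size is an exact multiple of stripe_max_size. The following is a minimal Python sketch of that hypothesis, not RGW's actual code; the function name, the `buggy` flag, and the loop shape are assumptions made for illustration:

```python
MB = 1024 * 1024

def stripe_sizes(part_size, stripe_max_size, buggy=False):
    """Model splitting one multipart part into rados stripe objects.

    With the suspected off-by-one ('<=' instead of '<' on the offset
    bound), one extra 0-sized stripe is emitted whenever part_size is
    an exact multiple of stripe_max_size.
    """
    sizes = []
    ofs = 0
    while (ofs <= part_size) if buggy else (ofs < part_size):
        sizes.append(min(stripe_max_size, part_size - ofs))
        ofs += stripe_max_size
    return sizes

print(stripe_sizes(8 * MB, 4 * MB))              # [4194304, 4194304]
print(stripe_sizes(8 * MB, 4 * MB, buggy=True))  # [4194304, 4194304, 0]
```

With the correct bound an 8M part yields exactly two 4M stripes; the buggy bound additionally yields the 0-sized tail seen in the listing above.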

reproduce

- generate an 8M local file

$ dd if=/dev/urandom of=./8M bs=1M count=8

- put obj with boto

import math, os
import boto
import boto.s3.connection
from filechunkio import FileChunkIO

from key import *  # local file providing the CONS_AK / CONS_SK credentials

c = boto.connect_s3(
        aws_access_key_id=CONS_AK,
        aws_secret_access_key=CONS_SK,
        host="192.168.1.111",
        port=99,
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
        )

b = c.create_bucket('mybucket')

# Create a multipart upload request
mul_key = 'bigfilehaha'
header = {  
    'x-amz-meta-joseph': 'Yang Honggang'  
}  
mp = b.initiate_multipart_upload(mul_key, headers=header)  

# Local file path
source_path = './8M'
source_size = os.stat(source_path).st_size

chunk_size = 8*1024*1024 
chunk_count = int(math.ceil(source_size / float(chunk_size)))  

part_num = 1 

# Send the file parts 
for i in range(chunk_count):  
    offset = chunk_size * i  
    bytes = min(chunk_size, source_size - offset)  
    with FileChunkIO(source_path, 'r', offset=offset,  
            bytes=bytes) as fp:  
        print "part ", part_num 
        mp.upload_part_from_file(fp, part_num)
        part_num += 1

# Finish the upload  
mp.complete_upload()
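
For reference, with an 8M source file and an 8M chunk_size the script above uploads exactly one part, so the tail object cannot come from an extra part. The chunk arithmetic can be checked in isolation (the sizes below are assumptions matching this reproduction, not values read back from the cluster):

```python
import math

MB = 1024 * 1024
source_size = 8 * MB   # size of the ./8M test file
chunk_size = 8 * MB    # part_size used in the upload script

# Same arithmetic as the upload loop: number of parts and their sizes.
chunk_count = int(math.ceil(source_size / float(chunk_size)))
part_sizes = [min(chunk_size, source_size - chunk_size * i)
              for i in range(chunk_count)]

print(chunk_count, part_sizes)  # 1 [8388608]
```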
#1

Updated by Honggang Yang over 6 years ago

And the 0-sized tail object cannot be deleted by the gc process.

#3

Updated by Honggang Yang over 6 years ago

The latest master branch already fixes this bug. I can only reproduce it on my 12.0.0 rgw.

#4

Updated by Casey Bodley over 2 years ago

  • Status changed from New to Resolved
