Bug #53003


Performance regression on rgw/s3 copy operation

Added by Casey Bodley over 2 years ago. Updated over 1 year ago.

Status:
Resolved
Priority:
High
Assignee:
Target version:
-
% Done:

100%

Source:
Tags:
copy
Backport:
pacific
Regression:
Yes
Severity:
3 - minor
Reviewed:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

reported by Tim Foerster:

We noticed a massive latency increase on object copy since the Pacific release. Prior to Pacific, the copy operation always finished in less than a second. The reproducer is quite simple:

```
s3cmd mb s3://moo
truncate -s 10G moo.img
s3cmd put moo.img s3://moo/hui --multipart-chunk-size-mb=5000
time s3cmd modify s3://moo/hui --add-header=x-amz-meta-foo3:Bar
```

1. We expect the time to be less than a second (at least for our env).
2. There is a huge gap between latest and latest-octopus.
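The sub-second expectation above can be checked programmatically. Below is a minimal sketch of such a latency check; the `time_call` helper and the stand-in `fake_modify` function are illustrative assumptions, not part of the report — in a real check you would shell out to `s3cmd modify ...` (or issue the equivalent S3 copy request) instead of the stub.

```python
import time

def time_call(fn, *args, **kwargs):
    """Run fn and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Illustrative stand-in for the `s3cmd modify` call; replace with a real
# call against the RGW endpoint to reproduce the reported measurement.
def fake_modify():
    time.sleep(0.01)
    return "ok"

result, elapsed = time_call(fake_modify)
# Prior to Pacific the reporter saw the modify finish well under a second.
assert elapsed < 1.0, f"modify took {elapsed:.2f}s - possible regression"
print(f"modify finished in {elapsed:.3f}s")
```

Wiring the real `s3cmd modify` command into `time_call` (e.g. via `subprocess.run`) gives a quick pass/fail signal when bisecting between octopus and pacific builds.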

We followed the developer instructions to spin up a cluster and bisected the regression to the following commit:

https://github.com/ceph/ceph/commit/99f7c4aa1286edfea6961b92bb44bb8fe22bd599

I'm not familiar enough with the code to easily identify the cause from this commit, so it looks like the issue was introduced earlier and is only being exercised after the wide refactoring.


Related issues 1 (0 open, 1 closed)

Copied to rgw - Backport #53145: pacific: Performance regression on rgw/s3 copy operation (Resolved)
