Bug #53003
Performance regression on rgw/s3 copy operation
Status:
closed
% Done:
100%
Tags:
copy
Backport:
pacific
Regression:
Yes
Severity:
3 - minor
Description
reported on dev@ceph.io by Tim Foerster:

We noticed a massive latency increase on object copy since the Pacific release. Before Pacific, the copy operation always finished in less than a second. The reproducer is quite simple:

```
s3cmd mb s3://moo
truncate -s 10G moo.img
s3cmd put moo.img s3://moo/hui --multipart-chunk-size-mb=5000

# - expect the time to be less than a second (at least for our env)
# - there is a huge gap between latest and latest-octopus
time s3cmd modify s3://moo/hui --add-header=x-amz-meta-foo3:Bar
```

We followed the developer instructions to spin up a cluster and bisected to the following commit: https://github.com/ceph/ceph/commit/99f7c4aa1286edfea6961b92bb44bb8fe22bd599

I'm not familiar enough with the code to identify the cause from this commit alone, so it looks more as if the issue was introduced earlier and is only being exercised after the wide refactoring.
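For context on why `s3cmd modify` exercises the copy path at all: updating object metadata in the S3 API is done with a server-side CopyObject of the object onto itself, with the metadata directive set to REPLACE. A minimal sketch of the request involved, using the bucket and key from the reproducer (no network call is made here; this only assembles the request pieces for illustration):

```python
# Sketch of the S3 CopyObject request behind `s3cmd modify`: the object is
# "copied" onto itself server-side with replaced metadata. Header names are
# from the S3 API; bucket/key match the reproducer above.
method = "PUT"
path = "/moo/hui"  # target bucket/key
headers = {
    # copy source is the same object...
    "x-amz-copy-source": "/moo/hui",
    # ...but replace its metadata instead of copying it
    "x-amz-metadata-directive": "REPLACE",
    "x-amz-meta-foo3": "Bar",
}
print(method, path, headers["x-amz-metadata-directive"])
```

This is why a pure metadata change on a 10 GB object is sensitive to the copy path's performance: the regression hits any operation that RGW implements as an internal copy, not just explicit copies.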