Feature #11326

rgw: make MultiDelete faster/parallelized

Added by renato cron over 8 years ago. Updated over 4 years ago.

Target version:
% Done:


Community (user)
Affected Versions:
Pull request ID:


I'm using DreamObjects (which uses Ceph as its filesystem), and its MultiDelete is very slow compared to AWS. Ceph is doing about 10 deletes/second using MultiDelete on 500 objects.

So I decided to run MultiDelete requests in parallel, but each time I issue more than one request at the same time, it returns 500 (after some time; it is not a fast fail).

I don't have Ceph on my machine, so I can't test, but I guess someone here is capable of testing it.
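As context for the parallel approach being attempted, the client side can be sketched as follows: split the key list into MultiDelete batches (the S3 API caps one request at 1000 keys) and issue the batches concurrently. This is a minimal sketch, not code from this ticket; `delete_batch` is a hypothetical stand-in for a real S3 call such as boto3's `delete_objects`.

```python
# Sketch: chunk keys into MultiDelete batches and delete them in parallel.
# delete_batch() is a hypothetical placeholder for a real S3 client call.
from concurrent.futures import ThreadPoolExecutor

MAX_KEYS_PER_REQUEST = 1000  # S3 MultiDelete limit per request

def chunked(keys, size=MAX_KEYS_PER_REQUEST):
    """Yield successive batches of at most `size` keys."""
    for i in range(0, len(keys), size):
        yield keys[i:i + size]

def delete_batch(bucket, batch):
    # Placeholder: a real implementation would POST ?delete to `bucket`
    # with this batch of keys, e.g. via boto3's delete_objects().
    return len(batch)

def parallel_multidelete(bucket, keys, workers=4):
    """Delete `keys` from `bucket` using up to `workers` concurrent requests."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda b: delete_batch(bucket, b), list(chunked(keys)))
    return sum(results)

print(parallel_multidelete("foo", [f"obj-{i}" for i in range(2500)]))  # → 2500
```

Note this is exactly the pattern that triggered the 500s described above when the server side could not keep up, so `workers` would need to stay small in practice.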


#1 Updated by Sage Weil over 8 years ago

  • Project changed from Ceph to rgw
  • Priority changed from High to Urgent

#2 Updated by Yehuda Sadeh over 8 years ago

The 500 is a symptom of Apache timing out: the delete operation takes too long, so Apache times out.

#3 Updated by renato cron over 8 years ago

Hmm, good!

I'll report it to DreamCompute.

Some things to do:

1: Change the 500 to a 408 Request Timeout error.

2: Find a way to make deletes faster.

In my opinion, Ceph is designed to scale horizontally, so deletes should scale as well as writes.

I think I will need to distribute my system to use one bucket per hour, e.g. "foo:00", "foo:01", ..., "foo:23", so I can make parallel requests and maybe hit different servers.
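The per-hour bucket idea can be sketched as a simple mapping from an object's timestamp to its bucket shard. This is only an illustration of the scheme described above; the `hourly_bucket` helper name is an assumption, not anything from the ticket.

```python
# Sketch: derive the hourly bucket shard ("foo:00" .. "foo:23") from a
# timestamp, so requests for different hours target different buckets.
from datetime import datetime, timezone

def hourly_bucket(prefix, ts):
    """Map a timestamp to its hourly shard, e.g. 'foo:13'."""
    return f"{prefix}:{ts.hour:02d}"

ts = datetime(2015, 4, 1, 13, 30, tzinfo=timezone.utc)
print(hourly_bucket("foo", ts))  # → foo:13
```

Whether this actually spreads load depends on how RGW places buckets, which is exactly the question raised below.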

But to know that for sure, I need to know: how are buckets distributed in Ceph?

Thank you for your time.

#4 Updated by Sage Weil over 8 years ago

  • Tracker changed from Bug to Feature
  • Subject changed from Concurrent delete using MultiDelete s3 API causes 500 to rgw: make MultiDelete faster/parallelized
  • Source changed from other to Community (user)

#5 Updated by Casey Bodley over 4 years ago

  • Priority changed from Urgent to Normal
