Bug #15542
rgw multisite: delete object issues
Description
Doing a multipart upload followed by a delete from a secondary zone often causes weirdness if the object is not deleted fast enough in the primary zone: the deleted object later reappears in the bucket listing, as shown below.
(openstack)[abhishekl@pxe-sup3:/ssd/builds/cpp/ceph/src](⎇ rgw/multisite-delete)$ s3 -us put test-mp/test.img filename=test.img   # this is a file of ~40 MB.. so libs3 will do a mp-upload
(openstack)[abhishekl@pxe-sup3:/ssd/builds/cpp/ceph/src](⎇ rgw/multisite-delete)$ s3 -us list test-mp
Content-Type: application/xml
Request-Id: tx0000000000000000021d6-0057164810-3735-us-west
Content-Length: 499
                       Key                             Last Modified       Size
--------------------------------------------------  --------------------  -----
test.img                                             2016-04-19T13:00:17Z   40M
(openstack)[abhishekl@pxe-sup3:/ssd/builds/cpp/ceph/src](⎇ rgw/multisite-delete)$ s3 -us delete test-mp/test.img
(openstack)[abhishekl@pxe-sup3:/ssd/builds/cpp/ceph/src](⎇ rgw/multisite-delete)$ s3 -us list test-mp
Content-Type: application/xml
Request-Id: tx000000000000000002283-005716481e-3735-us-west
Content-Length: 499
                       Key                             Last Modified       Size
--------------------------------------------------  --------------------  -----
test.img                                             2016-04-19T13:00:45Z   40M
(openstack)[abhishekl@pxe-sup3:/ssd/builds/cpp/ceph/src](⎇ rgw/multisite-delete)$ s3 -us delete test-mp/test.img
(openstack)[abhishekl@pxe-sup3:/ssd/builds/cpp/ceph/src](⎇ rgw/multisite-delete)$ s3 -us list test-mp
Content-Type: application/xml
Request-Id: tx0000000000000000022b1-0057164826-3735-us-west
Content-Length: 232
                       Key                             Last Modified       Size
--------------------------------------------------  --------------------  -----
(openstack)[abhishekl@pxe-sup3:/ssd/builds/cpp/ceph/src](⎇ rgw/multisite-delete)$ s3 -us list test-mp
Content-Type: application/xml
Request-Id: tx0000000000000000022f2-005716482c-3735-us-west
Content-Length: 232
                       Key                             Last Modified       Size
--------------------------------------------------  --------------------  -----
(openstack)[abhishekl@pxe-sup3:/ssd/builds/cpp/ceph/src](⎇ rgw/multisite-delete)$ s3 -us list test-mp
Content-Type: application/xml
Request-Id: tx0000000000000000022f7-005716482d-3735-us-west
Content-Length: 232
                       Key                             Last Modified       Size
--------------------------------------------------  --------------------  -----
(openstack)[abhishekl@pxe-sup3:/ssd/builds/cpp/ceph/src](⎇ rgw/multisite-delete)$ s3 -us list test-mp
Content-Type: application/xml
Request-Id: tx00000000000000000256d-005716486c-3735-us-west
Content-Length: 499
                       Key                             Last Modified       Size
--------------------------------------------------  --------------------  -----
test.img                                             2016-04-19T13:01:52Z   40M
Related issues
History
#1 Updated by Yehuda Sadeh almost 8 years ago
- Priority changed from Normal to Urgent
- Backport set to jewel
#2 Updated by Yehuda Sadeh almost 8 years ago
#3 Updated by Ken Dreyer almost 8 years ago
Yehuda, I think this should be "pending backport" now?
#4 Updated by Nathan Cutler almost 8 years ago
- Backport deleted (jewel)
https://github.com/ceph/ceph/pull/8772 is already slated for backport in #15565
#5 Updated by Yehuda Sadeh almost 8 years ago
- Status changed from New to Resolved
#6 Updated by Abhishek Lekshmanan almost 8 years ago
Still seeing some issues on master from about a week ago (a41860a501a245b57510543946cb1d2a24b61342). Uploaded a portion of the logs; for example, the object in question is delete-me/del19, which gets synced back after deletion (nearly all objects were synced back after deletion in this case too).
#7 Updated by Abhishek Lekshmanan almost 8 years ago
- File rgw-del19-logs.tar.gz added
Abhishek Lekshmanan wrote:
Still seeing some issues on master from about a week ago (a41860a501a245b57510543946cb1d2a24b61342). Uploaded a portion of the logs; for example, the object in question is delete-me/del19, which gets synced back after deletion (nearly all objects were synced back after deletion in this case too).
Adding the logs again
#8 Updated by Yehuda Sadeh almost 8 years ago
I think there's no way around adding tombstones (that will be removed later).
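For illustration only (this is not RGW code): below is a minimal standalone C++ sketch of the tombstone idea, where a zone remembers recent deletes so that an older copy of the same object arriving via sync is dropped instead of being resurrected. All names here (Zone, delete_object, apply_sync) are hypothetical.

// tombstone_sketch.cpp -- hypothetical model, not radosgw internals
#include <chrono>
#include <iostream>
#include <map>
#include <string>

using Clock = std::chrono::system_clock;

struct Zone {
  std::map<std::string, Clock::time_point> objects;     // live objects keyed by name, value is mtime
  std::map<std::string, Clock::time_point> tombstones;  // recent deletes, to be pruned later

  void delete_object(const std::string& key) {
    objects.erase(key);
    tombstones[key] = Clock::now();  // remember when the delete happened
  }

  // Called when sync offers a copy of `key` with modification time `mtime`.
  // Returns true if the copy was applied, false if a newer tombstone suppressed it.
  bool apply_sync(const std::string& key, Clock::time_point mtime) {
    auto ts = tombstones.find(key);
    if (ts != tombstones.end() && mtime <= ts->second) {
      return false;  // the local delete is newer: drop the stale copy
    }
    objects[key] = mtime;
    return true;
  }
};

int main() {
  Zone secondary;
  Clock::time_point uploaded = Clock::now();
  secondary.apply_sync("test-mp/test.img", uploaded);  // object arrives via sync
  secondary.delete_object("test-mp/test.img");         // user deletes it on this zone
  // a stale copy of the same object arrives later from the peer zone:
  bool applied = secondary.apply_sync("test-mp/test.img", uploaded);
  std::cout << (applied ? "object resurrected (the bug)" : "stale copy dropped (tombstone wins)") << std::endl;
}

Without the tombstones map, the second apply_sync would silently re-create the object, which matches the behaviour in the transcript above; pruning tombstones after some retention window keeps the map bounded, which is the "removed later" part of the comment.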
#9 Updated by Abhishek Lekshmanan almost 8 years ago
- File final-obj.log View added
Logs from the primary zone of an object which gets resynced.
#10 Updated by Yehuda Sadeh almost 8 years ago
- Status changed from Resolved to In Progress
#11 Updated by Yehuda Sadeh almost 8 years ago
- Status changed from In Progress to Resolved
#12 Updated by Yehuda Sadeh almost 8 years ago
- Related to Bug #16143: rgw: concurrent sync on the same object added
#13 Updated by Yehuda Sadeh almost 8 years ago
Created issue #16143 to track the issue reported by Abhishek.
#14 Updated by Ken Dreyer almost 8 years ago
- Status changed from Resolved to Pending Backport
- Backport set to jewel
This affects jewel and should be backported there.
#16 Updated by Nathan Cutler almost 8 years ago
Ken Dreyer wrote:
This affects jewel and should be backported there.
The only fix I could find for this issue is https://github.com/ceph/ceph/pull/8772, but that one is already slated for backport in #15565 (as noted in comment #4) - that's why I originally removed the jewel backport from this issue. If you agree, I'll change the status on this to "Duplicate".
#17 Updated by Abhishek Lekshmanan almost 8 years ago
Ken Dreyer wrote:
This affects jewel and should be backported there.
As Nathan said, the original PR (#8772) was backported to jewel as https://github.com/ceph/ceph/pull/9017, which was tracked at http://tracker.ceph.com/issues/15802
#18 Updated by Nathan Cutler almost 8 years ago
- Status changed from Pending Backport to Duplicate
- Backport deleted (jewel)
#20 Updated by Nathan Cutler almost 8 years ago
- Duplicates Bug #15625: rgw: segfault at RGWAsyncGetSystemObj added
#21 Updated by Nathan Cutler almost 8 years ago
It appears the master PR in this case - https://github.com/ceph/ceph/pull/8772 - fixes a number of issues. I arbitrarily picked #15625 to be the "one issue to bind them all".