Bug #15542: rgw multisite: delete object issues
Status: Closed

Description
Doing a multipart upload and then a delete from a secondary zone often causes weirdness if the object does not get deleted fast enough in the primary zone:
(openstack)[abhishekl@pxe-sup3:/ssd/builds/cpp/ceph/src](⎇ rgw/multisite-delete)$ s3 -us put test-mp/test.img filename=test.img
# this is a file of ~40 MB, so libs3 will do a multipart upload

(openstack)[abhishekl@pxe-sup3:/ssd/builds/cpp/ceph/src](⎇ rgw/multisite-delete)$ s3 -us list test-mp
Content-Type: application/xml
Request-Id: tx0000000000000000021d6-0057164810-3735-us-west
Content-Length: 499
Key                                                  Last Modified         Size
--------------------------------------------------   --------------------  -----
test.img                                             2016-04-19T13:00:17Z  40M

(openstack)[abhishekl@pxe-sup3:/ssd/builds/cpp/ceph/src](⎇ rgw/multisite-delete)$ s3 -us delete test-mp/test.img

(openstack)[abhishekl@pxe-sup3:/ssd/builds/cpp/ceph/src](⎇ rgw/multisite-delete)$ s3 -us list test-mp
Content-Type: application/xml
Request-Id: tx000000000000000002283-005716481e-3735-us-west
Content-Length: 499
Key                                                  Last Modified         Size
--------------------------------------------------   --------------------  -----
test.img                                             2016-04-19T13:00:45Z  40M

(openstack)[abhishekl@pxe-sup3:/ssd/builds/cpp/ceph/src](⎇ rgw/multisite-delete)$ s3 -us delete test-mp/test.img

(openstack)[abhishekl@pxe-sup3:/ssd/builds/cpp/ceph/src](⎇ rgw/multisite-delete)$ s3 -us list test-mp
Content-Type: application/xml
Request-Id: tx0000000000000000022b1-0057164826-3735-us-west
Content-Length: 232
Key                                                  Last Modified         Size
--------------------------------------------------   --------------------  -----

(openstack)[abhishekl@pxe-sup3:/ssd/builds/cpp/ceph/src](⎇ rgw/multisite-delete)$ s3 -us list test-mp
Content-Type: application/xml
Request-Id: tx0000000000000000022f2-005716482c-3735-us-west
Content-Length: 232
Key                                                  Last Modified         Size
--------------------------------------------------   --------------------  -----

(openstack)[abhishekl@pxe-sup3:/ssd/builds/cpp/ceph/src](⎇ rgw/multisite-delete)$ s3 -us list test-mp
Content-Type: application/xml
Request-Id: tx0000000000000000022f7-005716482d-3735-us-west
Content-Length: 232
Key                                                  Last Modified         Size
--------------------------------------------------   --------------------  -----

(openstack)[abhishekl@pxe-sup3:/ssd/builds/cpp/ceph/src](⎇ rgw/multisite-delete)$ s3 -us list test-mp
Content-Type: application/xml
Request-Id: tx00000000000000000256d-005716486c-3735-us-west
Content-Length: 499
Key                                                  Last Modified         Size
--------------------------------------------------   --------------------  -----
test.img                                             2016-04-19T13:01:52Z  40M
Files
Updated by Yehuda Sadeh about 8 years ago
- Priority changed from Normal to Urgent
- Backport set to jewel
Updated by Yehuda Sadeh almost 8 years ago
Updated by Ken Dreyer almost 8 years ago
Yehuda, I think this should be "Pending Backport" now?
Updated by Nathan Cutler almost 8 years ago
- Backport deleted (jewel)
https://github.com/ceph/ceph/pull/8772 is already slated for backport in #15565
Updated by Abhishek Lekshmanan almost 8 years ago
Still seeing some issues on master from about a week ago (a41860a501a245b57510543946cb1d2a24b61342); uploaded a portion of the logs. For example, the object in question here is delete-me/del19, which gets synced back (nearly all objects got synced back after deletion in this case too).
Updated by Abhishek Lekshmanan almost 8 years ago
- File rgw-del19-logs.tar.gz rgw-del19-logs.tar.gz added
Abhishek Lekshmanan wrote:
Still seeing some issues on master from about a week ago (a41860a501a245b57510543946cb1d2a24b61342); uploaded a portion of the logs. For example, the object in question here is delete-me/del19, which gets synced back (nearly all objects got synced back after deletion in this case too).
Adding the logs again
Updated by Yehuda Sadeh almost 8 years ago
I think there's no way around adding tombstones (which can be removed later).
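To illustrate the tombstone idea, here is a minimal, self-contained sketch (not actual RGW code; all names are illustrative): when a zone deletes an object it records a tombstone with the deletion time, so a stale copy replayed later by cross-zone sync cannot resurrect the object, and tombstones are trimmed once peers have caught up.

```python
import time


class ZoneStore:
    """Toy per-zone object store with tombstones for deleted keys.

    Illustrative only: sketches why a deletion marker is needed so
    that a stale replicated copy cannot resurrect a deleted object.
    """

    def __init__(self):
        self.objects = {}     # key -> (mtime, data)
        self.tombstones = {}  # key -> deletion time

    def put(self, key, data, mtime=None):
        """Apply a write (local or synced). Returns False if a newer
        tombstone exists, i.e. the write is a stale sync replay."""
        mtime = time.time() if mtime is None else mtime
        dead_at = self.tombstones.get(key)
        if dead_at is not None and mtime <= dead_at:
            return False  # object was deleted after this version: drop it
        self.tombstones.pop(key, None)
        self.objects[key] = (mtime, data)
        return True

    def delete(self, key, dtime=None):
        """Remove the object but keep a tombstone recording when."""
        dtime = time.time() if dtime is None else dtime
        self.objects.pop(key, None)
        self.tombstones[key] = dtime

    def trim_tombstones(self, older_than):
        """Garbage-collect tombstones old enough that every peer zone
        must have observed the deletion (approximated here by age)."""
        now = time.time()
        self.tombstones = {k: t for k, t in self.tombstones.items()
                           if now - t < older_than}


zone = ZoneStore()
zone.put("test.img", b"v1", mtime=100.0)
zone.delete("test.img", dtime=200.0)
# A sync replay of the pre-delete copy arrives late:
came_back = zone.put("test.img", b"v1", mtime=150.0)
print(came_back)  # False: the tombstone wins, no resurrection
```

Without the tombstone, the late replay in the last step would silently bring the object back, which is exactly the behavior seen in the transcript above.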
Updated by Abhishek Lekshmanan almost 8 years ago
- File final-obj.log final-obj.log added
Logs, from the primary zone, of an object which gets resynced.
Updated by Yehuda Sadeh almost 8 years ago
- Status changed from Resolved to In Progress
Updated by Yehuda Sadeh almost 8 years ago
- Status changed from In Progress to Resolved
Updated by Yehuda Sadeh almost 8 years ago
- Related to Bug #16143: rgw: concurrent sync on the same object added
Updated by Yehuda Sadeh almost 8 years ago
Created issue #16143 to track the issue reported by Abhishek.
Updated by Ken Dreyer almost 8 years ago
- Status changed from Resolved to Pending Backport
- Backport set to jewel
This affects jewel and should be backported there.
Updated by Nathan Cutler almost 8 years ago
Ken Dreyer wrote:
This affects jewel and should be backported there.
The only fix I could find for this issue is https://github.com/ceph/ceph/pull/8772, but that one is already slated for backport in #15565 (as noted in comment #4); that's why I originally removed the jewel backport from this issue. If you agree, I'll change the status of this one to "Duplicate".
Updated by Abhishek Lekshmanan almost 8 years ago
Ken Dreyer wrote:
This affects jewel and should be backported there.
As Nathan said, the original PR (#8772) was backported to jewel as https://github.com/ceph/ceph/pull/9017, which was tracked at http://tracker.ceph.com/issues/15802
Updated by Nathan Cutler almost 8 years ago
- Status changed from Pending Backport to Duplicate
- Backport deleted (jewel)
Updated by Nathan Cutler almost 8 years ago
- Is duplicate of Bug #15625: rgw: segfault at RGWAsyncGetSystemObj added
Updated by Nathan Cutler almost 8 years ago
It appears the master PR in this case - https://github.com/ceph/ceph/pull/8772 - fixes a number of issues. I arbitrarily picked #15625 to be the "one issue to bind them all".