Bug #22648
openrgw: secondary site's lc configuration erased by multisite sync
Description
Steps to reproduce:
1. Set up multisite.
2. Create a bucket named bkt in the secondary site.
3. Configure lifecycle (lc) for bkt in the secondary site.
4. Enable versioning for bkt in the secondary site.
After step 4, the lifecycle configuration set in step 3 is erased.
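For reference, the lifecycle configuration set in step 3 is an S3 XML document stored in the bucket's metadata, and it is this document that the multisite sync overwrites. A minimal example (the rule ID and expiration days here are hypothetical, not taken from the report):

```xml
<LifecycleConfiguration>
  <Rule>
    <!-- hypothetical rule ID -->
    <ID>expire-old-objects</ID>
    <!-- empty prefix: rule applies to the whole bucket -->
    <Filter>
      <Prefix></Prefix>
    </Filter>
    <Status>Enabled</Status>
    <Expiration>
      <!-- hypothetical value -->
      <Days>30</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
```

A PUT of this document to /bkt?lifecycle is what step 3 performs under the hood.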
Updated by fang yuxiang over 6 years ago
Updated by Matt Benjamin over 6 years ago
- Status changed from New to Fix Under Review
@Casey Bodley or @Orit Wasserman, could you look at this one?
Matt
Updated by Casey Bodley over 5 years ago
Updated by Gaudenz Steinlin almost 4 years ago
We have the same issue in a multi-zonegroup configuration:
Ceph version: 13.2.8
Steps to reproduce:
1. Create a bucket in the master zone of a slave zonegroup.
2. Immediately afterwards, set a lifecycle configuration on the bucket.
3. The configuration is visible in a GET request on /?lifecycle.
4. The configuration vanishes after about 1 minute.
The log suggests that the configuration is overwritten by the metadata sync from the master zonegroup that happens after the bucket is created, but not immediately.
I would really appreciate it if someone could look into this, because this behaviour is really weird and hard to explain to users. It is also a problem for users who create buckets and set lifecycle policies with automation tools, as they have to introduce an arbitrary delay between these two steps.
From a cursory look at the two PRs referenced above it looks like they could fix the issue.
Updated by Gaudenz Steinlin over 3 years ago
This issue is "fixed" in Octopus by commit "d3fb699d6da8479a5e88207a9ae28a44122203b6", but forwarding to the metadata master does not really work: the master refuses the forwarded request because "forward_request_to_master" does not forward the "Content-MD5" header, which RGW requires on a PUT request that sets a lifecycle configuration.
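For context on why the master rejects the request: the S3 API requires a Content-MD5 header on a lifecycle PUT, and its value is the base64 encoding of the raw (binary) MD5 digest of the request body, not the hex digest. A minimal sketch of computing it (the helper name is mine, not from RGW):

```python
import base64
import hashlib


def content_md5(body: bytes) -> str:
    """Return the S3 Content-MD5 header value for a request body:
    base64 of the raw 16-byte MD5 digest (not the hex digest)."""
    return base64.b64encode(hashlib.md5(body).digest()).decode("ascii")


# Example: header value for a 5-byte body
print(content_md5(b"hello"))  # XUFAKrxLKna5cZ2REBfFkg==
```

If the header is stripped when the request is forwarded, the receiving RGW cannot validate the body and rejects the PUT.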
Updated by Gaudenz Steinlin over 3 years ago
I created https://tracker.ceph.com/issues/47869 for the issue described above.