Bug #45201 (open)

multisite: buckets deleted on secondary remain on master

Added by Michael B about 4 years ago. Updated about 4 years ago.

Status: New
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source: -
Tags: multisite
Backport: -
Regression: No
Severity: 2 - major
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

Testing multisite with a single secondary zone, the following works as expected:

create bucket on primary; bucket appears on secondary
create bucket on secondary; bucket appears on primary
delete bucket on primary; bucket disappears on secondary

BUT

delete bucket on secondary; bucket remains on primary

It doesn't matter whether the bucket being deleted was originally
created on the primary or the secondary.
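
For reference, here is a minimal reproduction sketch using the AWS CLI.
The endpoint URLs (taken from the zone names store-mb1/store-mb2), the
port, and the bucket name are placeholders for my test setup; credentials
come from the default awscli profile.

# create on the primary; it appears on the secondary (expected)
# (allow a few seconds for sync between steps)
aws --endpoint-url http://store-mb1:8000 s3 mb s3://repro-bucket
aws --endpoint-url http://store-mb2:8000 s3 ls          # repro-bucket listed

# delete on the secondary; it remains on the primary (the bug)
aws --endpoint-url http://store-mb2:8000 s3 rb s3://repro-bucket
aws --endpoint-url http://store-mb1:8000 s3 ls          # repro-bucket still listed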

This is not what I expect with Active-Active replication.

This could result in substantial confusion or perhaps data loss.
(Q. What happens if a bucket with the same name as the deleted bucket
is created on the secondary? A. The creation is allowed.
Does this empty the contents of the bucket that remains on the primary?
I haven't checked.)
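
A sketch of how that could be checked, with the same placeholder
endpoints as above (I have not run this):

# put an object through the primary, then delete the bucket on the secondary
aws --endpoint-url http://store-mb1:8000 s3 cp ./obj.txt s3://repro-bucket/obj.txt
aws --endpoint-url http://store-mb2:8000 s3 rb s3://repro-bucket --force

# recreate the same bucket name on the secondary, then inspect the primary
aws --endpoint-url http://store-mb2:8000 s3 mb s3://repro-bucket
aws --endpoint-url http://store-mb1:8000 s3 ls s3://repro-bucket   # is obj.txt still there?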

Is this a known limitation of the implementation? If so, where is
it documented?

It should have nothing to do with the "sync policy" feature
as I haven't defined one:
radosgw-admin sync policy get
ERROR: failed to get policy: (22) Invalid argument

(I don't find radosgw-admin's error messages very helpful.)

The output of sync status on both zones indicates no problem:

radosgw-admin sync status
          realm 114f1a32-cfbb-4531-94fb-1e0e3106dc03 (_b076afbb-8824-470d-aaaf-2ec1cb3b3eab)
      zonegroup b076afbb-8824-470d-aaaf-2ec1cb3b3eab (_298a10d5-6785-4bab-ac68-c3d1f371a771)
           zone 298a10d5-6785-4bab-ac68-c3d1f371a771 (store-mb1)
  metadata sync no sync (zone is master)
      data sync source: af88fd26-bdb3-4efe-8338-e96a26377922 (store-mb2)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source

radosgw-admin sync status
          realm 114f1a32-cfbb-4531-94fb-1e0e3106dc03 (_b076afbb-8824-470d-aaaf-2ec1cb3b3eab)
      zonegroup b076afbb-8824-470d-aaaf-2ec1cb3b3eab (_298a10d5-6785-4bab-ac68-c3d1f371a771)
           zone af88fd26-bdb3-4efe-8338-e96a26377922 (store-mb2)
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: 298a10d5-6785-4bab-ac68-c3d1f371a771 (store-mb1)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source
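
The divergence should also be visible at the metadata level. A diagnostic
sketch: listing bucket metadata on a node in each zone should presumably
show the deleted bucket on the master only.

# on a node in zone store-mb1 (master): the deleted bucket is still listed
radosgw-admin metadata list bucket

# on a node in zone store-mb2 (secondary): the deleted bucket is gone
radosgw-admin metadata list bucket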
