Bug #57489

rgw: Sync doesn't start automatically if (disabled &) enabled using sync-policy

Added by Soumya Koduri over 1 year ago. Updated 8 months ago.

Status:
In Progress
Priority:
Normal
Assignee:
Soumya Koduri
Target version:
-
% Done:

0%

Source:
Tags:
multisite multisite-backlog
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

While using a bucket sync policy (https://docs.ceph.com/en/latest/radosgw/multisite-sync-policy/), if sync is (disabled and then) enabled for any bucket(s) with pre-existing objects, sync does not start automatically. This behaviour differs from using the "bucket sync enable/disable" commands.
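For comparison, the bucket-level toggle this report contrasts against looks as below (a minimal sketch; these radosgw-admin subcommands are documented, and buck2 is the bucket used in the example that follows):

/]# radosgw-admin bucket sync disable --bucket=buck2
/]# radosgw-admin bucket sync enable --bucket=buck2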

For example:

1) A bucket on the primary zone already contains objects, and sync is then enabled on that bucket as below.

2) Zonegroup level -> group, flow & pipe configured (Allowed)
Bucket level -> group & pipe configured (Enabled); a sketch of the commands follows below.
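A sketch of one way to reach this configuration, following the multisite sync policy docs linked above; the group/flow/pipe IDs are illustrative names, and only the zonegroup-level changes require a period commit:

/]# radosgw-admin sync group create --group-id=group1 --status=allowed
/]# radosgw-admin sync group flow create --group-id=group1 --flow-id=flow-mirror --flow-type=symmetrical --zones='*'
/]# radosgw-admin sync group pipe create --group-id=group1 --pipe-id=pipe1 --source-zones='*' --source-bucket='*' --dest-zones='*' --dest-bucket='*'
/]# radosgw-admin period update --commit

Bucket-level group and pipe (flows can only be defined at the zonegroup level):

/]# radosgw-admin sync group create --bucket=buck2 --group-id=buck2-default --status=enabled
/]# radosgw-admin sync group pipe create --bucket=buck2 --group-id=buck2-default --pipe-id=pipe1 --source-zones='*' --dest-zones='*'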

3) "sync status" shows caught up on primary.

/]# radosgw-admin bucket sync status --bucket=buck2
realm 4ede2ea6-d85f-40e1-80e4-5c3d83d04470 (india)
zonegroup 30da6f7f-3b74-4b38-9abf-62cd7e4a776d (shared)
zone 91dc7508-828e-485e-9423-94f141658785 (primary)
bucket :buck2[91dc7508-828e-485e-9423-94f141658785.35217.1])
current time 2022-08-24T14:58:47Z

source zone f0b4f23f-016b-44d0-937c-6fc4a733a3d3 (secondary)
source bucket :buck2[91dc7508-828e-485e-9423-94f141658785.35217.1])
incremental sync on 11 shards
bucket is caught up with source

4) But the secondary shows the bucket behind on all shards:

/]# radosgw-admin bucket sync status --bucket buck2
realm 4ede2ea6-d85f-40e1-80e4-5c3d83d04470 (india)
zonegroup 30da6f7f-3b74-4b38-9abf-62cd7e4a776d (shared)
zone f0b4f23f-016b-44d0-937c-6fc4a733a3d3 (secondary)
bucket :buck2[91dc7508-828e-485e-9423-94f141658785.35217.1])
current time 2022-08-24T15:23:40Z

source zone 91dc7508-828e-485e-9423-94f141658785 (primary)
source bucket :buck2[91dc7508-828e-485e-9423-94f141658785.35217.1])
incremental sync on 11 shards
bucket is behind on 11 shards
behind shards: [0,1,2,3,4,5,6,7,8,9,10]

/]# radosgw-admin sync status
realm 4ede2ea6-d85f-40e1-80e4-5c3d83d04470 (india)
zonegroup 30da6f7f-3b74-4b38-9abf-62cd7e4a776d (shared)
zone f0b4f23f-016b-44d0-937c-6fc4a733a3d3 (secondary)
current time 2022-08-24T15:23:49Z
zonegroup features enabled:
disabled: resharding
metadata sync syncing
full sync: 0/64 shards
incremental sync: 64/64 shards
metadata is caught up with master
data sync source: 91dc7508-828e-485e-9423-94f141658785 (primary)
syncing
full sync: 0/128 shards
incremental sync: 128/128 shards
data is behind on 11 shards
behind shards: [114,115,116,117,118,119,120,121,122,123,124]

5) After a new object is written to this bucket, the entire bucket is synced to the secondary (a sketch of such a trigger write follows the status output below):

/]# radosgw-admin bucket sync status --bucket buck2
realm 4ede2ea6-d85f-40e1-80e4-5c3d83d04470 (india)
zonegroup 30da6f7f-3b74-4b38-9abf-62cd7e4a776d (shared)
zone f0b4f23f-016b-44d0-937c-6fc4a733a3d3 (secondary)
bucket :buck2[91dc7508-828e-485e-9423-94f141658785.35217.1])
current time 2022-08-24T15:50:09Z

source zone 91dc7508-828e-485e-9423-94f141658785 (primary)
source bucket :buck2[91dc7508-828e-485e-9423-94f141658785.35217.1])
incremental sync on 11 shards
bucket is caught up with source
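
For reference, the trigger write in step 5 can be any ordinary object upload to the bucket on the primary zone; a minimal sketch using the AWS CLI, where the endpoint URL is a hypothetical placeholder for the primary zone's RGW endpoint:

/]# aws --endpoint-url http://primary-rgw.example.com:8000 s3 cp ./newobj.txt s3://buck2/newobj.txt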

History

#1 Updated by Soumya Koduri over 1 year ago

A few notes from Casey:

  • But in the case of "sync policy":
    - changes to the sync policy don't actually trigger sync
    - and if we stop writing bilogs for a bucket because its sync policy is 'disabled', enabling it again later won't do a full sync to catch those unlogged changes (see the diagnostic sketch below)
    - these are design-level consistency issues that we'll need to work on eventually
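
One way to observe the unlogged-changes problem Casey describes (a diagnostic sketch, assuming objects are written to buck2 while its bucket policy is 'disabled'): list the bucket index log before and after such a write; if no new entries appear, incremental sync has nothing to replay once the policy is set back to 'enabled'.

/]# radosgw-admin bilog list --bucket=buck2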

#2 Updated by Radoslaw Zarzynski over 1 year ago

  • Project changed from RADOS to rgw

#3 Updated by Soumya Koduri 10 months ago

  • Status changed from New to In Progress
  • Assignee set to Soumya Koduri

#4 Updated by Shilpa MJ 8 months ago

  • Tags set to multisite multisite-backlog
