Bug #51317 (closed): Objects not synced if reshard is done while sync is happening in Multisite

Added by Tejas C almost 3 years ago. Updated over 2 years ago.

Status: Closed
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source: Q/A
Tags: multisite-reshard
Backport: -
Regression: No
Severity: 2 - major
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

Steps:
- Start an upload of 100 objects on the primary.
- While sync is in progress, start a bucket reshard on the secondary.
- Result: only 75 objects are visible on the secondary, yet sync status reports the bucket as caught up and no sync errors are listed either. A reproduction sketch follows.
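
A minimal workload sketch, for anyone trying to reproduce this. The object size, object names, and the s3cmd client configuration are assumptions for illustration; the ticket only specifies 100 objects uploaded to the primary and a concurrent reshard on the secondary.

# Hypothetical reproduction sketch: assumes s3cmd is configured against the
# primary zone's RGW endpoint and an ~11 MB test file exists locally
# (the per-object size implied by the bucket stats below).
for i in $(seq 1 100); do
    s3cmd put ./testobj.bin s3://buck1/obj-$i &    # parallel uploads on the primary
done

# While those objects are still syncing to the secondary, reshard there:
radosgw-admin bucket reshard --bucket buck1 --num-shards=23 \
    --cluster c2 --yes-i-really-mean-it
wait    # let the uploads finish before comparing stats on both zones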

]# ceph -v
ceph version 17.0.0-5278-g79eb0c85 (79eb0c853ca1ee491410e0c6c6796675a7449ef9) quincy (dev)

primary:
]# radosgw-admin bucket stats --bucket buck1 --cluster c1 --debug_rgw=0
2021-06-22T11:12:51.174+0000 7f1e7ca85340 0 set uid:gid to 167:167 (ceph:ceph)
{
    "bucket": "buck1",
    "num_shards": 23,
    "tenant": "",
    "zonegroup": "a",
    "placement_rule": "default-placement",
    "explicit_placement": {
        "data_pool": "",
        "data_extra_pool": "",
        "index_pool": ""
    },
    "id": "36d845ef-04c5-4368-9141-a2c0a1a87221.4280.1",
    "marker": "36d845ef-04c5-4368-9141-a2c0a1a87221.4280.1",
    "index_type": "Normal",
    "owner": "test1",
    "ver": "0#2,1#5,2#5,3#7,4#4,5#2,6#5,7#2,8#4,9#4,10#6,11#4,12#3,13#4,14#2,15#2,16#5,17#6,18#6,19#4,20#3,21#4,22#3",
    "master_ver": "0#0,1#0,2#0,3#0,4#0,5#0,6#0,7#0,8#0,9#0,10#0,11#0,12#0,13#0,14#0,15#0,16#0,17#0,18#0,19#0,20#0,21#0,22#0",
    "mtime": "0.000000",
    "creation_time": "2021-06-22T03:04:11.977789Z",
    "max_marker": "0#,1#00000000004.137.5,2#00000000004.9.5,3#00000000006.11.5,4#00000000003.6.5,5#,6#00000000004.141.5,7#,8#00000000003.8.5,9#00000000003.7.5,10#00000000005.219.5,11#00000000003.6.5,12#00000000002.3.5,13#00000000003.85.5,14#,15#,16#00000000004.15.5,17#00000000005.12.5,18#00000000005.15.5,19#00000000003.12.5,20#00000000002.3.5,21#00000000003.5.5,22#00000000002.125.5",
    "usage": {
        "rgw.main": {
            "size": 1153433600,
            "size_actual": 1153433600,
            "size_utilized": 1154416617,
            "size_kb": 1126400,
            "size_kb_actual": 1126400,
            "size_kb_utilized": 1127360,
            "num_objects": 100

secondary:
]# radosgw-admin bucket reshard --bucket buck1 --num-shards=23 --cluster c2 --debug_rgw=0 --yes-i-really-mean-it
2021-06-22T05:47:02.637+0000 7fc4ba4d3340 0 set uid:gid to 167:167 (ceph:ceph)
tenant:
bucket name: buck1
total entries: 50

# wait for 2 hours
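
Not captured in the original transcript: before comparing stats, the reshard itself can be confirmed as finished via the reshard status subcommand, which rules out an in-flight reshard as the reason for the stale counts.

]# radosgw-admin reshard status --bucket buck1 --cluster c2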

]# radosgw-admin bucket stats --bucket buck1 --cluster c2 --debug_rgw=0
2021-06-22T11:12:43.381+0000 7f26577fc340 0 set uid:gid to 167:167 (ceph:ceph)
{
    "bucket": "buck1",
    "num_shards": 23,
    "tenant": "",
    "zonegroup": "a",
    "placement_rule": "default-placement",
    "explicit_placement": {
        "data_pool": "",
        "data_extra_pool": "",
        "index_pool": ""
    },
    "id": "36d845ef-04c5-4368-9141-a2c0a1a87221.4280.1",
    "marker": "36d845ef-04c5-4368-9141-a2c0a1a87221.4280.1",
    "index_type": "Normal",
    "owner": "test1",
    "ver": "0#2,1#2,2#2,3#2,4#1,5#2,6#2,7#2,8#2,9#2,10#2,11#4,12#3,13#4,14#2,15#2,16#5,17#6,18#6,19#4,20#3,21#4,22#3",
    "master_ver": "0#0,1#0,2#0,3#0,4#0,5#0,6#0,7#0,8#0,9#0,10#0,11#0,12#0,13#0,14#0,15#0,16#0,17#0,18#0,19#0,20#0,21#0,22#0",
    "mtime": "0.000000",
    "creation_time": "2021-06-22T03:04:11.977789Z",
    "max_marker": "0#,1#,2#,3#,4#,5#,6#,7#,8#,9#,10#,11#00000000003.6.5,12#00000000002.3.5,13#00000000003.31.5,14#,15#,16#00000000004.9.5,17#00000000005.12.5,18#00000000005.11.5,19#00000000003.8.5,20#00000000002.3.5,21#00000000003.5.5,22#00000000002.39.5",
    "usage": {
        "rgw.main": {
            "size": 1101004800,
            "size_actual": 1101004800,
            "size_utilized": 1101883684,
            "size_kb": 1075200,
            "size_kb_actual": 1075200,
            "size_kb_utilized": 1076059,
            "num_objects": 75

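A quick side-by-side of the diverging object counts (assuming jq is installed; the stderr redirect drops the set uid:gid log line):

]# radosgw-admin bucket stats --bucket buck1 --cluster c1 --debug_rgw=0 2>/dev/null \
       | jq '.usage["rgw.main"].num_objects'    # 100, per the primary stats above
]# radosgw-admin bucket stats --bucket buck1 --cluster c2 --debug_rgw=0 2>/dev/null \
       | jq '.usage["rgw.main"].num_objects'    # 75, per the secondary stats above
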
]# radosgw-admin sync status --cluster c2 --debug_rgw=0
2021-06-22T11:35:40.102+0000 7f7a53928340 0 set uid:gid to 167:167 (ceph:ceph)
          realm f3e66994-fb1e-420a-942b-c5e9560ae1aa (test-realm)
      zonegroup a (a)
           zone e0b2884d-4b43-4ef7-9ef9-092b46810955 (a2)
  metadata sync syncing
                full sync: 0/4 shards
                incremental sync: 4/4 shards
                metadata is caught up with master
      data sync source: 36d845ef-04c5-4368-9141-a2c0a1a87221 (a1)
                        syncing
                        full sync: 0/4 shards
                        incremental sync: 4/4 shards
                        data is caught up with source
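
The description's claim that no sync errors are listed can be checked explicitly; presumably something like the following was run, though the exact command is not in the transcript:

]# radosgw-admin sync error list --cluster c2 --debug_rgw=0
# empty here, despite the 25 objects missing on the secondary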

]# radosgw-admin bucket sync status --cluster c2 --debug_rgw=0 --bucket buck1
2021-06-22T11:35:49.805+0000 7f8480359340 0 set uid:gid to 167:167 (ceph:ceph)
          realm f3e66994-fb1e-420a-942b-c5e9560ae1aa (test-realm)
      zonegroup a (a)
           zone e0b2884d-4b43-4ef7-9ef9-092b46810955 (a2)
         bucket :buck1[36d845ef-04c5-4368-9141-a2c0a1a87221.4280.1])

    source zone 36d845ef-04c5-4368-9141-a2c0a1a87221 (a1)
  source bucket :buck1[36d845ef-04c5-4368-9141-a2c0a1a87221.4280.1])
                incremental sync on 23 shards
                bucket is caught up with source
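
A possible recovery sketch, not attempted in this ticket: force a fresh full sync of the bucket on the secondary so it rescans the source instead of trusting the (incorrect) incremental markers. Whether this actually recovers the 25 missing objects here is an assumption.

]# radosgw-admin bucket sync init --bucket buck1 --source-zone a1 --cluster c2
]# radosgw-admin bucket sync run --bucket buck1 --source-zone a1 --cluster c2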

#1 - Updated by Casey Bodley almost 3 years ago

  • Tags set to multisite-reshard

#2 - Updated by Casey Bodley over 2 years ago

  • Status changed from New to Closed