Bug #51487 (closed)

Sync stopped from primary to secondary post reshard

Added by Tejas C almost 3 years ago. Updated over 2 years ago.

Status: Resolved
Priority: Normal
Assignee: -
Target version:
% Done: 0%
Source: Q/A
Tags: multisite-reshard
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

ceph version 17.0.0-5278-g79eb0c85 (79eb0c853ca1ee491410e0c6c6796675a7449ef9) quincy (dev)

Steps (a repro sketch follows the list):
- Create a new bucket and reshard the empty bucket on the secondary to 23 shards.
- Write 2k objects from the primary; sync is successful.
- Reshard again on the secondary, to 47 shards.
- Write 2k objects from the primary again; the objects are not synced.
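
A minimal repro sketch of the steps above, assuming the two-zone setup shown in the output below (blr as primary, pune as secondary), bucket buck2 owned by user test1, and s3cmd pointed at the primary endpoint for the object writes (the client tooling and object names are assumptions, not taken from this report):

# On the secondary (pune): reshard the empty bucket to 23 shards.
radosgw-admin bucket reshard --bucket=buck2 --num-shards=23

# On the primary (blr): write 2k small objects with an S3 client (s3cmd assumed).
for i in $(seq 1 2000); do s3cmd put /tmp/obj s3://buck2/batch1-$i; done

# On the secondary: wait until sync reports caught up.
radosgw-admin bucket sync checkpoint --bucket buck2

# On the secondary: reshard again, to 47 shards.
radosgw-admin bucket reshard --bucket=buck2 --num-shards=47

# On the primary: write another 2k objects; these are the ones that never sync.
for i in $(seq 1 2000); do s3cmd put /tmp/obj s3://buck2/batch2-$i; done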

Primary:
/]# radosgw-admin bucket stats --bucket buck2
{
    "bucket": "buck2",
    "num_shards": 11,
    "tenant": "",
    "zonegroup": "1785a4fa-f7d6-4081-8c72-74f9cc441d3a",
    "placement_rule": "default-placement",
    "explicit_placement": {
        "data_pool": "",
        "data_extra_pool": "",
        "index_pool": ""
    },
    "id": "b76458b4-55bc-41b4-8610-b4aa5df49661.36014.1",
    "marker": "b76458b4-55bc-41b4-8610-b4aa5df49661.36014.1",
    "index_type": "Normal",
    "owner": "test1",
    "ver": "0#362,1#374,2#371,3#364,4#376,5#370,6#354,7#355,8#372,9#356,10#357",
    "master_ver": "0#0,1#0,2#0,3#0,4#0,5#0,6#0,7#0,8#0,9#0,10#0",
    "mtime": "0.000000",
    "creation_time": "2021-07-02T11:21:05.001106Z",
    "max_marker": "0#00000000361.370523.5,1#00000000373.988.5,2#00000000370.751.5,3#00000000363.6267.5,4#00000000375.19183.5,5#00000000369.749.5,6#00000000353.938.5,7#00000000354.719.5,8#00000000371.753.5,9#00000000355.1205.5,10#00000000356.723.5",
    "usage": {
        "rgw.main": {
            "size": 46256000,
            "size_actual": 49152000,
            "size_utilized": 46256000,
            "size_kb": 45172,
            "size_kb_actual": 48000,
            "size_kb_utilized": 45172,
            "num_objects": 4000
        }
    },

Secondary:

/]# radosgw-admin bucket stats --bucket buck2
{
    "bucket": "buck2",
    "num_shards": 47,
    "tenant": "",
    "zonegroup": "1785a4fa-f7d6-4081-8c72-74f9cc441d3a",
    "placement_rule": "default-placement",
    "explicit_placement": {
        "data_pool": "",
        "data_extra_pool": "",
        "index_pool": ""
    },
    "id": "b76458b4-55bc-41b4-8610-b4aa5df49661.36014.1",
    "marker": "b76458b4-55bc-41b4-8610-b4aa5df49661.36014.1",
    "index_type": "Normal",
    "owner": "test1",
    "ver": "0#92,1#79,2#84,3#94,4#76,5#94,6#94,7#81,8#97,9#87,10#79,11#101,12#88,13#88,14#97,15#81,16#91,17#95,18#87,19#88,20#99,21#87,22#87",
    "master_ver": "0#0,1#0,2#0,3#0,4#0,5#0,6#0,7#0,8#0,9#0,10#0,11#0,12#0,13#0,14#0,15#0,16#0,17#0,18#0,19#0,20#0,21#0,22#0",
    "mtime": "0.000000",
    "creation_time": "2021-07-02T11:21:05.001106Z",
    "max_marker": "0#00000000090.22952.5,1#00000000077.31937.5,2#00000000082.175.5,3#00000000092.261.5,4#00000000074.412.5,5#00000000092.195.5,6#00000000092.195.5,7#00000000079.237.5,8#00000000095.201.5,9#00000000085.181.5,10#00000000077.165.5,11#00000000099.209.5,12#00000000086.2998.5,13#00000000086.257.5,14#00000000095.267.5,15#00000000079.2989.5,16#00000000089.189.5,17#00000000093.410.5,18#00000000085.181.5,19#00000000086.183.5,20#00000000097.205.5,21#00000000085.181.5,22#00000000085.192.5",
    "usage": {
        "rgw.main": {
            "size": 23128000,
            "size_actual": 24576000,
            "size_utilized": 23128000,
            "size_kb": 22586,
            "size_kb_actual": 24000,
            "size_kb_utilized": 22586,
            "num_objects": 2000
        }
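
The visible symptom is the num_objects mismatch above: 4000 objects on the primary but only 2000 on the secondary after the second batch of writes. A quick way to compare the two counts, assuming jq is available and the command is run on an admin node of each site (the jq one-liner is illustrative, not from the original report):

/]# radosgw-admin bucket stats --bucket buck2 | jq '.usage["rgw.main"].num_objects'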

/]# radosgw-admin bucket sync checkpoint --bucket buck2
2021-07-02T12:46:44.448+0000 7f561e900340 1 bucket sync caught up with source:
local status: [00000000361.370523.5, 00000000373.988.5, 00000000370.751.5, 00000000363.6267.5, 00000000375.19183.5, 00000000369.749.5, 00000000353.938.5, 00000000354.719.5, 00000000371.753.5, 00000000355.1205.5, 00000000356.723.5, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ]
remote markers: [00000000361.370523.5, 00000000373.988.5, 00000000370.751.5, 00000000363.6267.5, 00000000375.19183.5, 00000000369.749.5, 00000000353.938.5, 00000000354.719.5, 00000000371.753.5, 00000000355.1205.5, 00000000356.723.5]
2021-07-02T12:46:44.448+0000 7f561e900340 0 bucket checkpoint complete
/]# radosgw-admin bucket sync status --bucket buck2
realm 6d34d4f8-671c-4cf0-a6b7-48e1fa21fde7 (india)
zonegroup 1785a4fa-f7d6-4081-8c72-74f9cc441d3a (south)
zone b6caae11-7796-4265-8133-374bf1992f5a (pune)
bucket :buck2[b76458b4-55bc-41b4-8610-b4aa5df49661.36014.1])
source zone b76458b4-55bc-41b4-8610-b4aa5df49661 (blr)
source bucket :buck2[b76458b4-55bc-41b4-8610-b4aa5df49661.36014.1])
incremental sync on 47 shards
bucket is caught up with source
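
Even though the checkpoint and sync status above report the bucket as caught up, 2000 objects are missing on the secondary. A few diagnostics that could be gathered in this state (a hedged sketch; none of this output was captured in the report):

# Zone-wide and per-bucket sync state, run on the secondary (pune).
radosgw-admin sync status
radosgw-admin bucket sync status --bucket buck2

# Any recorded sync errors.
radosgw-admin sync error list

# Per-source data sync progress (blr is the source zone here).
radosgw-admin data sync status --source-zone blr

# Bucket index log entries on the primary, to confirm the second batch of writes was logged.
radosgw-admin bilog list --bucket buck2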
Actions #1

Updated by Casey Bodley almost 3 years ago

  • Tags set to multisite-reshard
Actions #2

Updated by Vidushi Mishra over 2 years ago

1. We tested the above scenario by resharding the bucket from the primary and writing objects from the secondary (a short sketch follows below). We do not see the issue on ceph version 17.0.0-8051-g15b54dc9 (15b54dc9eaa835e95809e32e8ddf109d416320c9) quincy (dev).

2. We will next test the scenario of resharding the bucket from the secondary and writing from the primary, and verify.
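
A short sketch of scenario 1 above (reshard on the primary, write from the secondary), under the same assumptions as the repro sketch in the description:

# On the primary (blr): reshard the bucket.
radosgw-admin bucket reshard --bucket=buck2 --num-shards=23

# On the secondary (pune): write objects with an S3 client (s3cmd assumed).
for i in $(seq 1 2000); do s3cmd put /tmp/obj s3://buck2/rev-$i; done

# On the primary: confirm it catches up with the writes from the secondary.
radosgw-admin bucket sync checkpoint --bucket buck2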

Actions #3

Updated by Vidushi Mishra over 2 years ago

We do not see the issue when resharding the bucket twice from the secondary and writing from the primary.
ceph version 17.0.0-8051-g15b54dc9 (15b54dc9eaa835e95809e32e8ddf109d416320c9) quincy (dev)

Actions #4

Updated by Casey Bodley over 2 years ago

  • Status changed from New to Resolved

thanks for verifying!
