Bug #57036
rgw/multisite: bucket data sync error
Status: Closed
Description
radosgw-admin sync error list
[
  {
    "shard_id": 0,
    "entries": [
      {
        "id": "1_1659509025.482819_820.1",
        "section": "data",
        "name": "registry:685bf2ab-8fc3-4d98-90fe-aa4851ff0aa7.30781.1:1071",
        "timestamp": "2022-08-03 06:43:45.482819Z",
        "info": {
          "source_zone": "685bf2ab-8fc3-4d98-90fe-aa4851ff0aa7",
          "error_code": 125,
          "message": "failed to sync bucket instance: (125) Operation canceled"
        }
      },
      {
        "id": "1_1659509025.550206_821.1",
        "section": "data",
        "name": "registry:685bf2ab-8fc3-4d98-90fe-aa4851ff0aa7.30781.1:558",
        "timestamp": "2022-08-03 06:43:45.550206Z",
        "info": {
          "source_zone": "685bf2ab-8fc3-4d98-90fe-aa4851ff0aa7",
          "error_code": 125,
          "message": "failed to sync bucket instance: (125) Operation canceled"
        }
      },
      {
        "id": "1_1659509025.575432_822.1",
        "section": "data",
        "name": "registry:685bf2ab-8fc3-4d98-90fe-aa4851ff0aa7.30781.1:1543",
        "timestamp": "2022-08-03 06:43:45.575432Z",
        "info": {
          "source_zone": "685bf2ab-8fc3-4d98-90fe-aa4851ff0aa7",
          "error_code": 125,
          "message": "failed to sync bucket instance: (125) Operation canceled"
        }
      },
      {
        "id": "1_1659509025.627023_823.1",
        "section": "data",
        "name": "registry:685bf2ab-8fc3-4d98-90fe-aa4851ff0aa7.30781.1:430",
        "timestamp": "2022-08-03 06:43:45.627023Z",
        "info": {
          "source_zone": "685bf2ab-8fc3-4d98-90fe-aa4851ff0aa7",
          "error_code": 125,
          "message": "failed to sync bucket instance: (125) Operation canceled"
        }
      },
      {
        "id": "1_1659509025.670478_825.1",
        "section": "data",
        "name": "registry:685bf2ab-8fc3-4d98-90fe-aa4851ff0aa7.30781.1:1699",
        "timestamp": "2022-08-03 06:43:45.670478Z",
        "info": {
          "source_zone": "685bf2ab-8fc3-4d98-90fe-aa4851ff0aa7",
          "error_code": 125,
          "message": "failed to sync bucket instance: (125) Operation canceled"
        }
      },
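For context, the repeated entries above can be tallied programmatically. A minimal sketch, assuming the JSON shape shown (trimmed to a single entry here; in practice you would load the full `radosgw-admin sync error list` output with `json.load()`):

```python
from collections import Counter

# Hedged sketch: tally sync error entries by error code.
# `shard` mirrors the JSON shape quoted above, trimmed to one entry.
shard = {
    "shard_id": 0,
    "entries": [
        {
            "id": "1_1659509025.482819_820.1",
            "section": "data",
            "name": "registry:685bf2ab-8fc3-4d98-90fe-aa4851ff0aa7.30781.1:1071",
            "timestamp": "2022-08-03 06:43:45.482819Z",
            "info": {
                "source_zone": "685bf2ab-8fc3-4d98-90fe-aa4851ff0aa7",
                "error_code": 125,
                "message": "failed to sync bucket instance: (125) Operation canceled",
            },
        },
    ],
}

counts = Counter(e["info"]["error_code"] for e in shard["entries"])
print(dict(counts))  # every entry in this report carries error_code 125
```

In the full report, every entry shares the same source zone and error code, so the failures are systematic rather than scattered across different error conditions.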
The object counts in the master and secondary buckets are inconsistent:
master bucket stats:
radosgw-admin bucket stats --bucket=registry
"usage": {
  "rgw.main": {
    "size": 4854404414619,
    "size_actual": 4855315521536,
    "size_utilized": 4854404414619,
    "size_kb": 4740629312,
    "size_kb_actual": 4741519064,
    "size_kb_utilized": 4740629312,
    "num_objects": 270667
  },
secondary bucket stats:
radosgw-admin bucket stats --bucket=registry
"usage": {
  "rgw.main": {
    "size": 4458779168333,
    "size_actual": 4459641950208,
    "size_utilized": 4458779168333,
    "size_kb": 4354276532,
    "size_kb_actual": 4355119092,
    "size_kb_utilized": 4354276532,
    "num_objects": 255109
  }
},
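The replication gap between the two zones can be quantified directly from the stats above. A minimal sketch using the `rgw.main` figures quoted in this report:

```python
# Hedged sketch: compute the replication gap from the two `bucket stats`
# outputs quoted above (values copied from this report).
master = {"num_objects": 270667, "size": 4854404414619}
secondary = {"num_objects": 255109, "size": 4458779168333}

missing_objects = master["num_objects"] - secondary["num_objects"]
missing_bytes = master["size"] - secondary["size"]
print(missing_objects)               # 15558 objects not yet on the secondary
print(round(missing_bytes / 2**30))  # about 368 GiB behind
```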
ceph version: 14.2.22
Updated by Casey Bodley over 1 year ago
- Status changed from New to Won't Fix - EOL
ceph nautilus has been end-of-life, and won't get any bug fixes. can you try reproducing on a supported ceph version?
Updated by yite gu over 1 year ago
Casey Bodley wrote:
ceph nautilus has been end-of-life, and won't get any bug fixes. can you try reproducing on a supported ceph version?
I have no plans to upgrade to a newer version right now. If you won't fix this, I hope you can tell me what causes this problem.
There is an error log in the client RGW log file:
2022-08-06 03:18:25.844 7fb20c671700 0 RGW-SYNC:data:sync:shard[20]:entry[registry:685bf2ab-8fc3-4d98-90fe-aa4851ff0aa7.30781.1:123]: ERROR: failed to log sync failure in error repo: retcode=0
Updated by yite gu over 1 year ago
yite gu wrote:
Casey Bodley wrote:
ceph nautilus has been end-of-life, and won't get any bug fixes. can you try reproducing on a supported ceph version?
I have no plans to upgrade to a newer version right now. If you won't fix this, I hope you can tell me what causes this problem.
There is an error log in the client RGW log file:
[...]
2022-08-06 15:09:57.733 7fb20c671700 0 meta sync: ERROR: RGWBackoffControlCR called coroutine returned -125
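On the error code itself: 125 is `ECANCELED` ("Operation canceled") on Linux, which RGW coroutines propagate negated, hence the `-125` in the log line above. A minimal sketch, assuming a Linux/glibc host:

```python
import errno
import os

# Hedged sketch (Linux/glibc assumed): error code 125 in the logs above
# is ECANCELED; RGW coroutines return it negated as -125.
assert errno.ECANCELED == 125
print(os.strerror(errno.ECANCELED))  # Operation canceled
```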