Bug #23255
radosgw records wrong data logs when a bucket is created and deleted repeatedly within seconds
% Done:
0%
Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
First, enable the data log and metadata log for multisite.
Second, create a bucket and an object, then delete the object and the bucket. Repeat this process a few times.
Third, fetch the data log from RGW. The data log may contain the ID of the first created bucket instance, but not the ID of the last one.
When multisite is configured, this breaks synchronization between zones, because zone sync relies on the data log being correct.
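The reproduction steps above can be sketched roughly as follows (a sketch only: it assumes an s3cmd profile already configured against the master zone's RGW endpoint, and the bucket/object names are illustrative):

```shell
# Repeatedly create and delete the same bucket in quick succession,
# writing and removing one object each round.
for i in $(seq 1 5); do
    s3cmd mb s3://test-bucket
    s3cmd put /etc/hosts s3://test-bucket/obj
    s3cmd del s3://test-bucket/obj
    s3cmd rb s3://test-bucket
done

# Re-create the bucket one last time, then inspect the data log.
s3cmd mb s3://test-bucket

# List the recorded data-log entries and compare the bucket instance ID
# they reference against the current bucket metadata.
radosgw-admin datalog list
radosgw-admin metadata get bucket:test-bucket
```

If the bug reproduces, the `datalog list` output references an earlier bucket instance ID rather than the one shown by `metadata get` for the most recently created bucket.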
History
#1 Updated by Yehuda Sadeh about 6 years ago
- Assignee set to Yehuda Sadeh