Bug #48705

open

multisite: out of sync of the remove bucket operation

Added by Ilsoo Byun over 3 years ago. Updated over 3 years ago.

Status: New
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

If you remove a bucket and then re-create one with the same name without any delay in between, the removal operation is not synchronized to the slave zone.

You can reproduce this with the procedure below.

test-s3cmd.conf

[default]
access_key = D7JI5261GDFB6EG6M8HR
secret_key = Rfptd6fvs131JGvhwQ49qn8TTHtYecIsWlS9tKpl

host_base = 127.0.0.1:8001
host_bucket = 127.0.0.1:8001

use_https = False

test2-s3cmd.conf

[default]
access_key = D7JI5261GDFB6EG6M8HR2
secret_key = Rfptd6fvs131JGvhwQ49qn8TTHtYecIsWlS9tKpl2

host_base = 127.0.0.1:8001
host_bucket = 127.0.0.1:8001

use_https = False

test script

BUCKET_NAME=othertest

# create two test users on the master zone (c1)
../src/mrun c1 radosgw-admin user create --uid test --display-name test --access-key D7JI5261GDFB6EG6M8HR --secret-key Rfptd6fvs131JGvhwQ49qn8TTHtYecIsWlS9tKpl
../src/mrun c1 radosgw-admin user create --uid test2 --display-name test2 --access-key D7JI5261GDFB6EG6M8HR2 --secret-key Rfptd6fvs131JGvhwQ49qn8TTHtYecIsWlS9tKpl2
sleep 5

# create the bucket as user test, delete it, then immediately re-create it as user test2
s3cmd -c test-s3cmd.conf mb s3://${BUCKET_NAME}
sleep 6
s3cmd -c test-s3cmd.conf rb s3://${BUCKET_NAME}
s3cmd -c test2-s3cmd.conf mb s3://${BUCKET_NAME}
sleep 1

# list the buckets owned by each user on the master (c1) and the slave (c2)
../src/mrun c1 radosgw-admin bucket list --uid test
echo "--------------------------------------------------"
../src/mrun c1 radosgw-admin bucket list --uid test2
echo "=================================================="
../src/mrun c2 radosgw-admin bucket list --uid test
echo "--------------------------------------------------"
../src/mrun c2 radosgw-admin bucket list --uid test2
echo "=================================================="

The result is:

2020-12-23T10:55:15.833+0900 7f15f53d3040 -1 WARNING: all dangerous and experimental features are enabled.
2020-12-23T10:55:15.848+0900 7f15f53d3040 -1 WARNING: all dangerous and experimental features are enabled.
2020-12-23T10:55:15.850+0900 7f15f53d3040 -1 WARNING: all dangerous and experimental features are enabled.
[]
--------------------------------------------------
2020-12-23T10:55:16.680+0900 7fce6e611040 -1 WARNING: all dangerous and experimental features are enabled.
2020-12-23T10:55:16.695+0900 7fce6e611040 -1 WARNING: all dangerous and experimental features are enabled.
2020-12-23T10:55:16.697+0900 7fce6e611040 -1 WARNING: all dangerous and experimental features are enabled.
[
    "othertest" 
]
==================================================
2020-12-23T10:55:17.544+0900 7fe1336fe040 -1 WARNING: all dangerous and experimental features are enabled.
2020-12-23T10:55:17.560+0900 7fe1336fe040 -1 WARNING: all dangerous and experimental features are enabled.
2020-12-23T10:55:17.561+0900 7fe1336fe040 -1 WARNING: all dangerous and experimental features are enabled.
[
    "othertest" 
]
--------------------------------------------------
2020-12-23T10:55:18.377+0900 7f6107ac2040 -1 WARNING: all dangerous and experimental features are enabled.
2020-12-23T10:55:18.393+0900 7f6107ac2040 -1 WARNING: all dangerous and experimental features are enabled.
2020-12-23T10:55:18.394+0900 7f6107ac2040 -1 WARNING: all dangerous and experimental features are enabled.
[
    "othertest" 
]
==================================================

On the slave (c2), the listing shows that both users own the same bucket.

The reason is that the slave requests the bucket's metadata to check whether it was created or deleted. In this case, the master wrote the metadata log entry after deleting the bucket and then created a bucket with the same name right away, so the slave cannot catch the moment at which the bucket was deleted.
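
To look at the state behind these listings directly, the bucket entrypoint metadata and the slave's sync status can be dumped with radosgw-admin; a minimal sketch, assuming the same c1 (master) / c2 (slave) clusters and bucket name as in the script above:

# Sketch only: inspect the bucket entrypoint metadata (owner, creation time,
# linked bucket instance) as stored on each zone, plus the slave's sync state.
../src/mrun c1 radosgw-admin metadata get bucket:othertest
../src/mrun c2 radosgw-admin metadata get bucket:othertest
../src/mrun c2 radosgw-admin sync status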

#1 Updated by Casey Bodley over 3 years ago

It sounds like sync of the bucket entrypoint metadata needs to check whether the owner changed; in that case, it should unlink the bucket from the previous owner and link it to the new one.
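
As a rough illustration of that unlink/relink step (the actual fix would live in the bucket entrypoint sync handler rather than in a manual command), the equivalent admin operations, reusing the bucket and users from the reproducer above, would look something like:

# Illustration only: the per-user link change that entrypoint sync would need
# to apply on the slave when it detects an owner change, expressed as the
# equivalent radosgw-admin commands (c2 is the slave cluster from the script).
../src/mrun c2 radosgw-admin bucket unlink --bucket othertest --uid test
../src/mrun c2 radosgw-admin bucket link --bucket othertest --uid test2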

