Bug #54173

User delete with purge data didn’t delete the data (multisite)

Added by Ist Gab about 2 years ago. Updated about 2 years ago.

Status: Closed
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source: -
Tags: multisite
Backport: -
Regression: No
Severity: 2 - major
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

Hi,

In my multisite setup I deleted a user with radosgw-admin user rm --purge-data, but the objects are still in the cluster.
I know the data is still there because this user owned a bucket with about 1.7 billion objects, and the cluster usage still looks like this:

data:
  pools:   7 pools, 2665 pgs
  objects: 1.77G objects, 67 TiB
  usage:   152 TiB used, 110 TiB / 262 TiB avail
  pgs:     2660 active+clean
           3    active+clean+scrubbing+deep
           2    active+clean+scrubbing

My setup is a 4-site multisite, and the user has objects in buckets on only one of the secondary zones.
I initiated the user removal on the master zone. The user disappeared from radosgw-admin user list and the bucket disappeared from radosgw-admin bucket list, but the objects are still in the secondary zone's cluster.

I've also tried radosgw-admin gc process on the secondary site, but nothing happened.
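
For reference, a minimal sketch of the sequence described above. The user ID is a placeholder, and the gc list call is an assumption about how one might inspect the garbage-collection queue; it is not taken from the report:

  # On the master zone: remove the user and purge its data (uid is a placeholder)
  radosgw-admin user rm --uid=<user-id> --purge-data

  # On the secondary zone: inspect the garbage-collection queue, then drain it
  # (--include-all also processes entries that have not expired yet)
  radosgw-admin gc list --include-all
  radosgw-admin gc process --include-all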

Actions #1

Updated by Neha Ojha about 2 years ago

  • Project changed from Ceph to rgw
Actions #2

Updated by Casey Bodley about 2 years ago

does sync status show that sync has caught up with the deletes?
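
A minimal sketch of the checks being asked about, assuming they are run on the secondary zone; the bucket and zone names are placeholders:

  # Overall multisite sync status, as seen from the secondary zone
  radosgw-admin sync status

  # Per-bucket sync status against a specific source zone
  radosgw-admin bucket sync status --bucket=<bucket-name> --source-zone=<zone-name>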

Actions #3

Updated by Casey Bodley about 2 years ago

  • Status changed from New to Need More Info
  • Tags set to multisite
Actions #4

Updated by Ist Gab about 2 years ago

Casey Bodley wrote:

does sync status show that sync has caught up with the deletes?

It seems not, and I don't understand why, because the metadata is caught up.

radosgw-admin sync status
          realm 5fd28798-9195-44ac-b48d-ef3e95caee48 (comp)
      zonegroup 31a5ea05-c87a-436d-9ca0-ccfcbad481e3 (data)
           zone 9195d086-d75c-40c6-8102-6b7a112d1578 (sha)
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: 61c9d940-fde4-4bed-9389-edc8d7741817 (sin)
                        preparing for full sync
                        full sync: 128/128 shards
                        full sync: 0 buckets to sync
                        incremental sync: 0/128 shards
                        data is behind on 128 shards
                        behind shards: [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127]
                source: 9213182a-14ba-48ad-bde9-289a1c0c0de8 (hkg)
                        preparing for full sync
                        full sync: 128/128 shards
                        full sync: 0 buckets to sync
                        incremental sync: 0/128 shards
                        data is behind on 128 shards
                        behind shards: [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127]
                source: f20ddd64-924b-4f78-8d2d-dd6c65f98ba9 (ash)
                        preparing for full sync
                        full sync: 128/128 shards
                        full sync: 0 buckets to sync
                        incremental sync: 0/128 shards
                        data is behind on 128 shards
                        behind shards: [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127]

I suspect a network issue, but then why is the metadata caught up?

What I've done (see the sketch after this list):
1. From the SHA zone, ran radosgw-admin data-sync init --source-zone=ash
2. From the SHA zone, ran radosgw-admin data-sync init --source-zone=sin
3. From the SHA zone, ran radosgw-admin data-sync init --source-zone=hkg
4. Restarted the gateways in SHA
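
A minimal sketch of that re-initialization sequence, using the documented spelling of the subcommand (data sync init); the restart unit name is a placeholder and depends on how the gateways are deployed:

  # On the SHA zone: re-initialize data sync from each source zone
  radosgw-admin data sync init --source-zone=ash
  radosgw-admin data sync init --source-zone=sin
  radosgw-admin data sync init --source-zone=hkg

  # Restart the gateways in SHA so the sync threads pick up the new state
  # (unit name is a placeholder)
  systemctl restart ceph-radosgw.target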

And in radosgw-admin data sync status --source-zone=<one_of_the_zones> there is nothing in the total :/

"full_sync": {
    "total": 0,
    "complete": 0
}
Actions #5

Updated by J. Eric Ivancich about 2 years ago

  • Subject changed from User delete with purge data didn’t delete the data to User delete with purge data didn’t delete the data (multisite)
Actions #6

Updated by Casey Bodley about 2 years ago

  • Status changed from Need More Info to Closed