
Bug #40794

[RGW] Active bucket marker in stale instances list

Added by Aleksandr Rudenko 2 months ago. Updated about 1 month ago.

Status: New
Priority: Normal
Target version:
Start date: 07/16/2019
Due date:
% Done: 0%
Source: Community (user)
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:

Description

Hi,

I am running Luminous 12.2.12.

On 12.2.5, auto-resharding was enabled, but it was turned off after a few problems with it.
After the update to 12.2.12, auto-resharding was enabled again, and as far as I can see it now works well.

Now I'm worried about stale instances.

For example:

I have a bucket that was successfully resharded in the past:

radosgw-admin bucket stats --bucket clx | grep marker

    "marker": "default.422998.196",
    "max_marker": "0#,1#,2#,3#,4#,5#,6#,7#,8#,9#,10#,11#,12#,13#,14#,15#,16#,17#,18#,19#,20#,21#,22#,23#,24#,25#,26#,27#,28#,29#,30#,31#,32#,33#,34#,35#,36#,37#,38#,39#,40#,41#,42#,43#,44#,45#,46#,47#,48#,49#,50#,51#,52#",

And I can see this same marker in the stale-instances list:

radosgw-admin reshard stale-instances list | grep clx

    "clx:default.422998.196",

As far as I know, the stale-instances list should contain only previous (old) marker IDs, never the bucket's current marker.

If I run:

radosgw-admin reshard stale-instances rm

can it destroy my bucket?

I have a few buckets with this problem.
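Before running `reshard stale-instances rm`, one way to be safer is to cross-check the stale-instances list against the current marker of every live bucket and flag any entry that still points at a live instance. This is only a sketch, not official tooling: the `"bucket:marker"` entry format and the bucket-to-marker dict are assumptions based on the output shown above.

```python
def live_markers_in_stale_list(stale_entries, current_markers):
    """Return stale-instance entries whose marker is still the
    CURRENT marker of a live bucket (removing those would be risky).

    stale_entries:   list of "bucket:marker" strings, as printed by
                     `radosgw-admin reshard stale-instances list`
    current_markers: dict of bucket name -> current marker, collected
                     from `radosgw-admin bucket stats` per bucket
    """
    risky = []
    for entry in stale_entries:
        bucket, _, marker = entry.partition(":")
        if current_markers.get(bucket) == marker:
            risky.append(entry)
    return risky

# Example based on the bucket above: "clx" is flagged because its
# stale entry matches its current marker, "old" is not.
stale = ["clx:default.422998.196", "old:default.100.1"]
current = {"clx": "default.422998.196", "old": "default.999.5"}
print(live_markers_in_stale_list(stale, current))
# -> ['clx:default.422998.196']
```

If this check returns anything, those entries deserve investigation before any removal is attempted.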

History

#1 Updated by Casey Bodley 2 months ago

  • Assignee set to Abhishek Lekshmanan

#2 Updated by Aleksandr Rudenko about 1 month ago

Hi Abhishek, can you help me with this problem?

I have had a permanent large OMAP WARNING on a production cluster for about 2 months. I need to clear this warning. Please help me.
