Support #24457

large omap object

Added by Stephan Schultchen almost 6 years ago. Updated over 5 years ago.

Status: New
Priority: Normal
Assignee: -
Target version:
% Done: 0%
Tags:
Reviewed:
Affected Versions:
Pull request ID:

Description

I have a Ceph cluster that is exclusively serving radosgw/S3.

It only has one bucket, with many objects in it.

After a while it starts reporting '6 large omap objects'.

Information on this issue is pretty limited, and I was not able to find a solution.

Any suggestions on how to get rid of this warning would be appreciated.

History

#1 Updated by Matt Benjamin almost 6 years ago

How many shards does the bucket index have currently?

Matt

#2 Updated by Stephan Schultchen almost 6 years ago

How can I get this information?

#3 Updated by Stephan Schultchen almost 6 years ago

Stephan Schultchen wrote:

How can I get this information?

Google helped (https://arvimal.blog/2016/06/30/sharding-the-ceph-rados-gateway-bucket-index/).

Currently I have: "num_shards": 349
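
In case it helps anyone else, the shard count shows up in the bucket instance metadata, roughly like this; BUCKET_NAME and BUCKET_ID are placeholders to fill in from the first command's output:

radosgw-admin metadata get bucket:BUCKET_NAME                        # prints the bucket_id
radosgw-admin metadata get bucket.instance:BUCKET_NAME:BUCKET_ID     # contains "num_shards"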

the command "radosgw-admin reshard list" shows an resharding ongoing for my bucket:

"old_num_shards": 1,
"new_num_shards": 2

So will the warning state go away by itself, and I just have to wait?

This is currently a test cluster, so I could simply recreate the bucket and reimport all the data.

Is there a way to create a bucket with a high number of shards?
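
For the record, in case I do recreate it: as far as I understand, the default shard count for newly created buckets can be raised with the rgw_override_bucket_index_max_shards option in the RGW client section of ceph.conf, roughly like this (the section name is a placeholder for the actual RGW instance):

[client.rgw.myhost]
rgw_override_bucket_index_max_shards = 512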

#4 Updated by Greg Farnum almost 6 years ago

  • Project changed from Ceph to rgw

#5 Updated by Stephan Schultchen almost 6 years ago

I tried a manual reshard

using this command: radosgw-admin bucket reshard --bucket bucket_name --num-shards 512

but I got this error at the end:
*** NOTICE: operation will not remove old bucket index objects ***
***         these will need to be removed manually             ***
tenant:
bucket name: bucket_name
old bucket instance id: 6f85d718-fd2e-4c1b-a21d-bafb04a8cfcc.184670.1
new bucket instance id: 6f85d718-fd2e-4c1b-a21d-bafb04a8cfcc.176197.1
WARNING: RGWReshard::add failed to drop lock on bucket_name:6f85d718-fd2e-4c1b-a21d-bafb04a8cfcc.184670.1 ret=-2
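
If I read the docs correctly, the leftover index objects of the old bucket instance can be purged with something like the following, using the "old bucket instance id" from the notice above; the warning itself should then only clear after the affected PGs get deep-scrubbed again:

radosgw-admin bi purge --bucket bucket_name --bucket-id 6f85d718-fd2e-4c1b-a21d-bafb04a8cfcc.184670.1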

#6 Updated by Stephan Schultchen almost 6 years ago

I now have "11 large omap objects" and no clue what to do about it.

#7 Updated by Lei Liu over 5 years ago

Any suggestions?

The same happened to me after a manual reshard.

#8 Updated by hoan nv over 5 years ago

I have the same issue.

#9 Updated by jack jack over 5 years ago

I have the same issue. I removed a large file using s3cmd and then got this warning, but I don't know how to solve it.
Any suggestions?

#10 Updated by Will Marley over 5 years ago

jack jack wrote:

I have the same issue. I removed a large file using s3cmd and then got this warning, but I don't know how to solve it.
Any suggestions?

Hi,

We're currently facing this issue as well, and we can't seem to get much attention on it. Sage's suggestion was to increase the warn thresholds in order to suppress the warning. We're not confident doing this, as we should really exceed the default values anyway. We're waiting to find out more from this bug report before doing so, which means leaving our cluster in a HEALTH_WARN state in the meantime.

I believe the parameters he was referring to are the following (pulled from ceph/src/common/options.cc):

Option("osd_deep_scrub_large_omap_object_key_threshold", Option::TYPE_UINT, Option::LEVEL_ADVANCED)
.set_default(2000000)
.set_description("Warn when we encounter an object with more omap keys than this")
.add_service("osd")
.add_see_also("osd_deep_scrub_large_omap_object_value_sum_threshold"),
Option("osd_deep_scrub_large_omap_object_value_sum_threshold", Option::TYPE_SIZE, Option::LEVEL_ADVANCED)
.set_default(1_G)
.set_description("Warn when we encounter an object with more omap key bytes than this")
.add_service("osd")
.add_see_also("osd_deep_scrub_large_omap_object_key_threshold"),

Please update this issue if you find anything that may help us, as from the looks of things there are a few people facing this issue at the moment.

Kind Regards,
Will

#11 Updated by Will Marley over 5 years ago

as we should really exceed the default values anyway.

Shouldn't*

#12 Updated by Enrico Kern over 5 years ago

We are facing the same issue on 13.2.1. We have a bucket of around 60 TB, and it has not synced since Luminous. Since the upgrade to Mimic it shows "1 large omap objects" in the RGW index pool. Dynamic resharding is disabled. I also could not find much information on how to do resharding in a multisite environment. Can I manually reshard the bucket? If so, do I need to do it only on the master zone, or is it independent per zone?

The bucket shows "num_shards": 0:

"num_shards": 0,
"bi_shard_hash_type": 0,
"requester_pays": "false",
"has_website": "false",
"swift_versioning": "false",
"swift_ver_location": "",
"index_type": 0,
"mdsearch_config": [],
"reshard_status": 0,

Sync status shows recovering shards: [84], but it has been doing that for a few weeks now without any progress at all.

#13 Updated by jack jack over 5 years ago

Will Marley wrote:

[...] Please update this issue if you find anything that may help us, as from the looks of things there are a few people facing this issue at the moment.

Hi Will,

Someone told me that omap stores roughly three types of data:
1. bucket indexes
2. the gc list
3. multisite logs

I removed a large file, so I guessed the gc list was causing this problem, but "radosgw-admin gc list --include-all" returns nothing.
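
For reference, the pending garbage collection can be listed and, as far as I know, forced with:

radosgw-admin gc list --include-all
radosgw-admin gc process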

If you don't need multiple realms, you should disable the metadata and data logs; otherwise they generate a lot of log entries and can trigger the large omap object HEALTH_WARN.
Did you change anything else before you got this warning? Would you tell me?

My English is not great, but this is all I know.

Kind Regards,
jack

#14 Updated by Jacek S. over 5 years ago

I managed to get rid of this message with the following process:
  • Enable dynamic resharding - there are many tutorials available; you have to enable it in the zone you use
  • Find the affected object and confirm that it is no longer 'too big' - you can check this with the rados command
  • Locate the PG the object is stored in (the log message gives an object name, not a PG) -> you can query each PG and look in the JSON under .info.stats.stat_sum.num_large_omap_objects
  • Finally, schedule a deep scrub of that PG again, because the counter is only reported to the monitor after a successful deep scrub on one of the OSDs (see the sketch below)
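
A rough sketch of the last three steps; the pool name, log paths, index object name and the pgid 5.3f are placeholders, and jq is assumed to be installed:

# the deep scrub logs the offending object (and its PG) when it finds it
grep 'Large omap object' /var/log/ceph/ceph.log /var/log/ceph/ceph-osd.*.log

# confirm the index object's omap key count is back under the threshold
rados -p default.rgw.buckets.index listomapkeys '.dir.BUCKET_INSTANCE_ID.SHARD' | wc -l

# check the per-PG counter reported by the last deep scrub
ceph pg 5.3f query | jq .info.stats.stat_sum.num_large_omap_objects

# once the object has shrunk, deep-scrub the PG again so the warning can clear
ceph pg deep-scrub 5.3f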
