Bug #40583
Status: closed
Lower the default value of osd_deep_scrub_large_omap_object_key_threshold
% Done:
0%
Backport:
luminous,mimic,nautilus
Regression:
No
Severity:
3 - minor
Description
The current default of 2 million k/v pairs is too high. In particular, recovery takes too long for bucket index objects with this much omap data, which blocks access to client buckets until it completes.
Lower the default for osd_deep_scrub_large_omap_object_key_threshold so that such objects can be detected before they become a problem.
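For reference, the threshold can also be overridden per cluster instead of relying on the default. A minimal ceph.conf sketch (the value 200000 below is illustrative, not a recommendation from this ticket):

```ini
[osd]
# During deep scrub, warn about any object holding more than this many
# omap key/value pairs. Example value; tune to your workload.
osd_deep_scrub_large_omap_object_key_threshold = 200000
```

On releases with the centralized config database (Mimic and later), the same option can also be changed at runtime with `ceph config set osd osd_deep_scrub_large_omap_object_key_threshold <n>`.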
Updated by Neha Ojha almost 5 years ago
- Status changed from New to Fix Under Review
- Assignee set to Neha Ojha
- Pull request ID set to 28782
Updated by Sage Weil almost 5 years ago
- Status changed from Fix Under Review to Pending Backport
Updated by Nathan Cutler almost 5 years ago
- Copied to Backport #40653: luminous: Lower the default value of osd_deep_scrub_large_omap_object_key_threshold added
Updated by Nathan Cutler almost 5 years ago
- Copied to Backport #40654: mimic: Lower the default value of osd_deep_scrub_large_omap_object_key_threshold added
Updated by Nathan Cutler almost 5 years ago
- Copied to Backport #40655: nautilus: Lower the default value of osd_deep_scrub_large_omap_object_key_threshold added
Updated by Nathan Cutler over 4 years ago
- Status changed from Pending Backport to Resolved
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved".
Updated by Patrick Donnelly over 4 years ago
- Related to Bug #42515: fs: OpenFileTable object shards have too many k/v pairs added
Updated by Florian Haas over 4 years ago
I am taking the liberty of adding a couple of recent mailing list threads that highlight a potentially unintended consequence of this change, all related to the radosgw usage log, in the hope that this makes it easier for people to make the connection:
- "Large omap objects in radosgw .usage pool: is there a way to reshard the rgw usage log?" https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/AQNGVY7VJ3K6ZGRSTX3E5XIY7DBNPDHW/
- "Bogus Entries in RGW Usage Log / Large omap object in rgw.log pool" https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/2QNKWK642LWCNCJEB5THFGMSLR37FLX7/
- "default.rgw.log contains large omap object" https://www.mail-archive.com/ceph-users@lists.ceph.com/msg56611.html