Bug #21417 (closed): buffer_anon leak during deep scrub (on otherwise idle osd)

Added by Sage Weil over 6 years ago. Updated over 6 years ago.

Status: Resolved
Priority: Immediate
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: -
Tags: -
Backport: luminous
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Component(RADOS): -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

Observed gobs of RAM (11 GB RSS), most of it buffer_anon (~8 GB), on a basically idle cluster with replication, EC, and cache tiering. Version: 12.2.0.
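
For reference, a per-pool breakdown like the one above comes from the OSD admin socket (ceph daemon osd.<id> dump_mempools), which reports bytes and items for each mempool. The sketch below reads the same counters in-process; it assumes the Luminous-era accessors in src/include/mempool.h (get_pool(), allocated_bytes(), allocated_items()) and only builds inside the Ceph source tree.

    // Sketch: print how much memory the buffer_anon and bluestore cache
    // mempools are currently accounting for. Assumes src/include/mempool.h.
    #include "include/mempool.h"
    #include <iostream>

    void report_anon_vs_cache()
    {
      mempool::pool_t &anon  = mempool::get_pool(mempool::mempool_buffer_anon);
      mempool::pool_t &other = mempool::get_pool(mempool::mempool_bluestore_cache_other);
      std::cout << "buffer_anon: " << anon.allocated_bytes() << " bytes / "
                << anon.allocated_items() << " items\n"
                << "bluestore_cache_other: " << other.allocated_bytes()
                << " bytes\n";
    }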


Related issues: 1 (0 open, 1 closed)

Copied to RADOS - Backport #21650: luminous: buffer_anon leak during deep scrub (on otherwise idle osd) (Resolved, Sage Weil)

#1 - Updated by Sage Weil over 6 years ago

  • Subject changed from buffer_anon leak on mostly-idle osd to buffer_anon leak during deep scrub (on otherwise idle osd)

Definitely happens from an EC pool.

#2 - Updated by Sage Weil over 6 years ago

  • Priority changed from Urgent to Immediate

#3 - Updated by Sage Weil over 6 years ago

  • Status changed from Need More Info to Fix Under Review
  • Backport set to luminous

#4 - Updated by Sage Weil over 6 years ago

OK, the problem is that as scrub (or whatever) happens, the BlueStore cache is populated, but the attrs weren't in the right mempool, which meant we bloated the overall process footprint.
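
To illustrate the accounting involved (not the literal patch that resolved this ticket): buffer memory that never gets tagged stays in the catch-all buffer_anon mempool, so the BlueStore cache trimmer, which sizes the cache from the bluestore_cache_* mempool counters, never sees it and never trims it. A minimal sketch, assuming buffer::ptr::reassign_to_mempool() and the pool indices declared in src/include/buffer.h and src/include/mempool.h:

    // Sketch only: move an attr value's backing buffer from the default
    // buffer_anon pool into the bluestore cache pool so the cache trimmer
    // accounts for it. Illustrative, not the exact fix for this ticket.
    #include "include/buffer.h"
    #include "include/mempool.h"

    void tag_attr_for_cache(ceph::bufferptr &attr_value)
    {
      // Freshly decoded bufferptrs are accounted in buffer_anon by default;
      // reassigning them makes the bytes count against the bluestore cache,
      // which is what the OSD's cache-size limit actually watches.
      attr_value.reassign_to_mempool(mempool::mempool_bluestore_cache_other);
    }

The same call exists on bufferlist; after retagging, dump_mempools should show the bytes move from buffer_anon into bluestore_cache_other.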

#5 - Updated by Sage Weil over 6 years ago

  • Status changed from Fix Under Review to Pending Backport

#6 - Updated by Nathan Cutler over 6 years ago

  • Copied to Backport #21650: luminous: buffer_anon leak during deep scrub (on otherwise idle osd) added

#7 - Updated by Sage Weil over 6 years ago

  • Status changed from Pending Backport to Resolved