Bug #53472

Active OSD processes do not see reduced memory target when adding more OSDs

Added by Stuart Grace over 2 years ago. Updated 4 days ago.

Status: Need More Info
Priority: Normal
Assignee: -
Category: -
Target version:
% Done: 0%
Source: Community (user)
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite: ceph-ansible
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

When you install more OSDs onto a node, the "osd memory target" value in ceph.conf is reduced so that total memory usage stays at 70% of the node's RAM (using the default safety factor). But OSD processes that are already running do not re-read ceph.conf, so they continue to use the old, larger target value. This can lead to out-of-memory errors.

Example: a node has 7 OSDs, so each gets a target of 10% of total memory, leaving 30% free as the safety margin. Now I add 7 more OSDs and the target is reduced to 5%. But the original 7 OSDs continue to use 10% of memory each, plus 7 new ones using 5% each = 105% of memory in use!
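
To make the arithmetic concrete, here is a minimal sketch (hypothetical Python, not Ceph or ceph-ansible code; the helper name and node size are made up) of how the per-OSD target is recomputed and why the already-running OSDs overshoot:

SAFETY_FACTOR = 0.70  # fraction of node RAM that all OSDs together may use

def per_osd_target(total_mem: int, num_osds: int) -> int:
    # Per-OSD "osd memory target": the safety factor split evenly across OSDs.
    return int(total_mem * SAFETY_FACTOR / num_osds)

total_mem = 128 * 2**30               # assume a 128 GiB node
old = per_osd_target(total_mem, 7)    #  7 OSDs -> 10% of RAM each
new = per_osd_target(total_mem, 14)   # 14 OSDs ->  5% of RAM each

# The original 7 OSDs never re-read ceph.conf, so they keep `old`
# while the 7 new OSDs start with `new`:
committed = 7 * old + 7 * new
print(f"committed: {committed / total_mem:.0%}")  # -> committed: 105%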

Is it possible to notify the running OSD processes to re-read ceph.conf when it changes?

#1

Updated by Patrick Donnelly 4 days ago

  • Project changed from 31 to RADOS
#2

Updated by Radoslaw Zarzynski 4 days ago

This tracker is 2 years old. I'm not sure what the situation was back then but, at least nowadays, BlueStore observes these configurables:

const char **BlueStore::get_tracked_conf_keys() const
{
  static const char* KEYS[] = {
    ...
    "osd_memory_target",
    "osd_memory_target_cgroup_limit_ratio",
    ...

Also, nowadays people tend to use the monitors' centralized config DB instead of ceph.conf.
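
For instance (a hedged example; the value shown is just the 4 GiB default, pick whatever fits the node), a change made through the config DB is applied to OSDs at runtime, precisely because osd_memory_target is one of the tracked keys above:

ceph config set osd osd_memory_target 4294967296

Running OSDs pick the new target up without a restart, and per-daemon overrides (ceph config set osd.3 ...) work the same way.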

#3

Updated by Radoslaw Zarzynski 4 days ago

  • Status changed from New to Need More Info

Pacific is EOL. Does it reproduce on newer releases?
