Bug #63933
osd_memory_target_autotune not working properly
Status: New
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
I have a Ceph cluster running version 17.2.6.
Some config from my cluster:
ceph config get osd osd_memory_target_autotune
true
ceph config get mgr mgr/cephadm/autotune_memory_target_ratio
0.700000
ceph config get osd.0 osd_memory_target
26497139538 ~ 24GB
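As a cross-check against the value stored in the monitor config database, the value the running daemon reports can also be queried through the mgr; a minimal sketch, assuming osd.0 is up and reachable:
ceph config show osd.0 osd_memory_target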
But on my server, no OSD actually uses more than 4 GB of memory:
ps -ef | grep osd.0
167 2967139 2967127 0 2023 ? 1-00:05:24 /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false
top -p 2967139
2967139 167 20 0 5504284 3.7g 33960 S 0.3 0.7 1445:24 ceph-osd
podman exec -it <osd-0-id> /bin/bash
ceph daemon /var/run/ceph/ceph-osd.0.asok config show | grep osd_memory_target
"osd_memory_target": "4294967296", "osd_memory_target_autotune": "true", "osd_memory_target_cgroup_limit_ratio": "0.800000",
What is wrong with my config?
How can I work around this? Disable autotune and set it manually?
Thanks.
Updated by hoan nv 4 months ago
I tried setting osd_memory_target manually:
ceph config set osd.0 osd_memory_target_autotune false
ceph config set osd.0 osd_memory_target 12G
then top -p 2967139 shows:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2967139 167 20 0 8274924 7.2g 34836 S 0.0 1.4 1452:49 ceph-osd
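To confirm osd.0 actually picked up the override, the effective value can be re-checked with the same asok query as above, or through the mgr; a minimal sketch:
ceph daemon /var/run/ceph/ceph-osd.0.asok config show | grep osd_memory_target
ceph config show osd.0 osd_memory_target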