Bug #50681

memstore: apparent memory leak when removing objects

Added by Sven Anderson almost 3 years ago. Updated almost 3 years ago.

Status: New
Priority: Normal
Assignee: -
Category: Performance/Resource Usage
Target version: -
% Done: 0%
Source: Community (dev)
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS): OSD
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

When I create and unlink big files in my development environment, as in this [1] little program, the OSD daemon (using the memstore backend) keeps claiming more and more memory, eventually resulting in an OOM kill. If I limit the memory with "osd memory target" and disable the cache, it simply blocks once the memory is used up. If I switch to the filestore backend, the memory leak is gone. Although memstore is not meant for production use, this is still an issue when memstore is used for benchmarking other Ceph-related code.
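
In case the paste behind [1] expires: below is a minimal sketch of the kind of create/write/unlink loop described above (not the exact program from [1]), using libcephfs. The file name, sizes, and round count are made up for illustration, and error handling is kept minimal.

/*
 * Hypothetical reproducer sketch: repeatedly create, write, and unlink
 * one large file on CephFS via libcephfs.  With the memstore backend
 * the OSD's memory is expected to keep growing across rounds instead
 * of returning to its baseline after each unlink.
 *
 * Build: cc -o churn churn.c -lcephfs
 */
#include <cephfs/libcephfs.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define FILE_SIZE ((int64_t)256 * 1024 * 1024) /* 256 MiB per file   */
#define CHUNK     ((int64_t)4 * 1024 * 1024)   /* 4 MiB write chunks */
#define ROUNDS    100

int main(void)
{
    struct ceph_mount_info *cmount;
    char *buf = calloc(1, CHUNK);
    int i, fd;

    if (ceph_create(&cmount, NULL) < 0 ||
        ceph_conf_read_file(cmount, NULL) < 0 || /* default ceph.conf */
        ceph_mount(cmount, "/") < 0) {
        fprintf(stderr, "mount failed\n");
        return 1;
    }

    for (i = 0; i < ROUNDS; i++) {
        int64_t off;

        fd = ceph_open(cmount, "/bigfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) {
            fprintf(stderr, "open failed: %d\n", fd);
            break;
        }
        for (off = 0; off < FILE_SIZE; off += CHUNK)
            ceph_write(cmount, fd, buf, CHUNK, off);
        ceph_close(cmount, fd);

        /* The unlink eventually removes the backing objects on the
         * OSD, so its memory use should drop back; per this report
         * it does not. */
        ceph_unlink(cmount, "/bigfile");
        printf("round %d done\n", i);
    }

    ceph_unmount(cmount);
    ceph_release(cmount);
    free(buf);
    return 0;
}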

This is my ceph.conf:

[global]
fsid = $(uuidgen)
osd crush chooseleaf type = 0
run dir = ${DIR}/run
auth cluster required = none
auth service required = none
auth client required = none
osd pool default size = 1
mon host = ${HOSTNAME}

[mds.${MDS_NAME}]
host = ${HOSTNAME}

[mon.${MON_NAME}]
log file = ${LOG_DIR}/mon.log
chdir = "" 
mon cluster log file = ${LOG_DIR}/mon-cluster.log
mon data = ${MON_DATA}
mon data avail crit = 0
mon addr = ${HOSTNAME}
mon allow pool delete = true

[osd.0]
log file = ${LOG_DIR}/osd.log
chdir = "" 
osd data = ${OSD_DATA}
osd journal = ${OSD_DATA}.journal
osd journal size = 100
osd objectstore = memstore
osd class load list = *
osd class default list = *
osd_max_object_name_len = 256

[1] https://paste.ee/p/fUmYX
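
To observe the growth (assuming the admin socket of this dev cluster is reachable), one can poll "ceph daemon osd.0 dump_mempools" between rounds and compare the byte counts, or simply watch the OSD process RSS in top; the memory keeps rising even after all the files have been unlinked.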


Files

ceph.tar.bz2 (224 KB), ceph test cluster data, Sven Anderson, 05/21/2021 03:07 PM