Bug #55761 (closed)

ceph osd bluestore memory leak

Added by dovefi Z almost 2 years ago. Updated almost 2 years ago.

Status:
Duplicate
Priority:
Normal
Assignee:
-
Target version:
-
% Done:

0%

Source:
Community (user)
Tags:
osd
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Hi everyone, we have a Ceph cluster in which we only use RGW with an EC pool. The OSD memory keeps growing, now to more than 16 GB:

  • ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)
  • osd config
    ceph daemon osd.374 config get osd_memory_target
    {
        "osd_memory_target": "4294967296" 
    }
    
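For reference, the admin socket returns the target as a string inside JSON; a minimal sketch of converting it to GiB, with the JSON literal copied verbatim from the output above:

```python
import json

# Output of `ceph daemon osd.374 config get osd_memory_target`, copied from above.
raw = '{"osd_memory_target": "4294967296"}'

target_bytes = int(json.loads(raw)["osd_memory_target"])
print(target_bytes / 2**30)  # 4.0 (GiB)
```

So the daemon is configured for a 4 GiB target, while resident memory is around 16 GB.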

osd memory used


    PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
2158658 CEPH      20   0 17.023G 0.016T  10700 S   5.6 12.9  97:37.84 CEPH-OSD
2167500 CEPH      20   0 16.503G 0.015T  17392 S   1.7 12.1 103:05.32 CEPH-OSD
2164715 CEPH      20   0 14.230G 0.013T  19904 S   1.3 10.6 101:39.31 CEPH-OSD
2163249 CEPH      20   0 13.249G 0.012T  20068 S   1.3  9.9  94:41.75 CEPH-OSD
2160621 CEPH      20   0 12.728G 0.012T   9256 S   2.3  9.4 103:20.76 CEPH-OSD
2158042 CEPH      20   0 12.748G 0.011T  22344 S   1.0  9.3  88:23.16 CEPH-OSD
2155989 CEPH      20   0 11.843G 0.010T  22656 S   4.3  8.5  91:13.34 CEPH-OSD
2168721 CEPH      20   0 10.896G 0.010T  20204 S   4.6  8.0  93:46.26 CEPH-OSD
2166929 CEPH      20   0 10.458G 9.538G  11004 S   1.3  7.6  91:26.08 CEPH-OSD
2161262 CEPH      20   0 9011516 7.734G   9600 S   1.7  6.2  98:06.58 CEPH-OSD

osd heap dump info

osd.374 dumping heap profile now.
------------------------------------------------
MALLOC:    16957882712 (16172.3 MiB) Bytes in use by application
MALLOC: +            0 (    0.0 MiB) Bytes in page heap freelist
MALLOC: +    241870520 (  230.7 MiB) Bytes in central cache freelist
MALLOC: +     11673568 (   11.1 MiB) Bytes in transfer cache freelist
MALLOC: +     81639440 (   77.9 MiB) Bytes in thread cache freelists
MALLOC: +     75378880 (   71.9 MiB) Bytes in malloc metadata
MALLOC:   ------------
MALLOC: =  17368445120 (16563.8 MiB) Actual memory used (physical + swap)
MALLOC: +      3547136 (    3.4 MiB) Bytes released to OS (aka unmapped)
MALLOC:   ------------
MALLOC: =  17371992256 (16567.2 MiB) Virtual address space used
MALLOC:
MALLOC:        1110696              Spans in use
MALLOC:             47              Thread heaps in use
MALLOC:           8192              Tcmalloc page size
------------------------------------------------
Call ReleaseFreeMemory() to release freelist memory to the OS (via madvise()).
Bytes released to the OS take up virtual address space but no physical memory.
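To put the heap dump in perspective against the configured target, a small sketch (all numbers copied from the outputs above):

```python
# Values copied from the heap dump and config output above.
in_use = 16957882712   # tcmalloc "Bytes in use by application"
target = 4294967296    # osd_memory_target

print(in_use / 2**20)  # ~16172.3 MiB, matching the heap dump line
print(in_use / target) # ~3.95: almost four times the configured target
```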

The OSD is using about 16 GB of memory.

osd dump_mempools

{
    "bloom_filter": {
        "items": 0,
        "bytes": 0
    },
    "bluestore_alloc": {
        "items": 15718400,
        "bytes": 15718400
    },
    "bluestore_cache_data": {
        "items": 0,
        "bytes": 0
    },
    "bluestore_cache_onode": {
        "items": 2747609,
        "bytes": 1846393248
    },
    "bluestore_cache_other": {
        "items": 682869756,
        "bytes": 9725848257
    },
    "bluestore_fsck": {
        "items": 0,
        "bytes": 0
    },
    "bluestore_txc": {
        "items": 60,
        "bytes": 44160
    },
    "bluestore_writing_deferred": {
        "items": 641,
        "bytes": 2639419
    },
    "bluestore_writing": {
        "items": 1585,
        "bytes": 6707995
    },
    "bluefs": {
        "items": 5971,
        "bytes": 227232
    },
    "buffer_anon": {
        "items": 16814,
        "bytes": 15956676
    },
    "buffer_meta": {
        "items": 115,
        "bytes": 10120
    },
    "osd": {
        "items": 117,
        "bytes": 1480752
    },
    "osd_mapbl": {
        "items": 0,
        "bytes": 0
    },
    "osd_pglog": {
        "items": 525725,
        "bytes": 103983300
    },
    "osdmap": {
        "items": 137722,
        "bytes": 3183664
    },
    "osdmap_mapping": {
        "items": 0,
        "bytes": 0
    },
    "pgmap": {
        "items": 0,
        "bytes": 0
    },
    "mds_co": {
        "items": 0,
        "bytes": 0
    },
    "unittest_1": {
        "items": 0,
        "bytes": 0
    },
    "unittest_2": {
        "items": 0,
        "bytes": 0
    },
    "total": {
        "items": 702024515,
        "bytes": 11722193223
    }
}
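A rough sketch of how to read the dump: note which pool dominates the mempool total, then compare that total with tcmalloc's in-use figure; the gap is memory tcmalloc sees but the mempools do not account for. Numbers are copied from above, and only the two largest pools are included for brevity:

```python
# Figures copied from the dump_mempools output above (dominant pools only).
mempools = {
    "bluestore_cache_onode": 1846393248,
    "bluestore_cache_other": 9725848257,
}
mempool_total = 11722193223  # "total.bytes" from the dump
heap_in_use = 16957882712    # "Bytes in use by application" from the heap dump

# bluestore_cache_other alone is ~83% of all mempool-tracked memory.
print(mempools["bluestore_cache_other"] / mempool_total)

# ~4993 MiB is in use by the application but not tracked by any mempool.
print((heap_in_use - mempool_total) / 2**20)
```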

The bluestore_cache_other value is very high (about 9.7 GB of the 11.7 GB mempool total).

What is happening?


Related issues: 1 (0 open, 1 closed)

Is duplicate of bluestore - Bug #48729: Bluestore memory leak on srub operations (Resolved)
