Bug #20464 (closed): cache tier osd memory high memory consumption

Added by Peng Xie almost 7 years ago. Updated over 6 years ago.

Status: Resolved
Priority: High
Assignee:
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport: jewel,kraken
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

The OSDs used as the cache tier in our EC cluster suffer from high memory usage (5 GB~6 GB per OSD),
and the consumption keeps increasing.

Furthermore, we found that the high memory usage was related to the pglog, which was never recycled or trimmed.
There was no client IO; however, the cache tier kept flushing dirty data into the EC backend.

After investigating the code, I found that the only place the "pg_trim_to" cursor gets updated is on the code
path execute_ctx() -> calc_trim_to(), and the pglog trim routine is driven by this pg_trim_to cursor.
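
To make the mechanism concrete, here is a minimal standalone C++ sketch of a trim cursor bounding an in-memory log. This is only an illustrative model, not Ceph code: PGLogModel, append() and the 3000-entry limit are made up for the example, and pg_trim_to / calc_trim_to are borrowed from the real code purely to show their role.

// Toy model of a PG log with a trim cursor.  Illustrative only; not the
// Ceph implementation, just the shape of the mechanism described above.
#include <cstdint>
#include <deque>

struct PGLogModel {
  struct Entry { uint64_t version; };

  std::deque<Entry> log;        // in-memory pglog entries
  uint64_t pg_trim_to = 0;      // cursor: entries with version <= this may be dropped
  uint64_t max_entries = 3000;  // assumed limit, standing in for the OSD's pg log cap

  void append(uint64_t version) { log.push_back({version}); }

  // Analogue of calc_trim_to(): advance the cursor so that at most
  // max_entries versions remain in memory.
  void calc_trim_to() {
    if (log.size() > max_entries)
      pg_trim_to = log[log.size() - max_entries - 1].version;
  }

  // Analogue of the pglog trim routine: drop everything up to the cursor.
  // If nothing ever advances pg_trim_to, nothing is ever freed.
  void trim() {
    while (!log.empty() && log.front().version <= pg_trim_to)
      log.pop_front();
  }
};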

But in the cache tier flush scenario, the cache tier OSD first sends a COPY_FROM request to the EC pool;
the EC pool then issues a COPY_GET request back to the cache tier to fetch the data being flushed; the cache
tier replies to the COPY_GET with the data, and the EC pool writes it into the backend. Finally, the cache
tier receives the flush-success ack and clears the dirty flag of the flushed object in the final callback
try_flush_mark_clean(), where the pglog entry for the clearing operation is generated. However, nowhere along
this whole flush path on the cache tier OSD is calc_trim_to() called to advance the "pg_trim_to" cursor.
As a result, the pglog on the cache tier OSD piles up.

I have made a patch for this; the outline of the idea is to call calc_trim_to() during simple_opc_submit()
so that pg_trim_to gets updated. The fix works well and memory consumption stays stable at around 1 GB.
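
As a rough before/after illustration with the toy model above (again, a toy model, not the actual OSD code paths): if only the client-IO path ever advances the cursor, a flush-only workload grows the log without bound, while also advancing the cursor on the internal-op path keeps it bounded.

// Simulate a flush-only workload against the toy PGLogModel above: every
// operation appends a pglog entry (like the one generated in
// try_flush_mark_clean()), and the flag decides whether the trim cursor is
// also advanced on that path (the essence of what the patch does around
// simple_opc_submit()).
#include <cstddef>
#include <cstdint>
#include <iostream>

static std::size_t simulate(bool advance_cursor_on_internal_ops) {
  PGLogModel pg;
  for (uint64_t v = 1; v <= 100000; ++v) {
    pg.append(v);
    if (advance_cursor_on_internal_ops)
      pg.calc_trim_to();   // update pg_trim_to on this path too
    pg.trim();             // trim can only drop entries up to pg_trim_to
  }
  return pg.log.size();
}

int main() {
  std::cout << "without cursor update: " << simulate(false) << " entries kept\n";
  std::cout << "with cursor update:    " << simulate(true)  << " entries kept\n";
  // Expected: 100000 vs. 3000 -- the unbounded case matches what we saw on
  // the cache tier OSDs.
}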

The logs pasted below support my analysis.

During OSD boot-up there are lots of log lines like the following, which show the large number of pglog entries per OSD:
....
2017-06-13 18:07:53.319708 7f2a79c86800 20 read_log 885'214873 (692'173784) delete 34:1025ed46:::10002ec765d.00000000:head by osd.25.0:25806625 2017-06-10 02:24:21.396782
2017-06-13 18:07:53.319713 7f2a79c86800 20 read_log 885'214874 (712'179829) delete 34:1025ed9e:::10003065f40.00000000:head by osd.25.0:25806626 2017-06-10 02:24:21.398292
2017-06-13 18:07:53.319720 7f2a79c86800 20 read_log 885'214875 (636'130092) delete 34:1025eddc:::10001d75e5c.00000000:head by osd.25.0:25806628 2017-06-10 02:24:21.516190
2017-06-13 18:07:53.319725 7f2a79c86800 20 read_log 885'214876 (885'214817) modify 34:100de6e2:::100039059a9.00000000:head by mds.0.12806:1241643 2017-06-10 02:24:34.906434
2017-06-13 18:07:53.319730 7f2a79c86800 20 read_log 885'214877 (885'214819) modify 34:10256e6d:::10003905a87.00000000:head by mds.0.12806:1242091 2017-06-10 02:24:34.932457
2017-06-13 18:07:53.319735 7f2a79c86800 20 read_log 885'214878 (0'0) promote 34:103ef530:::10003908255.00000000:head by osd.25.0:25807317 2017-06-10 02:24:52.984780
2017-06-13 18:07:53.319740 7f2a79c86800 20 read_log 885'214879 (885'214878) modify 34:103ef530:::10003908255.00000000:head by client.75489.0:18788997 0.000000
2017-06-13 18:07:53.319745 7f2a79c86800 20 read_log 885'214880 (0'0) promote 34:10033573:::100039083b3.00000000:head by osd.25.0:25807429 2017-06-10 02:24:53.516080
......

And the following command, which sums the pg log length column of the ceph pg dump pgs output, also supports my analysis:

[root@xt3 ceph]# ceph pg dump pgs --cluster xtao | awk '{sum+=$8};END {print sum}'
dumped pgs in format plain
41468439


Related issues 2 (0 open, 2 closed)

Copied to RADOS - Backport #20511: jewel: cache tier osd memory high memory consumption (Resolved, Wei-Chung Cheng)
Copied to RADOS - Backport #20512: kraken: cache tier osd memory high memory consumption (Rejected)
#1

Updated by Peng Xie almost 7 years ago

https://github.com/ceph/ceph/pull/16011
This is my pull request; please help review it.

#2

Updated by Nathan Cutler almost 7 years ago

  • Project changed from Ceph to RADOS
  • Status changed from New to Fix Under Review
#3

Updated by Kefu Chai almost 7 years ago

  • Status changed from Fix Under Review to Resolved
  • Assignee set to Peng Xie
#4

Updated by Kefu Chai almost 7 years ago

  • Status changed from Resolved to Pending Backport
  • Backport set to jewel,kraken
#5

Updated by Nathan Cutler almost 7 years ago

  • Copied to Backport #20511: jewel: cache tier osd memory high memory consumption added
#6

Updated by Nathan Cutler almost 7 years ago

  • Copied to Backport #20512: kraken: cache tier osd memory high memory consumption added
#7

Updated by Nathan Cutler over 6 years ago

  • Status changed from Pending Backport to Resolved