Documentation #24712
Memory recommendations for bluestore
% Done:
0%
Tags:
Backport:
Reviewed:
Affected Versions:
Pull request ID:
Description
We configured the following settings for our luminous/bluestore OSDs:
[osd]
bluestore_cache_size_hdd = 6442450944
bluestore_cache_size_ssd = 6442450944
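For context, per-OSD memory consumption can be inspected at runtime with `ceph daemon osd.<id> dump_mempools`. A minimal sketch of summarizing that output, assuming a JSON shape of pool name to items/bytes (the exact layout varies between Ceph releases, and the sample values below are made up for illustration):

```python
import json

# Hypothetical sample in the shape of `ceph daemon osd.<id> dump_mempools`
# output; pool names and byte counts here are invented for illustration.
sample = json.loads("""{
  "bluestore_alloc":       {"items": 1000, "bytes": 16000},
  "bluestore_cache_data":  {"items": 2000, "bytes": 4194304},
  "bluestore_cache_onode": {"items": 3000, "bytes": 2064384},
  "buffer_anon":           {"items": 100,  "bytes": 65536}
}""")

# Sum all pools and print them largest-first, so the dominant
# consumers (cache data, onodes, allocator metadata) stand out.
total = sum(pool["bytes"] for pool in sample.values())
for name, pool in sorted(sample.items(), key=lambda kv: -kv[1]["bytes"]):
    print(f"{name:24s} {pool['bytes']:>12d} bytes")
print(f"{'total':24s} {total:>12d} bytes")
```

Note that the mempool total itself does not cover everything the process maps (allocator fragmentation, thread stacks, etc.), which is part of what this ticket asks to have documented.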
Even so, we see the following significantly higher memory usage:
top - 10:24:06 up 64 days, 21:53, 1 user, load average: 3.45, 2.73, 2.63
Tasks: 777 total, 1 running, 776 sleeping, 0 stopped, 0 zombie
%Cpu(s): 3.9 us, 1.6 sy, 0.0 ni, 94.3 id, 0.0 wa, 0.0 hi, 0.2 si, 0.0 st
KiB Mem : 19798443+total, 55776476 free, 13814640+used, 4061552 buff/cache
KiB Swap: 1949692 total, 1949692 free, 0 used. 49839068 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6415 ceph 20 0 9.942g 8.617g 29200 S 14.9 4.6 6311:11 ceph-osd
5034 ceph 20 0 9998.2m 8.538g 29312 S 14.5 4.5 5407:01 ceph-osd
5514 ceph 20 0 9997.0m 8.443g 29288 S 12.5 4.5 6289:37 ceph-osd
7185 ceph 20 0 9.831g 8.370g 29120 S 16.5 4.4 4995:51 ceph-osd
6199 ceph 20 0 9889.2m 8.353g 29388 S 11.9 4.4 4989:51 ceph-osd
5274 ceph 20 0 9785.1m 8.297g 29160 S 24.4 4.4 5354:22 ceph-osd
4906 ceph 20 0 9810.5m 8.262g 29076 S 20.5 4.4 6910:31 ceph-osd
5392 ceph 20 0 9824.2m 8.244g 29396 S 25.7 4.4 6201:31 ceph-osd
5774 ceph 20 0 9780.4m 8.144g 29084 S 44.6 4.3 7203:01 ceph-osd
6013 ceph 20 0 9991160 8.130g 29312 S 17.5 4.3 6710:32 ceph-osd
5151 ceph 20 0 9949548 8.085g 29192 S 16.2 4.3 6241:40 ceph-osd
2093553 ceph 20 0 9938148 8.071g 29268 S 13.9 4.3 2212:12 ceph-osd
6958 ceph 20 0 9558004 7.951g 29452 S 12.2 4.2 4013:14 ceph-osd
6678 ceph 20 0 9774736 7.933g 29084 S 31.0 4.2 3326:05 ceph-osd
6797 ceph 20 0 9715632 7.880g 29260 S 20.5 4.2 4352:13 ceph-osd
5888 ceph 20 0 9365472 7.773g 29128 S 16.2 4.1 4529:36 ceph-osd
...
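The gap between the configured cache and the resident sizes above can be quantified directly from the RES column. A small arithmetic sketch (RES values copied from the `top` output above):

```python
# bluestore_cache_size_* as configured above, in bytes
cache_bytes = 6442450944
cache_gib = cache_bytes / 2**30  # exactly 6.0 GiB

# RES values (GiB) for the ceph-osd processes listed above
res_gib = [8.617, 8.538, 8.443, 8.370, 8.353, 8.297, 8.262, 8.244,
           8.144, 8.130, 8.085, 8.071, 7.951, 7.933, 7.880, 7.773]

# Average memory used beyond the configured bluestore cache
overheads = [r - cache_gib for r in res_gib]
avg = sum(overheads) / len(overheads)
print(f"average overhead above configured cache: {avg:.2f} GiB")
```

So each OSD is using roughly 2 GiB more than the configured 6 GiB cache, which is exactly the per-OSD overhead the first suggestion below asks to have documented.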
I discovered the following documentation regarding OSD memory usage:
http://docs.ceph.com/docs/luminous/start/hardware-recommendations/
http://docs.ceph.com/docs/luminous/rados/configuration/bluestore-config-ref/
I would be happy to have more detailed documentation; my suggestions:
- describe the additional memory needed per OSD beyond bluestore_cache_size_* (I ran into OOM problems because of this unexpected memory usage)
- the hardware recommendations describe a rule of thumb of 1 GB RAM per TB of disk; this should be made more precise and reconciled with the bluestore documentation
- describe the different defaults for bluestore_cache_size_* (why is the SSD default higher than the HDD one?)
- describe rules of thumb for the different types of disks/SSDs (is it really a good idea to add 20 GB of bluestore cache for a 20 TB SATA disk? I suppose the necessary cache is not always proportional to disk size)
- describe how to analyze the bluestore hit rate in order to size the bluestore cache well
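Regarding the last point, the cache hit rate can in principle be derived from the onode counters in `ceph daemon osd.<id> perf dump`. A minimal sketch, assuming the counter names `bluestore_onode_hits` / `bluestore_onode_misses` under the `bluestore` section (names and sample values here are assumptions; check the actual output of your release):

```python
import json

# Hypothetical excerpt of `ceph daemon osd.<id> perf dump`;
# counter names and values are assumed for illustration.
perf = json.loads("""{
  "bluestore": {
    "bluestore_onode_hits": 950000,
    "bluestore_onode_misses": 50000
  }
}""")

hits = perf["bluestore"]["bluestore_onode_hits"]
misses = perf["bluestore"]["bluestore_onode_misses"]
hit_rate = hits / (hits + misses)
print(f"onode cache hit rate: {hit_rate:.1%}")
```

Documenting a workflow like this (sample the counters, grow or shrink the cache, re-check the rate) would make cache sizing much less of a guessing game than a flat GB-per-TB rule.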