Documentation #40473

enhance db sizing

Added by Torben Hørup almost 5 years ago. Updated 12 months ago.

Status: Resolved
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Tags:
Backport:
Reviewed:
Affected Versions:
Pull request ID:

Description

The sizing section doesn't mention RocksDB extents, which are essential to understanding how much of a DB partition will actually be used.

This matters especially with Nautilus, which warns about BlueStore spillover (see comments on #38745).
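
For illustration only (the numbers below are assumptions based on RocksDB's usual defaults, not taken from the Ceph docs): with max_bytes_for_level_base = 256 MiB, max_bytes_for_level_multiplier = 10, roughly 1 GiB set aside for WAL + L0, and the common assumption that only complete levels fit on the DB device before data spills to the slow device, the arithmetic looks like this:

GiB = 1024 ** 3
MiB = 1024 ** 2

def usable_db_space(db_partition_bytes,
                    level_base=256 * MiB,   # max_bytes_for_level_base (assumed)
                    multiplier=10,          # max_bytes_for_level_multiplier (assumed)
                    wal_and_l0=1 * GiB):    # rough allowance for WAL + L0 (assumed)
    """Return (usable_bytes, level_sizes) under the assumptions above."""
    usable = wal_and_l0
    level_size = level_base
    levels = []
    while usable + level_size <= db_partition_bytes:
        usable += level_size
        levels.append(level_size)
        level_size *= multiplier
    return usable, levels

for size_gib in (10, 30, 64, 300):
    usable, levels = usable_db_space(size_gib * GiB)
    print(f"{size_gib:>4} GiB partition: ~{usable / GiB:.1f} GiB actually usable "
          f"({len(levels)} whole levels), the rest sits idle or spills over")

This is the kind of reasoning behind the commonly quoted "useful" block.db sizes of roughly 3/30/300 GB rather than a flat 4%, and it is what the sizing section should spell out.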

History

#1 Updated by Lars Täuber over 4 years ago

I was going to open a new ticket, but this seems to be the same topic.

Here is what I wanted to request:

clarify documentation of size for bluestore block.db and block.wal

Could a developer who knows the BlueStore/RocksDB internals well enough please correct and/or clarify the documented sizing of block.db?

The docs still state the 4% rule for block.db:
https://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/#sizing
That seems quite confusing.

This wiki page says something about the level-size steps RocksDB uses:
http://yourcmc.ru/wiki/Ceph_performance#About_block.db_sizing

What about these variables:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-August/036553.html
max_write_buffer_number
min_write_buffer_number_to_merge
write_buffer_size

What is flushed to where? It looks like the WAL could also be configured to use more than just 1 GB of SSD.
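
As a sketch of the rule of thumb from the thread above (assumed values: the WAL footprint is roughly write_buffer_size * max_write_buffer_number, while min_write_buffer_number_to_merge only controls how many memtables get merged per flush to L0):

MiB = 1024 ** 2

def approx_wal_bytes(write_buffer_size=256 * MiB,   # assumed default
                     max_write_buffer_number=4):    # assumed default
    # Up to max_write_buffer_number memtables, each backed by the WAL, can exist
    # before they are flushed to L0, so their product bounds the WAL space.
    return write_buffer_size * max_write_buffer_number

print(approx_wal_bytes() / MiB)              # ~1024 MiB -> the usual "1 GB" figure
print(approx_wal_bytes(512 * MiB, 8) / MiB)  # larger/more buffers -> more SSD for the WAL

So raising those options would indeed let the WAL use more than 1 GB, which is part of what the docs should explain.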

Can RocksDB use multiple Lx extents (e.g. multiple L3 extents)?
Are the level sizes mentioned in the yourcmc.ru wiki the default values used by BlueStore (in mimic/nautilus)?
Are the sizes RocksDB uses for the levels configurable?
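
From what I can tell, the per-level targets come from ordinary RocksDB options (max_bytes_for_level_base, max_bytes_for_level_multiplier) that Ceph passes through the bluestore_rocksdb_options string. As a sketch (the option string below is just an example, not necessarily the default shipped with mimic/nautilus; the value in effect can be checked with something like `ceph daemon osd.<id> config get bluestore_rocksdb_options`):

example = ("max_write_buffer_number=4,min_write_buffer_number_to_merge=1,"
           "write_buffer_size=268435456,"
           "max_bytes_for_level_base=268435456,max_bytes_for_level_multiplier=10")

opts = dict(kv.split("=", 1) for kv in example.split(","))
base = int(opts.get("max_bytes_for_level_base", 256 * 1024 ** 2))
mult = float(opts.get("max_bytes_for_level_multiplier", 10))

for level in range(1, 5):
    print(f"L{level} target: {base * mult ** (level - 1) / 1024 ** 3:.2f} GiB")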

Thanks

#2 Updated by Neha Ojha over 4 years ago

  • Status changed from New to In Progress

We have started making some improvements in https://github.com/ceph/ceph/pull/32226.

#3 Updated by Igor Fedotov 12 months ago

  • Status changed from In Progress to Resolved
