Bug #54166
ceph version 15.2.15: osd configuration osd_op_num_shards_ssd or osd_op_num_threads_per_shard_ssd does not take effect
% Done:
0%
Regression:
No
Severity:
3 - minor
Description
Configure osd_op_num_shards_ssd=8 or osd_op_num_threads_per_shard_ssd=8 in ceph.conf, then use "ceph daemon osd.x config show" to confirm that the setting was applied.
Stress-test the rbd device with fio:
fio -direct=1 -iodepth=64 -rw=randwrite -ioengine=libaio -bs=4k -size=100G -numjobs=32 -runtime=60 -group_reporting -filename=/dev/rbd0 -name=Rand_Write_Testing
During the test, the CPU usage of the osd process never exceeds 200%, and long-term observation with "top -H -p <ceph-osd pid> -n 1 | grep tp_osd_tp" shows at most one busy tp_osd_tp thread.
With osd_op_num_shards=8 configured instead, the CPU usage of the osd process can reach 800%. The crush_class of the OSD is SSD.
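For reference, a minimal ceph.conf fragment matching the configuration described above might look like the following (a sketch; the section placement follows standard Ceph config conventions, and the _ssd-suffixed options are only expected to apply to OSDs whose device class is detected as ssd):

```ini
[osd]
# Device-class-specific sharding options (reported in this issue as not taking effect)
osd_op_num_shards_ssd = 8
osd_op_num_threads_per_shard_ssd = 8

# By contrast, the generic option below was observed to take effect
# (CPU usage of the osd process reached 800%):
# osd_op_num_shards = 8
```

After restarting the OSD, the applied values can be inspected as in the report, e.g. "ceph daemon osd.0 config show | grep osd_op_num".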
Updated by Neha Ojha about 2 years ago
- Project changed from Ceph to RADOS
- Assignee set to Sridhar Seshasayee
Sridhar, can you please take a look?