Support #53432 (closed)

How to use and optimize ceph dpdk

Added by chunsong feng over 2 years ago. Updated about 2 years ago.

Status: Resolved
Priority: Normal
Assignee: -
Category: Documentation
Target version: -
% Done: 0%
Tags: -
Reviewed: -
Affected Versions: -
Component(RADOS): Messenger
Pull request ID: 44292
Description

Write a Ceph DPDK enabling guide and place it in doc/dev. The document should cover the following:
1. Compilation
Install dpdk and dpdk-devel, then compile with do_cmake.sh -DWITH_DPDK=ON.
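A minimal build sketch, assuming an RPM-based distribution and the ninja build directory that do_cmake.sh creates by default; package names and paths may differ on your system:

# Install the DPDK libraries and headers (package names assumed; adjust per distro)
sudo dnf install -y dpdk dpdk-devel
# Configure the Ceph build with the DPDK messenger enabled
./do_cmake.sh -DWITH_DPDK=ON
# Build in the directory created by do_cmake.sh
cd build && ninja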

2. Deployment
It is recommended to use the NIC SR-IOV function, with each OSD occupying one VF. An ms_dpdk_port_options option will be added to specify the VF and bond ports. VFs are created through sysfs:
echo $numvfs > /sys/class/net/$port/device/sriov_numvfs
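For example, assuming a physical port named eth0 and four OSDs on the host (the port name and VF count are illustrative):

# Create one VF per OSD on the physical port
echo 4 > /sys/class/net/eth0/device/sriov_numvfs
# Confirm the VFs were created
ip link show eth0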

Modify /etc/passwd to give the ceph user root privileges (UID/GID 0) and write access to /var/run:
ceph:x:0:0:Ceph storage service:/var/lib/ceph:/bin/false:/var/run
Load the VFIO modules and bind the VF to the vfio-pci driver:
modprobe vfio
modprobe vfio_pci
dpdk-devbind.py -b vfio-pci 0000:7d:01.0
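To confirm the binding, the device status can be listed (dpdk-devbind.py ships with the DPDK usertools):

# The VF should now appear under "Network devices using DPDK-compatible driver"
dpdk-devbind.py --status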
Configure hugepages, e.g. via sysctl:
vm.nr_hugepages = xxx
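A minimal hugepage setup sketch; the page count of 1024 is illustrative and should be sized to the number of OSDs and their DPDK memory needs:

# Reserve 2 MB hugepages at runtime (persist the setting in /etc/sysctl.conf)
sysctl -w vm.nr_hugepages=1024
# Make sure hugetlbfs is mounted where ms_dpdk_hugepages will point
mount -t hugetlbfs nodev /dev/hugepages
# Verify the reservation
grep HugePages /proc/meminfo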

3. Ceph Configuration
Settings shared by all OSDs go in the [osd] section; per-OSD settings (coremask, IP addresses, and the VF to use) go in the [osd.x] section.
[osd]
ms_type=async+dpdk
ms_async_op_threads=1

ms_dpdk_port_id=0
ms_dpdk_gateway_ipv4_addr=172.19.36.1
ms_dpdk_netmask_ipv4_addr=255.255.255.0
ms_dpdk_hugepages=/dev/hugepages
ms_dpdk_hw_flow_control=false
ms_dpdk_lro=false
ms_dpdk_hw_queue_weight=1
ms_dpdk_memory_channel=4
ms_dpdk_debug_allow_loopback=true

[osd.x]
ms_dpdk_coremask=0xf0
ms_dpdk_host_ipv4_addr=172.19.36.51
public_addr=172.19.36.51
cluster_addr=172.19.36.51
ms_dpdk_port_options=--allow=0000:7d:01.1
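For reference, ms_dpdk_coremask=0xf0 is binary 11110000, i.e. CPU cores 4-7 are dedicated to this OSD's DPDK threads. A second OSD on the same host would get its own VF, coremask, and addresses; the values below are illustrative only:

[osd.y]
ms_dpdk_coremask=0xf00
ms_dpdk_host_ipv4_addr=172.19.36.52
public_addr=172.19.36.52
cluster_addr=172.19.36.52
ms_dpdk_port_options=--allow=0000:7d:01.2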

4. Debug and optimization
Raise the debug_dpdk and debug_ms log levels to trace DPDK initialization and messenger traffic.
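For example, in ceph.conf (the levels shown are illustrative; higher values are more verbose):

[osd]
debug_dpdk = 10
debug_ms = 1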

#1

Updated by Kefu Chai about 2 years ago

  • Status changed from New to Resolved
  • Pull request ID set to 44292