Feature #48292
closed
cephadm: allow more than 60 OSDs per host
Added by Sebastian Wagner over 3 years ago.
Updated almost 3 years ago.
Description
If the cluster is set to have very dense nodes (>60 OSDs per host) please make sure to assign sufficient ports for Ceph OSDs. The default (6800-7300) currently allows for no more than 62 OSDs per host.
For clusters with dense nodes like this, please adjust the setting "ms_bind_port_max" to a suitable value. Each OSD consumes up to 8 additional ports.
For example, a host set to run 96 OSDs will need 768 ports. "ms_bind_port_max" should be set to at least 7568 by running
ceph config set osd.* ms_bind_port_max 7568.
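The arithmetic above can be sketched as a small shell snippet. This is a hedged illustration, not part of the fix: it assumes the default port range starts at 6800 (the "ms_bind_port_min" default mentioned above) and that each OSD consumes up to 8 ports, as the description states. The variable names are made up for this example.

```shell
# Sketch: derive a suitable ms_bind_port_max for a dense OSD host.
# Assumptions (from the description): ports start at 6800 and each
# OSD may consume up to 8 ports.
OSDS_PER_HOST=96
PORTS_PER_OSD=8
MS_BIND_PORT_MIN=6800

PORTS_NEEDED=$((OSDS_PER_HOST * PORTS_PER_OSD))        # 96 * 8 = 768
MS_BIND_PORT_MAX=$((MS_BIND_PORT_MIN + PORTS_NEEDED))  # 6800 + 768 = 7568
echo "$MS_BIND_PORT_MAX"

# Then apply it, using the command form given in the description:
# ceph config set osd.* ms_bind_port_max "$MS_BIND_PORT_MAX"
```

With 96 OSDs this prints 7568, matching the value in the description; for other densities, substitute the OSD count per host.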
- Tags set to low-hanging-fruit
Sebastian Wagner wrote:
[...]
Can I work on this issue?
- Assignee set to Sejal Welankar
- Related to Bug #50526: OSD massive creation: OSDs not created added
- Related to Bug #47873: /usr/lib/sysctl.d/90-ceph-osd.conf getting installed in container, rendering it ineffective added
- Assignee deleted (Sejal Welankar)
- Assignee set to Sebastian Wagner
- Pull request ID set to 42210
- Status changed from New to Fix Under Review
- Status changed from Fix Under Review to Resolved