Feature #48292
cephadm: allow more than 60 OSDs per host
Status: Closed
% Done: 0%
Description
If the cluster is set up with very dense nodes (>60 OSDs per host), make sure to assign sufficient ports for the Ceph OSDs. The default range (6800-7300) currently allows for no more than 62 OSDs per host.
For clusters with dense nodes like this, adjust the setting "ms_bind_port_max" to a suitable value; each additional OSD will consume 8 ports.
For example, a host that is set to run 96 OSDs will need 768 ports, so "ms_bind_port_max" should be set to at least 7568 by running:
ceph config set osd ms_bind_port_max 7568
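The arithmetic above (default "ms_bind_port_min" of 6800, 8 ports per OSD) can be sketched as a small helper; the function name and constants are illustrative, not part of Ceph itself:

```python
# Sketch: compute a sufficient ms_bind_port_max for a dense OSD host,
# using the figures stated in this issue (assumptions, not Ceph API):
#   - ms_bind_port_min defaults to 6800
#   - each OSD consumes 8 ports

MS_BIND_PORT_MIN = 6800
PORTS_PER_OSD = 8

def required_port_max(osds_per_host: int) -> int:
    """ms_bind_port_max value that leaves room for osds_per_host OSDs."""
    return MS_BIND_PORT_MIN + osds_per_host * PORTS_PER_OSD

print(required_port_max(96))  # 7568, matching the example in the issue
```

With the stock maximum of 7300 the range holds 501 ports, which is why capacity tops out at 62 OSDs (62 × 8 = 496 ≤ 501 < 63 × 8).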
Updated by Sejal Welankar about 3 years ago
Sebastian Wagner wrote:
> If the cluster is set to have very dense nodes (>60 OSDs per host) please make sure to assign sufficient ports for Ceph OSDs. The default (6800-7300) currently allows for no more than 62 OSDs per host.
> For cluster with dense nodes like this please adjust the setting "ms_bind_port_max" to a suitable value. Each OSD will consume 8 additional ports.
> For example, given a host that is set to run 96 OSDs, 768 ports will be needed. "ms_bind_port_max" should be set at least to 7568 by running[...]
Can I work on this issue?
Updated by Sebastian Wagner almost 3 years ago
- Related to Bug #50526: OSD massive creation: OSDs not created added
Updated by Sebastian Wagner almost 3 years ago
- Related to Bug #47873: /usr/lib/sysctl.d/90-ceph-osd.conf getting installed in container, rendering it ineffective added
Updated by Sebastian Wagner almost 3 years ago
- Assignee deleted (Sejal Welankar)
Updated by Sebastian Wagner almost 3 years ago
- Assignee set to Sebastian Wagner
- Pull request ID set to 42210
Updated by Sebastian Wagner almost 3 years ago
- Status changed from New to Fix Under Review
Updated by Kefu Chai almost 3 years ago
- Status changed from Fix Under Review to Resolved