How Many OSDs Can I Run per Host?


Theoretically, a host can run as many OSDs as its hardware supports. Many vendors market storage hosts with large numbers of drives (for example, 36 drives), each capable of backing an OSD. However, we do not recommend packing a very large number of OSDs onto a single host. Ceph was designed to distribute load across what we call "failure domains". See CRUSH Maps for details.
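As an illustration, a CRUSH replicated rule that uses the host as the failure domain looks roughly like the sketch below, so that no two replicas of an object land on the same host. The rule name and id here are placeholders, not values from this document:

```
# Hypothetical CRUSH rule: place each replica on a different host.
rule replicated_host {
    id 1
    type replicated
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
```

With a rule like this, losing one host costs at most one replica of any object; the remaining copies on other hosts are used to recover.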

At the petabyte scale, hardware failure is an expectation, not a freak occurrence. Failure domains include datacenters, rooms, rows, racks, and network switches. In a single host, power supplies, motherboards, NICs, and drives are all potential points of failure.

If a large percentage of your cluster's OSDs reside on a single host and that host fails, a correspondingly large fraction of the cluster's capacity goes offline at once. This can cause disruptive data migration and long recovery times. We encourage diversifying risk across failure domains, and that includes making reasonable tradeoffs regarding the number of OSDs per host.
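As a back-of-the-envelope illustration (a simplified model, not an exact prediction of Ceph's recovery behavior, which depends on CRUSH placement and replication settings), the fraction of the cluster's data that must be re-replicated after a host failure grows with that host's share of the OSDs:

```python
def recovery_fraction(osds_on_host: int, total_osds: int) -> float:
    """Approximate fraction of cluster data that must be re-replicated
    when one host fails, assuming data is spread evenly across OSDs.
    (Hypothetical helper for illustration only.)"""
    return osds_on_host / total_osds

# A dense 36-OSD host in a 144-OSD cluster: a quarter of the
# cluster's data must be recovered when that one host fails.
print(recovery_fraction(36, 144))   # → 0.25

# The same 36 OSDs spread 12 per host across three hosts: a single
# host failure now affects only an eighth of the data.
print(recovery_fraction(12, 144))   # → 0.0833...
```

Spreading the same number of OSDs across more hosts shrinks the blast radius of any single host failure, at the cost of more chassis to buy and manage.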