Documentation #47176
open
creating pool doc is very out-of-date
Added by Patrick Donnelly over 3 years ago.
Updated 3 months ago.
Description
https://docs.ceph.com/docs/master/rados/operations/pools/#create-a-pool
Ideally we shouldn't be talking about PGs much when it comes to creating pools. I think the recommendation now is to set the target size (post-creation) and let the pg_autoscaler worry about changing the PGs. Right? In fact, setting pg_num on a pool during creation will usually just be overridden by the pg_autoscaler immediately afterwards (unless it is explicitly turned off).
FWIW, some admins (including me) have found the PG autoscaler to decrease `pgp_num` far too aggressively, and have seen cluster errors as a result of PG merging.
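The workflow described above can be sketched with the Ceph CLI. This is a hedged example against a running cluster: the pool name `mypool` and the ratio/size values are placeholders, not recommendations.

```shell
# Create a pool without specifying pg_num; the pg_autoscaler
# (enabled by default in recent releases) picks and adjusts pg_num.
ceph osd pool create mypool

# Hint the expected fraction of cluster capacity this pool will use,
# so the autoscaler can size pg_num up front instead of reacting later.
ceph osd pool set mypool target_size_ratio 0.2

# Alternatively, give an absolute capacity hint:
# ceph osd pool set mypool target_size_bytes 100T

# If you prefer to manage pg_num manually (e.g. because of the
# aggressive-merging concern above), disable the autoscaler per pool:
ceph osd pool set mypool pg_autoscale_mode off
```

These commands require a live cluster, so the values shown are illustrative only.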
- Pull request ID set to 50639
https://docs.ceph.com/en/latest/rados/operations/pools/#create-a-pool contains the following text:
Placement Groups: You can set the number of placement groups (PGs) for the pool. In a typical configuration, the target number of PGs is approximately one hundred PGs per OSD. This provides reasonable balancing without consuming excessive computing resources. When setting up multiple pools, be careful to set an appropriate number of PGs for each pool and for the cluster as a whole. Each PG belongs to a specific pool: when multiple pools use the same OSDs, make sure that the sum of PG replicas per OSD is in the desired PG-per-OSD target range. To calculate an appropriate number of PGs for your pools, use the pgcalc tool.
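The "sum of PG replicas per OSD" sentence in the quoted text is easy to misread, so here is the arithmetic spelled out for a hypothetical cluster (10 OSDs, replication size 3, target of 100 PG replicas per OSD; all numbers are assumptions for illustration):

```shell
# Hypothetical cluster parameters:
osds=10            # number of OSDs
target_per_osd=100 # target PG replicas per OSD (the figure the docs cite)
size=3             # replication factor

# Total PG replicas the cluster should host:
total_replicas=$((osds * target_per_osd))   # 10 * 100 = 1000

# Each PG contributes $size replicas, so the pg_num budget summed
# across all pools is:
total_pg_num=$((total_replicas / size))     # 1000 / 3 = 333
echo "$total_pg_num"
```

So in this sketch the pg_num values of all pools together should add up to roughly 333, not 1000.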
I have a couple of questions about this:
- Is the suggested target number of PGs per OSD (100) still a good suggestion?
- The pgcalc link is broken. I don't know whether the pgcalc tool is still available anywhere. Does anyone know?
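Regarding the first question: whatever the current guidance is, these commands show where a cluster actually stands relative to a per-OSD target (real commands; exact output columns vary by Ceph version). If I recall correctly, the autoscaler's target is governed by the `mon_target_pg_per_osd` option, whose default has been 100 in recent releases, but that is worth verifying.

```shell
# Per-OSD utilization; the PGS column is the number of PG replicas
# currently mapped to each OSD.
ceph osd df

# The autoscaler's view of each pool: its target ratio/bytes and
# the pg_num it would choose.
ceph osd pool autoscale-status
```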
See also https://github.com/ceph/ceph/pull/55447 - this removes instructions to change certain settings
See also https://github.com/ceph/ceph/pull/55419 - this relates to the autoscaler