Documentation #11503
[Docs] Placement Groups
Status: Closed
Description
The top of the PG page [1] is completely wrong.
a) You do not create a pool in this way:

    ceph osd pool set {pool-name} pg_num

But you do set the number of PGs for a pool in this way (see the sketch after the description):

    ceph osd pool set {pool-name} pg_num {pg_num}
b) PG numbers are limited to 32 per OSD.
For instance, the page says "Between 5 and 10 OSDs set pg_num to 512". This will not work: the maximum number of PGs I can set for 10 OSDs is 320 (32 PGs/OSD x 10 OSDs).
[1]: http://ceph.com/docs/master/rados/operations/placement-groups/
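
For clarity, a minimal sketch of the two distinct operations (the pool name "test_pool" and the PG counts of 128 and 256 are illustrative, not from the docs page):

    # Create a pool, specifying its initial number of PGs:
    ceph osd pool create test_pool 128
    # Later, raise the number of PGs for the existing pool:
    ceph osd pool set test_pool pg_num 256
    # pgp_num usually has to follow pg_num for rebalancing to take effect:
    ceph osd pool set test_pool pgp_num 256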
Updated by Kefu Chai almost 9 years ago
Peter Matulis wrote:
The top of the PG page [1] is completely wrong.
b) PG numbers are limited to 32 per OSD.
For instance, the page says "Between 5 and 10 OSDs set pg_num to 512". This will not work: the maximum number of PGs I can set for 10 OSDs is 320.
Peter, I am able to set pg_num to 1024 for a 3-OSD cluster created using the vstart.sh script. By inspecting the code, I see we have a setting named "mon_max_pool_pg_num" which controls the maximum number of PGs for a pool, but this number is not associated with the number of OSDs. The following is how I create a 1024-PG pool with my little cluster.
$ ./ceph osd pool create cache_pool4 1024
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
pool 'cache_pool4' created
$ ./ceph osd pool get cache_pool4 all
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
size: 3
min_size: 1
crash_replay_interval: 0
pg_num: 1024
pgp_num: 1024
crush_ruleset: 0
auid: 0
write_fadvise_dontneed: false
$ ./ceph osd ls
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
0
1
2
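
If the per-pool cap itself needs to be inspected or raised, a hedged sketch (the monitor id "a" matches a vstart.sh cluster, and the raised value of 131072 is illustrative; mon_max_pool_pg_num defaults to 65536 as far as I can tell):

    # Read the current per-pool PG limit from a running monitor's admin socket:
    ./ceph daemon mon.a config get mon_max_pool_pg_num
    # Raise it at runtime on all monitors:
    ./ceph tell mon.* injectargs '--mon_max_pool_pg_num 131072'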
[1]: http://ceph.com/docs/master/rados/operations/placement-groups/