Bug #17169: Wrong check of the limit on creating PGs
Status: Closed
Description
When creating PGs, I get an error message: the Ceph CLI checks the number of PGs created per OSD.
Updated by Desmond Shih over 7 years ago
- File Selection_055.png added
By default, the limit on the number of new PGs created per OSD is said to be 32.
So I tried some examples and found that the check is wrong.
I have 3 OSDs, and the pool 'rbd' has 8 PGs by default.
The limit on creating new PGs should be 3 * 32 = 96 PGs,
so an error should be shown when I set the pool's pg_num to more than 104 (8 + 96).
But it reported an error at 107 and passed at 106.
Updated by huang jun over 7 years ago
the code shows that:
(106 - 8) / 3 = 32 under integer division, which is not bigger than mon_osd_max_split_count (default: 32),
so you can set it to 106;
but when you set it to 107:
(107 - 8) / 3 = 33, which is bigger than 32, so it reports an error.
Updated by Desmond Shih over 7 years ago
Hi huang jun,
There is some misunderstanding.
In a cluster with 3 OSDs, at most 96 (32 * 3) new PGs can be created at once.
So growing a pool that starts with 8 PGs up to 106 (8 + 98) should fail (98 > 96),
while growing it to 104 (8 + 96) should pass.
But currently growing to 106 passes; that should be fixed.
In the source code, the number of new PGs per OSD is stored as an integer. The fractional part is discarded, which leads to the wrong result.
Updated by Kefu Chai over 7 years ago
- Status changed from New to Fix Under Review
Updated by Kefu Chai over 7 years ago
- Status changed from Fix Under Review to Resolved
technically, this change applies to hammer and jewel. but it's not critical. so i am marking it resolved.