Bug #41183

pg autoscale on EC pools

Added by imirc tw almost 5 years ago. Updated over 3 years ago.

Status: Resolved
Priority: High
Assignee:
Category: EC Pools
Target version:
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS): OSD
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

The pg_autoscaler plugin wants to drastically increase pg_num on my EC pool, from 8192 to 65536, but it does not seem to account for the extra shards created on other OSDs, which leads to far more PGs per OSD than the configured limit of 100.

For example, I've got 624 OSDs with pg_num 8192 on the EC 6+2 pool. My calculation is that this results in (6+2) * 8192 / 624 ≈ 105 PGs per OSD (which is also what I observe).
Following the pg_autoscaler suggestion would lead to 65536 PGs and ~840 PGs per OSD, which seems crazy.
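As a quick cross-check of the numbers above, here is a minimal Python sketch of that arithmetic. It is illustrative only, not the autoscaler's actual code; the 100-PG limit presumably corresponds to the mon_target_pg_per_osd setting (default 100).

    k, m = 6, 2                # EC 6+2: data and coding shards
    num_osds = 624
    shards = k + m             # each PG places one shard on k + m distinct OSDs

    def pgs_per_osd(pg_num):
        """Average number of PG shards landing on each OSD."""
        return shards * pg_num / num_osds

    print(round(pgs_per_osd(8192)))   # ~105, matching what is observed
    print(round(pgs_per_osd(65536)))  # ~840, far beyond the 100-PG target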

Can anyone explain what I am missing here, or is this an autoscaler miscalculation? The logic it uses takes the pool size as 1.33 (33% overhead), from which it calculates an expected 1.33 * 65536 / 624 ≈ 139 PGs per OSD.
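For illustration, a sketch of the suspected discrepancy, assuming the autoscaler weights PGs by the raw-to-usable ratio (k + m) / k ≈ 1.33 rather than by the full shard count k + m = 8. Variable names are hypothetical, not the plugin's actual identifiers.

    k, m = 6, 2
    num_osds = 624
    pg_num = 65536

    raw_ratio = (k + m) / k                   # ~1.33, the "size" the autoscaler uses
    estimate = raw_ratio * pg_num / num_osds  # ~140 PGs/OSD (~139 above uses 1.33 exactly)
    actual = (k + m) * pg_num / num_osds      # ~840 shards per OSD actually placed

    print(round(estimate), round(actual))     # 140 840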
