Bug #52640

open

when OSDs are out, reducing pool size reports an error "Error ERANGE: pool id # pg_num 256 size 1 would mean 1280 total pgs, which exceeds max 1200 (mon_max_pg_per_osd 300 * num_in_osds 4)"

Added by lei cao over 2 years ago. Updated over 2 years ago.

Status:
Need More Info
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
performance
Component(RADOS):
Monitor
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

At first my cluster had 6 OSDs and 3 pools, each with pg_num 256 and size 2, and mon_max_pg_per_osd is 300. So we need 1536 PGs < 1800.
Then 2 OSD disks developed bad sectors, so we marked them out. To prevent too many PGs on the remaining 4 OSDs, we wanted to reduce the pool size temporarily, but the command reports an error: "Error ERANGE: pool id # pg_num 256 size 1 would mean 1280 total pgs, which exceeds max 1200 (mon_max_pg_per_osd 300 * num_in_osds 4)"
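The arithmetic behind the error can be sketched as follows (a simplified model of the monitor's projected-PG check; the function name and shape are illustrative, not Ceph's actual code):

```python
# Simplified sketch of the monitor's projected-PG check that produces
# the "Error ERANGE" above. Names here are illustrative.

def check_pg_limit(pools, mon_max_pg_per_osd, num_in_osds):
    """pools: list of (pg_num, size) tuples.
    Returns (projected_pgs, max_allowed, within_limit)."""
    projected = sum(pg_num * size for pg_num, size in pools)
    max_allowed = mon_max_pg_per_osd * num_in_osds
    return projected, max_allowed, projected <= max_allowed

# Original cluster: 6 in OSDs, 3 pools with pg_num 256 and size 2
print(check_pg_limit([(256, 2)] * 3, 300, 6))
# -> (1536, 1800, True)

# After marking 2 OSDs out, only 4 in OSDs remain. Even with one pool
# reduced to size 1, the other two pools still count at size 2:
print(check_pg_limit([(256, 1), (256, 2), (256, 2)], 300, 4))
# -> (1280, 1200, False), matching the 1280 > 1200 in the error message
```

This shows why shrinking one pool's size does not get below the limit: the check is against the whole cluster's projected PG count, while the denominator has already dropped from 6 to 4 in OSDs.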

#2

Updated by Neha Ojha over 2 years ago

  • Status changed from New to Need More Info

We can work around this by temporarily increasing mon_max_pg_per_osd, right?
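A minimal sketch of that workaround, assuming a release with the centralized config database; the pool name `mypool` is a placeholder:

```shell
# Raise the per-OSD PG limit temporarily so the size change is accepted
ceph config set global mon_max_pg_per_osd 400

# The projected-PG check now passes: 1280 <= 400 * 4 = 1600
ceph osd pool set mypool size 1

# Once the failed OSDs are replaced and back in, undo the override
# (or restore your previous value with "ceph config set ... 300")
ceph config rm global mon_max_pg_per_osd
```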

#3

Updated by lei cao over 2 years ago

Can global params like mon_max_pg_per_osd be updated at runtime? It seems that mon_max_pg_per_osd is not observed by the mgr.

#4

Updated by Neha Ojha over 2 years ago

lei cao wrote:

Can global params like mon_max_pg_per_osd be updated at runtime? It seems that mon_max_pg_per_osd is not observed by the mgr.

This config option is injectable at runtime, but it is missing "flag: runtime" (this needs to be fixed), which is why you see it "not observed by mgr".
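Two ways to check and apply this at runtime (both are standard Ceph commands; whether the injected value takes effect without a restart depends on the missing runtime flag noted above):

```shell
# Inspect the option's metadata; the output reports whether it can be
# updated at runtime, which reflects the presence of the runtime flag
ceph config help mon_max_pg_per_osd

# Inject the value into running monitors via the classic injectargs path
ceph tell mon.* injectargs '--mon_max_pg_per_osd 400'
```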

