Bug #22440

New pgs per osd hard limit can cause peering issues on existing clusters

Added by Nick Fisk over 6 years ago. Updated about 6 years ago.

Status: Resolved
Priority: Urgent
Category: -
Target version: -
% Done: 0%
Source: Community (user)
Regression: No
Severity: 3 - minor

Description

During an upgrade of the OSDs in my cluster from Filestore to Bluestore, the CRUSH layout changed. This left a number of PGs stuck in an activating+remapped state and blocked IO. It wasn't initially obvious what was preventing these PGs from completing the peering process.

I finally discovered the following in one of the OSD logs:
osd.68 106454 maybe_wait_for_max_pg withhold creation of pg 0.1cf: 403 >= 400
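
For context, here is a minimal sketch of the arithmetic behind that message, assuming the Luminous-era options mon_max_pg_per_osd (default 200) and osd_max_pg_per_osd_hard_ratio (default 2.0); the real check lives in the OSD's maybe_wait_for_max_pg, so this is only an illustrative reconstruction, not the actual code:

# Illustrative reconstruction of the per-OSD "overdose protection" check.
# Assumes mon_max_pg_per_osd = 200 and osd_max_pg_per_osd_hard_ratio = 2.0
# (Luminous defaults); not the actual OSD implementation.
mon_max_pg_per_osd = 200                 # soft per-OSD PG target
osd_max_pg_per_osd_hard_ratio = 2.0      # hard limit = ratio * soft target
hard_limit = int(mon_max_pg_per_osd * osd_max_pg_per_osd_hard_ratio)  # 400

def would_withhold(pgs_on_osd):
    # One more PG instantiation is withheld once the OSD is at the limit.
    return pgs_on_osd >= hard_limit

print(would_withhold(403))               # True -> "403 >= 400" as in the log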

Although the average number of PGs per OSD across the whole cluster was only just over 200, this new node had much larger disks and so took a much greater share of PGs. With other OSDs being taken out of service to be upgraded to Bluestore, this pushed the number of PGs on this OSD over 400 and caused a huge number of blocked requests.
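
As a rough illustration of why the disk size matters (the weights and counts below are hypothetical, not my actual CRUSH map): PG replicas land on OSDs roughly in proportion to CRUSH weight, so an OSD with twice the weight of its peers ends up with roughly twice the cluster-wide average.

# Hypothetical numbers: 99 x 4 TB OSDs plus one new 8 TB OSD, with the
# ~200 PGs/OSD cluster-wide average from the report. PG replicas are
# distributed roughly in proportion to CRUSH weight.
num_small = 99
small_weight = 4.0                       # 4 TB OSDs
big_weight = 8.0                         # one new 8 TB OSD
avg_pgs_per_osd = 200
total_pg_replicas = (num_small + 1) * avg_pgs_per_osd   # 20000
total_weight = num_small * small_weight + big_weight    # 404.0

expected_on_big_osd = total_pg_replicas * big_weight / total_weight
print(round(expected_on_big_osd))        # ~396, already brushing the 400 hard limit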

Should this new hard PG limit apply to existing PGs which are only being "created" on an OSD because they are moving due to a CRUSH change? Or should it only apply when PGs are being created for the first time, i.e. when creating a new pool or increasing the PG count of an existing one?

I can see several scenarios where, if a user is near the limit and a host/rack failure occurs, a large number of PGs could be blocked from peering because of this new limit, reducing the availability of the cluster.
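
To make that concern concrete with made-up numbers: if the surviving OSDs are already near the hard limit, the PGs remapped off a failed host cannot be instantiated on them and stay blocked in peering.

# Made-up failure scenario: 10 hosts x 10 OSDs, each OSD already close to
# the 400-PG hard limit. Losing one host remaps its PG replicas across the
# 90 survivors and pushes them over the limit, so those PGs cannot peer.
hard_limit = 400
pgs_per_osd_before = 380
osds_per_host = 10
hosts = 10

remapped = pgs_per_osd_before * osds_per_host      # PG replicas on the lost host
survivors = osds_per_host * (hosts - 1)
pgs_per_osd_after = pgs_per_osd_before + remapped / survivors

print(round(pgs_per_osd_after))                    # ~422 >= 400 -> creations withheld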

Please close if this is believed to be normal behavior.


Related issues: 1 (0 open, 1 closed)

Has duplicate: RADOS - Feature #22973: log lines when hitting "pg overdose protection" (Duplicate, 02/09/2018)
