Bug #53442


mgr: pg_num scales beyond pgp_num

Added by Sage Weil over 2 years ago. Updated over 1 year ago.

Status: Pending Backport
Priority: Urgent
Assignee: -
Category: -
Target version: -
% Done: 0%
Source:
Tags: backport_processed
Backport: pacific,octopus
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

In a large cluster we can get pools like:

pool 9 'cephfs.a.data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32768 pgp_num 1 pgp_num_target 32768 autoscale_mode on last_change 78007 lfor 0/0/78007 flags hashpspool stripe_width 0 application cephfs

due to pg_num scaling: pg_num has been raised to 32768 while pgp_num is still 1. This then makes us hit the per-OSD PG limits.
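For reference, one minimal way to spot and manually work around this state with the standard ceph CLI (the pool name and target value below are just taken from the example above; normally the mgr is expected to step pgp_num up on its own):

# Show pg_num / pgp_num for all pools, as in the dump above
ceph osd dump | grep pg_num

# Check the affected pool directly
ceph osd pool get cephfs.a.data pg_num
ceph osd pool get cephfs.a.data pgp_num

# Manual workaround sketch: bring pgp_num up to match pg_num so data is
# actually remapped across the new PGs
ceph osd pool set cephfs.a.data pgp_num 32768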


Related issues 2 (0 open, 2 closed)

Copied to mgr - Backport #53492: pacific: mgr: pg_num scales beyond pgp_num (Resolved, Cory Snyder)
Copied to mgr - Backport #53493: octopus: mgr: pg_num scales beyond pgp_num (Resolved, Cory Snyder)
#1

Updated by Sage Weil over 2 years ago

  • Backport set to pacific,octopus
#2

Updated by Neha Ojha over 2 years ago

  • Pull request ID set to 44155
#3

Updated by Sage Weil over 2 years ago

  • Status changed from Fix Under Review to Pending Backport
#4

Updated by Backport Bot over 2 years ago

  • Copied to Backport #53492: pacific: mgr: pg_num scales beyond pgp_num added
#5

Updated by Backport Bot over 2 years ago

  • Copied to Backport #53493: octopus: mgr: pg_num scales beyond pgp_num added
#6

Updated by Backport Bot over 1 year ago

  • Tags set to backport_processed