Bug #41756

open

mgr/pg_autoscaler: Recommends wrong PG count

Added by Ashley Merrick over 4 years ago. Updated about 4 years ago.

Status:
New
Priority:
Normal
Assignee:
-
Category:
pg_autoscaler module
Target version:
-
% Done:

0%

Source:
Community (user)
Tags:
Backport:
nautilus
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

On a 30-OSD EC pool (8+2), the autoscaler recommends a PG count of 2048 instead of a sane 256-512.

ceph osd pool autoscale-status
 POOL                     SIZE  TARGET SIZE  RATE  RAW CAPACITY   RATIO  TARGET RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
 ecpool                 104.6T               1.25        272.8T  0.4794        0.9000   1.0     512        2048  warn
 device_health_metrics  876.4k                3.0        92124M  0.0000                 1.0       1           4  warn
 ecmeta                  3461k                3.0        92124M  0.0001                 1.0     100           4  warn

    272.87100 root ec
 -5        90.95700     host sn-s01
  3   hdd   9.09599         osd.3       up  1.00000 1.00000
  4   hdd   9.09599         osd.4       up  1.00000 1.00000
  5   hdd   9.09599         osd.5       up  1.00000 1.00000
  6   hdd   9.09599         osd.6       up  1.00000 1.00000
  7   hdd   9.09599         osd.7       up  1.00000 1.00000
  8   hdd   9.09599         osd.8       up  1.00000 1.00000
  9   hdd   9.09599         osd.9       up  1.00000 1.00000
 10   hdd   9.09599         osd.10      up  1.00000 1.00000
 11   hdd   9.09599         osd.11      up  1.00000 1.00000
 12   hdd   9.09599         osd.12      up  1.00000 1.00000
 -9        90.95700     host sn-s02
 13   hdd   9.09599         osd.13      up  1.00000 1.00000
 14   hdd   9.09599         osd.14      up  1.00000 1.00000
 15   hdd   9.09599         osd.15      up  1.00000 1.00000
 16   hdd   9.09599         osd.16      up  1.00000 1.00000
 17   hdd   9.09599         osd.17      up  1.00000 1.00000
 18   hdd   9.09599         osd.18      up  1.00000 1.00000
 19   hdd   9.09599         osd.19      up  1.00000 1.00000
 20   hdd   9.09599         osd.20      up  1.00000 1.00000
 21   hdd   9.09599         osd.21      up  1.00000 1.00000
 22   hdd   9.09599         osd.22      up  1.00000 1.00000
-17        90.95700     host sn-s03
 29   hdd   9.09499         osd.29      up  1.00000 1.00000
 30   hdd   9.09499         osd.30      up  1.00000 1.00000
 31   hdd   9.09499         osd.31      up  1.00000 1.00000
 32   hdd   9.09499         osd.32      up  1.00000 1.00000
 33   hdd   9.09499         osd.33      up  1.00000 1.00000
 34   hdd   9.09499         osd.34      up  1.00000 1.00000
 35   hdd   9.09499         osd.35      up  1.00000 1.00000
 36   hdd   9.09499         osd.36      up  1.00000 1.00000
 37   hdd   9.09499         osd.37      up  1.00000 1.00000
 38   hdd   9.09499         osd.38      up  1.00000 1.00000
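For reference, a rough back-of-the-envelope sketch of the usual PG-count rule of thumb, using the numbers from the output above. This is illustrative only and does not claim to reproduce the pg_autoscaler module's actual code. The mon_target_pg_per_osd value of 100 (the Nautilus default) and the choice of divisor are assumptions; the 0.90 target ratio and the 1.25 rate are taken from the autoscale-status output. It shows why roughly 256 PGs would be expected here, and that dividing by the EC rate (1.25) rather than the pool size (10) lands exactly on the observed 2048.

    import math

    def nearest_power_of_two(n: float) -> int:
        # Autoscaler recommendations are whole powers of two, so round to the nearest one.
        if n <= 1:
            return 1
        lower = 2 ** math.floor(math.log2(n))
        upper = lower * 2
        return lower if n - lower < upper - n else upper

    num_osds = 30            # 3 hosts x 10 OSDs under the "ec" root
    target_pg_per_osd = 100  # mon_target_pg_per_osd default (assumed)
    target_ratio = 0.90      # TARGET RATIO column for ecpool
    ec_size = 10             # k+m = 8+2, so each PG places shards on 10 OSDs
    ec_rate = 1.25           # RATE column = (k+m)/k = 10/8

    # Total per-OSD PG budget allocated to this pool by its target ratio.
    raw_pg_budget = num_osds * target_pg_per_osd * target_ratio  # 2700

    # Expected: divide by the number of shards per PG (pool size = 10).
    expected = nearest_power_of_two(raw_pg_budget / ec_size)     # 270 -> 256

    # The observed recommendation matches dividing by the rate (1.25) instead.
    observed = nearest_power_of_two(raw_pg_budget / ec_rate)     # 2160 -> 2048

    print(expected, observed)  # 256 2048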
Actions #1

Updated by Sebastian Wagner over 4 years ago

  • Category changed from ceph-mgr to pg_autoscaler module
Actions #2

Updated by Sebastian Wagner over 4 years ago

  • Description updated (diff)
Actions #3

Updated by Sebastian Wagner over 4 years ago

  • Description updated (diff)
Actions #4

Updated by Sebastian Wagner over 4 years ago

  • Subject changed from mgr/autoscale to mgr/pg_autoscaler: Recommends wrong PG count
Actions #5

Updated by Nathan Cutler over 4 years ago

  • Backport set to nautilus
Actions #6

Updated by Brian Koebbe about 4 years ago

Same issue here:

See (duplicate issue?) #41183#note-2
