Bug #41536

Total amount to PG's is more than 100%

Added by Christoffer Lilja about 1 year ago. Updated 11 months ago.

Status: Resolved
Priority: Normal
Assignee:
Category: dashboard/landingpage
Target version:
% Done: 0%
Source:
Tags:
Backport: nautilus
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite: fs
Pull request ID:
Crash signature:

Description

I upgraded to Nautilus a week ago and visited the Dashboard.
The total percentage of PGs is way over 100% in the "PGs clean" pie chart and the related PG status graphs.

The autoscale-status also reports a strange RATIO:
ceph osd pool autoscale-status
POOL                 SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
cephfs_metadata      17680M               4.0   34465G        0.0020                1.0   8                   warn
cephfs_data_reduced  16485G               2.0   34465G        0.9567                1.0   375                 warn
cephfs_data          6405G                3.0   34465G        0.5575                1.0   250                 warn
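
For reference (this is an assumption read off the numbers above, not taken from the autoscaler code): RATIO looks like SIZE * RATE / RAW CAPACITY, e.g. 16485G * 2.0 / 34465G ≈ 0.9567 and 6405G * 3.0 / 34465G ≈ 0.5575, so the three per-pool ratios sum to about 1.52, i.e. more than 100% of the raw capacity.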

I'm happy to assist with more information.


Related issues

Duplicates: mgr - Bug #41234: More than 100% in a dashboard PG Status (Duplicate, 08/13/2019)
Copied to: mgr - Backport #41809: nautilus: Total amount to PG's is more than 100% (Resolved)

History

#1 Updated by Greg Farnum about 1 year ago

  • Project changed from Ceph to mgr

#2 Updated by Ricardo Dias about 1 year ago

  • Category set to dashboard/osds

#3 Updated by Tiago Melo about 1 year ago

Hi, do you know how I can replicate this?
In both master and nautilus I'm getting the correct information.

#4 Updated by Christoffer Lilja about 1 year ago

I don't know how to reproduce this.

My CephFS was created in Jewel and my cluster was upgraded from there.
I first created the metadata pool and the cephfs data pool. Later on (still in Jewel) I created the "reduced" data pool and mapped a directory to that pool using the ceph.dir.layout attribute.

I guess that CephFS counts the total usage incorrectly when using two data pools?
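
For context, a rough sketch of that kind of setup (the filesystem name "cephfs", the mount path, and the pg_num value are placeholders; the pool name is the one from the output above):

# add a second data pool to the filesystem
ceph osd pool create cephfs_data_reduced 64
ceph fs add_data_pool cephfs cephfs_data_reduced
# pin a directory on a mounted CephFS to that pool via the layout xattr
setfattr -n ceph.dir.layout.pool -v cephfs_data_reduced /mnt/cephfs/reduced

New files created under that directory then land in the second data pool, so the filesystem spans two data pools.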

#5 Updated by Tiago Melo about 1 year ago

  • Status changed from New to Fix Under Review
  • Backport set to nautilus
  • Pull request ID set to 30343

#6 Updated by Tiago Melo about 1 year ago

I was able to reproduce it by creating a replicated pool with a replication size different from the number of OSDs.
I have created a PR to fix the issue.
https://github.com/ceph/ceph/pull/30343
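
A rough reproduction sketch based on the comment above (the pool name and numbers are arbitrary; this assumes a small test cluster, e.g. 3 OSDs):

# create a replicated pool whose size differs from the number of OSDs
ceph osd pool create repro_pool 32 32 replicated
ceph osd pool set repro_pool size 2
# then check the PG status pie chart on the dashboard landing page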

#7 Updated by Christoffer Lilja about 1 year ago

Tiago, were you able to reproduce the ratio bug in "ceph osd pool autoscale-status" as well?

#8 Updated by Lenz Grimmer about 1 year ago

  • Assignee set to Tiago Melo
  • Target version set to v15.0.0

#9 Updated by Lenz Grimmer about 1 year ago

  • Duplicates Bug #41234: More than 100% in a dashboard PG Status added

#10 Updated by Tiago Melo about 1 year ago

  • Status changed from Fix Under Review to Pending Backport

#11 Updated by Nathan Cutler about 1 year ago

  • Copied to Backport #41809: nautilus: Total amount to PG's is more than 100% added

#12 Updated by Lenz Grimmer about 1 year ago

  • Category changed from dashboard/osds to dashboard/landingpage

#13 Updated by Lenz Grimmer 11 months ago

  • Status changed from Pending Backport to Resolved
