Bug #41536 (closed): Total amount to PG's is more than 100%

Added by Christoffer Lilja over 4 years ago. Updated about 3 years ago.

Status: Resolved
Priority: Normal
Assignee:
Category: Component - Landing Page
Target version:
% Done: 0%
Source:
Tags:
Backport: nautilus
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite: fs
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

I upgraded to Nautilus a week ago and visited the Dashboard.
The total amount of PGs in the "PGs clean" (and related) pie graphs is way over 100%.

The autoscale-status also reports a strange RATIO:
ceph osd pool autoscale-status
POOL                 SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
cephfs_metadata      17680M               4.0   34465G        0.0020                1.0   8                   warn
cephfs_data_reduced  16485G               2.0   34465G        0.9567                1.0   375                 warn
cephfs_data          6405G                3.0   34465G        0.5575                1.0   250                 warn

I'm happy to assist with more information.
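
For what it's worth, the RATIO column in the output above is consistent with RATIO = SIZE * RATE / RAW CAPACITY (an inference from the pasted numbers, not from the autoscaler source), and the three ratios sum to roughly 1.52, i.e. the reported SIZEs imply more raw usage than the cluster's raw capacity. A minimal sketch of that arithmetic:

    # Reproduce the RATIO arithmetic from the autoscale-status output above.
    # Assumption: RATIO = SIZE * RATE / RAW CAPACITY (inferred from the numbers).
    raw_capacity_g = 34465.0
    pools = {
        # name: (SIZE in GiB, RATE)
        "cephfs_metadata":     (17680.0 / 1024, 4.0),
        "cephfs_data_reduced": (16485.0,        2.0),
        "cephfs_data":         (6405.0,         3.0),
    }

    total = 0.0
    for name, (size_g, rate) in pools.items():
        ratio = size_g * rate / raw_capacity_g
        total += ratio
        print(f"{name:20} ratio={ratio:.4f}")

    # Matches the report to within display rounding (0.0020, 0.9567, 0.5575),
    # but the sum is ~1.52: more raw usage implied than raw capacity available.
    print(f"sum of ratios = {total:.2f}")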


Related issues (2: 0 open, 2 closed)

Is duplicate of: Dashboard - Bug #41234: More than 100% in a dashboard PG Status (Duplicate)
Copied to: mgr - Backport #41809: nautilus: Total amount to PG's is more than 100% (Resolved, Tiago Melo)
Actions #1

Updated by Greg Farnum over 4 years ago

  • Project changed from Ceph to mgr
Actions #2

Updated by Ricardo Dias over 4 years ago

  • Category set to 138
Actions #3

Updated by Tiago Melo over 4 years ago

Hi, do you know how I can replicate this?
In both master and nautilus I'm getting the correct information.

Actions #4

Updated by Christoffer Lilja over 4 years ago

I don't know how to reproduce this.

My CephFS was created in Jewel and my cluster has been upgraded from there.
I first created the metadata pool and the cephfs data pool. Later on (still in Jewel) I created the "reduced" data pool and mapped a folder to that pool using the ceph.dir.layout attribute.

I guess that CephFS counts the total usage incorrectly when two data pools are used?
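
(For reference, the directory-to-pool mapping mentioned above is done through the CephFS layout virtual xattrs. A minimal sketch, assuming a CephFS mount at /mnt/cephfs and using the pool name from this report; the path is hypothetical:)

    import os

    # Point a directory at the "reduced" data pool by setting the CephFS
    # layout virtual xattr; new files under it are then stored in that pool.
    # "/mnt/cephfs/reduced" is a hypothetical mount point and directory.
    os.setxattr("/mnt/cephfs/reduced",
                "ceph.dir.layout.pool",
                b"cephfs_data_reduced")

    # Read the resulting layout back for verification.
    print(os.getxattr("/mnt/cephfs/reduced", "ceph.dir.layout").decode())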

Actions #5

Updated by Tiago Melo over 4 years ago

  • Status changed from New to Fix Under Review
  • Backport set to nautilus
  • Pull request ID set to 30343
Actions #6

Updated by Tiago Melo over 4 years ago

I was able to reproduce it by creating a replicated pool whose replication size differs from the number of OSDs.
I have created a PR to fix the issue.
https://github.com/ceph/ceph/pull/30343
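
A minimal reproduction sketch along those lines, assuming a disposable test cluster with the ceph CLI available (the pool name and pg count are arbitrary placeholders):

    import subprocess

    def ceph(*args):
        # Run a ceph CLI command and return its stdout; assumes a test cluster.
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    # Create a replicated pool whose replication size differs from the
    # number of OSDs (e.g. size 2 on a 3-OSD test cluster), as described above.
    ceph("osd", "pool", "create", "repro_pool", "32")
    ceph("osd", "pool", "set", "repro_pool", "size", "2")

    # Verify the setting, then check the PG status widget on the
    # dashboard landing page for percentages above 100%.
    print(ceph("osd", "pool", "get", "repro_pool", "size"))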

Actions #7

Updated by Christoffer Lilja over 4 years ago

Tiago, were you able to reproduce the ratio bug in "ceph osd pool autoscale-status" as well?

Actions #8

Updated by Lenz Grimmer over 4 years ago

  • Assignee set to Tiago Melo
  • Target version set to v15.0.0
Actions #9

Updated by Lenz Grimmer over 4 years ago

  • Is duplicate of Bug #41234: More than 100% in a dashboard PG Status added
Actions #10

Updated by Tiago Melo over 4 years ago

  • Status changed from Fix Under Review to Pending Backport
Actions #11

Updated by Nathan Cutler over 4 years ago

  • Copied to Backport #41809: nautilus: Total amount to PG's is more than 100% added
Actions #12

Updated by Lenz Grimmer over 4 years ago

  • Category changed from 138 to 166
Actions #13

Updated by Lenz Grimmer over 4 years ago

  • Status changed from Pending Backport to Resolved
Actions #14

Updated by Ernesto Puerta about 3 years ago

  • Project changed from mgr to Dashboard
  • Category changed from 166 to Component - Landing Page