Bug #41536
closed
Total amount to PG's is more than 100%
Added by Christoffer Lilja over 4 years ago.
Updated about 3 years ago.
Category:
Component - Landing Page
Description
I upgraded to Nautilus a week ago and visited the Dashboard.
The total percentage of PGs is well over 100% in the "PGs clean" pie chart and the related graphs.
The autoscale-status also reports a strange RATIO:
ceph osd pool autoscale-status
POOL                 SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
cephfs_metadata      17680M               4.0   34465G        0.0020                1.0      8                warn
cephfs_data_reduced  16485G               2.0   34465G        0.9567                1.0    375                warn
cephfs_data           6405G               3.0   34465G        0.5575                1.0    250                warn
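For reference, the RATIO column above appears to be SIZE x RATE / RAW CAPACITY per pool (my reading of the output, not confirmed against the autoscaler source). A minimal sketch using the numbers from the output, showing that these per-pool ratios already sum to well over 1.0, i.e. over 100%:

```python
# RATIO check for the autoscale-status output above.
# SIZE is converted to GiB; RATE is the replication factor.
pools = {
    "cephfs_metadata":     (17680 / 1024, 4.0),  # 17680M -> GiB
    "cephfs_data_reduced": (16485, 2.0),
    "cephfs_data":         (6405, 3.0),
}
raw_capacity = 34465  # 34465G raw cluster capacity

ratios = {name: size * rate / raw_capacity
          for name, (size, rate) in pools.items()}
for name, ratio in ratios.items():
    print(f"{name}: {ratio:.4f}")
print(f"total: {sum(ratios.values()):.4f}")  # sums to more than 1.0
```

So each individual RATIO matches the formula; it is only the total across pools that exceeds 100%, which is what the pie chart appears to be summing.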
I'm happy to assist with more information.
- Project changed from Ceph to mgr
Hi, do you know how I can replicate this?
In both master and nautilus I'm getting the correct information.
I don't know how to reproduce this.
My CephFS was created in Jewel, and my cluster was upgraded from there.
I first created the metadata pool and the cephfs data pool. Later on (still in Jewel) I created the "reduced" data pool and mapped a folder to that pool using the ceph.dir.layout xattr.
My guess is that CephFS counts the total usage incorrectly when two data pools are in use?
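For context, the directory-to-pool mapping described above is typically done by adding the pool to the filesystem and then setting an extended attribute on the directory. A hypothetical sketch (the fs name, mount point, and directory path are placeholders, not taken from this report):

```shell
# Make the extra pool available as a CephFS data pool.
ceph fs add_data_pool cephfs cephfs_data_reduced

# Pin new files under this directory to the second data pool
# via the ceph.dir.layout xattr.
setfattr -n ceph.dir.layout.pool -v cephfs_data_reduced /mnt/cephfs/reduced
```

This requires a live cluster, so it is shown only to illustrate the two-data-pool setup that seems to trigger the miscounting.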
- Status changed from New to Fix Under Review
- Backport set to nautilus
- Pull request ID set to 30343
Tiago, were you able to reproduce the ratio bug in "ceph osd pool autoscale-status" as well?
- Assignee set to Tiago Melo
- Target version set to v15.0.0
- Is duplicate of Bug #41234: More than 100% in a dashboard PG Status added
- Status changed from Fix Under Review to Pending Backport
- Copied to Backport #41809: nautilus: Total amount to PG's is more than 100% added
- Category changed from 138 to 166
- Status changed from Pending Backport to Resolved
- Project changed from mgr to Dashboard
- Category changed from 166 to Component - Landing Page