Total amount of PGs is more than 100%
I upgraded to Nautilus a week ago and visited the Dashboard.
In the "PGs clean" (and similar) pie chart, the total percentage of PGs is way over 100%.
The autoscale-status also reports strange RATIO values:
ceph osd pool autoscale-status
POOL                 SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
cephfs_metadata      17680M               4.0   34465G        0.0020                1.0        8              warn
cephfs_data_reduced  16485G               2.0   34465G        0.9567                1.0      375              warn
cephfs_data           6405G               3.0   34465G        0.5575                1.0      250              warn
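If I read the columns right (this is my assumption from the numbers, not documented behaviour), RATIO looks like SIZE * RATE / RAW CAPACITY, and the three ratios already sum to well over 1:

  17680M * 4.0 / 34465G = 0.0020
  16485G * 2.0 / 34465G = 0.9567
   6405G * 3.0 / 34465G = 0.5575
                  sum   = 1.5162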
I'm happy to provide more information.
#4 Updated by Christoffer Lilja about 3 years ago
I don't know how to reproduce this.
My CephFS was created in Jewel and my cluster has been upgraded from there.
I first created the metadata pool and the CephFS data pool. Later on (still in Jewel) I created the "reduced" data pool and mapped a directory to that pool using the ceph.dir.layout attribute, roughly as shown below.
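For reference, the layout mapping was done with something along these lines (the pool name matches the one above; the mount path here is just an example, not my exact path):

  # point a directory at the reduced data pool
  setfattr -n ceph.dir.layout.pool -v cephfs_data_reduced /mnt/cephfs/reduced
  # verify the layout that was set
  getfattr -n ceph.dir.layout /mnt/cephfs/reduced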
My guess is that CephFS counts the total usage incorrectly when two data pools are in use?
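If it helps to cross-check what the Dashboard shows, I can also post the output of these (standard commands, nothing special about my setup):

  ceph df detail
  ceph fs status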