Support #15337
OSDs are very unbalanced (Closed)
Description
Hi,
Our OSDs are very unbalanced: some of them are filled to 50% while others go up to 93%. I have attached the details (ceph osd tree / ceph osd df / crushmap).
For the cluster history: it started with 3 hosts and 9 OSDs on Firefly, and now it has 50 OSDs across 13 hosts on Hammer (0.94.6) on Ubuntu Trusty. (We used Giant for a while in between.)
The weight of each OSD was ~1.0 per 1 TB, but we started to have trouble with data distribution: we got our first full OSD and then started reducing the weights to free space on the full OSDs.
After some reading, we discovered that we should enable the tunable "straw_calc_version = 1" to fix the situation. We did, and got a big data rebalancing.
But the data distribution was no better.
So we tried to use straw2 instead of straw. The data moved a lot again, but we are still experiencing bad data distribution.
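For reference, a minimal sketch of how the tunable can be changed by round-tripping the CRUSH map through crushtool (file names here are examples, not the ones we actually used):

```shell
# Export the current CRUSH map and decompile it to text
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# Edit crushmap.txt so the tunables section contains:
#   tunable straw_calc_version 1

# Recompile and inject the updated map (this triggers rebalancing)
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
```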
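The straw-to-straw2 switch was done the same way; a sketch of the map edit (the sed expression assumes every bucket line reads exactly "alg straw"):

```shell
# Decompile the current map, rewrite every straw bucket to straw2,
# then recompile and inject it. Switching bucket algorithms moves data.
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
sed -i 's/alg straw$/alg straw2/' crushmap.txt
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
```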
Cheers,
Updated by Mehdi Abaakouk about 8 years ago
- File ceph-pg-dump.json.txt.gz added
Updated by Anonymous about 8 years ago
As per a discussion with Sage, can you give ceph osd reweight-by-utilization a try?
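For context, a rough illustration (in Python, with hypothetical field names and a simplified formula, not Ceph's actual implementation) of the kind of adjustment reweight-by-utilization performs: it lowers the override reweight of OSDs whose utilization exceeds the cluster average by more than a threshold.

```python
def reweight_by_utilization(osds, threshold_pct=120, max_change=0.05):
    """Return new override reweights for over-full OSDs.

    osds: list of dicts with keys "id", "util" (0.0-1.0), "reweight".
    threshold_pct: only OSDs above threshold_pct% of average utilization
    are touched. max_change caps the per-run weight reduction.
    """
    avg = sum(o["util"] for o in osds) / len(osds)
    new = {}
    for o in osds:
        if o["util"] > avg * threshold_pct / 100.0:
            # Shrink the reweight proportionally to the over-utilization,
            # but never by more than max_change in a single run.
            target = o["reweight"] * avg / o["util"]
            new[o["id"]] = max(o["reweight"] - max_change, target)
        else:
            new[o["id"]] = o["reweight"]
    return new
```

With the utilizations from this ticket (50% vs 93%), only the 93% OSD would be reweighted downward.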
Updated by Anonymous about 8 years ago
Erwan Velu wrote:
As per a discussion with Sage, can you give ceph osd reweight-by-utilization a try?
I didn't see Sage's comment before posting that. Please ignore it for now.
Updated by Mehdi Abaakouk about 8 years ago
- File ceph-osd-dump.txt added
Updated by Anonymous about 8 years ago
Sage, what's your opinion on these very unbalanced OSDs?
Updated by Greg Farnum about 7 years ago
- Tracker changed from Bug to Support
- Status changed from New to Closed
Lots of ongoing improvements to CRUSH around this.