Bug #5081
Closed
Data migration and recovery slow after changing OSD weight
Description
I'm using ceph version 0.56.4 (63b0f854d1cef490624de5d6cf9039735c7de5ca)
Now I'm trying to free up space on some OSDs that are low on space. I have OSDs on 500 GB, 1 TB, and 2 TB drives, so I decided to set the OSD weights accordingly: 0.25, 0.5, and 1. Everything went well and data moved as I expected.
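The 0.25/0.5/1 scheme corresponds to weighting each OSD by drive capacity, normalized against the largest (2 TB) drive. A minimal sketch of that calculation (the OSD names here are made up for illustration; another common convention is weight = capacity in TiB):

```python
# Derive relative CRUSH weights from drive capacities, normalized
# so the largest drive gets weight 1.0 (hypothetical OSD names).
capacities_gb = {"osd.500g": 500, "osd.1t": 1000, "osd.2t": 2000}

def relative_weights(caps):
    """Map each OSD to its capacity divided by the largest capacity."""
    largest = max(caps.values())
    return {osd: round(gb / largest, 2) for osd, gb in caps.items()}

print(relative_weights(capacities_gb))
# {'osd.500g': 0.25, 'osd.1t': 0.5, 'osd.2t': 1.0}
```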
I run this command to change the weight:
ceph osd crush set 8 osd.8 0.25 pool=default host=ceph-osd-1-1
When I do this, it initially works very fast, moving about 300-400 MB every second, but after some time it slows down dramatically (see the attached file, `ceph -w` output after some time). Once it has moved about 60% of the total data, load and traffic drop sharply, so I think this behavior is not correct.
As you can see, for some intervals it does not move any data at all.
Updated by Sage Weil almost 11 years ago
btw, simpler to do 'ceph osd crush reweight osd.8 .25'
it is normal to have a bit of a long tail. also note that the throughput stats that are reported are very approximate/rough, so if only 1 pg is doing work it may appear to stutter.
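The long-tail effect described above can be illustrated with a toy model (the per-PG rate and concurrency limit below are made-up numbers, not Ceph internals): while many PGs recover in parallel the aggregate rate is high, but once only a straggler PG remains, throughput drops to a single PG's rate, so the last few percent take disproportionately long.

```python
# Toy model of recovery throughput during backfill (illustrative
# only; 40 MB/s per PG and a parallelism limit of 10 are assumptions).
def aggregate_rate(active_pgs, per_pg_rate_mb=40, max_parallel=10):
    """Aggregate recovery throughput in MB/s while `active_pgs`
    placement groups still have data to move."""
    return min(active_pgs, max_parallel) * per_pg_rate_mb

# Many PGs active: saturates at 400 MB/s (like the early phase above).
print(aggregate_rate(50))  # 400
# Long tail, one PG left: 40 MB/s, and with rough short-window
# sampling this can even be reported as 0 between updates.
print(aggregate_rate(1))   # 40
```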
Updated by Sage Weil almost 11 years ago
- Status changed from New to Can't reproduce
If you want help debugging your system performance, you should probably be engaging inktank professional services...