Bug #5081 (closed)

Data migration and recovery slow after changing OSD weight

Added by Ivan Kudryavtsev almost 11 years ago. Updated almost 11 years ago.

Status: Can't reproduce
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: other
Severity: 3 - minor

Description

I'm using ceph version 0.56.4 (63b0f854d1cef490624de5d6cf9039735c7de5ca)

Now I'm trying to free up some space on the OSDs that only have a small amount of it. I have OSDs with 500 GB, 1 TB, and 2 TB drives, so I decided to set the OSD weights accordingly: 0.25, 0.5, and 1. Everything goes well and data moves as I expect.

I run the following command to change the weight:

ceph osd crush set 8 osd.8 0.25 pool=default host=ceph-osd-1-1
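
For reference, a sketch of the same weighting scheme applied to all three drive sizes. The OSD ids other than osd.8 and the second host name are illustrative assumptions, not values taken from this report:

# CRUSH weight roughly proportional to drive capacity in TB
ceph osd crush set 8 osd.8 0.25 pool=default host=ceph-osd-1-1    # 500 GB drive
ceph osd crush set 9 osd.9 0.5 pool=default host=ceph-osd-1-1     # 1 TB drive (hypothetical id)
ceph osd crush set 10 osd.10 1.0 pool=default host=ceph-osd-1-2   # 2 TB drive (hypothetical id and host)

# Check the resulting weights in the CRUSH hierarchy
ceph osd tree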

But when I run it, at first it works extremely fast, moving about 300-400 MB every second, and after some time it becomes very slow (see the attached file with ceph -w output captured after a while). Once it has moved about 60% of the data, load and traffic decrease dramatically, so I think this behavior is not correct.

As you can see, for some of the log entries it does not move any data at all.
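
For completeness, a minimal sketch of how the stalled recovery can be observed and, assuming the usual recovery-tuning options of this release apply, how recovery concurrency could be raised. The values below are arbitrary examples, and the injectargs form shown is the older syntax, so it may need adjusting on other versions:

# Watch overall recovery/backfill state and live progress
ceph -s
ceph -w

# Show which PGs are still backfilling or recovering
ceph pg dump | grep -E 'backfill|recover'

# Assumed tuning: allow more concurrent recovery/backfill work per OSD
# (standard ceph.conf option names; values here are arbitrary examples)
ceph osd tell \* injectargs '--osd-recovery-max-active 5'
ceph osd tell \* injectargs '--osd-max-backfills 5'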


Files

ceph.log (18.8 KB), Ivan Kudryavtsev, 05/15/2013 09:06 PM