Bug #8675 (closed): Unnecessary remapping/backfilling?

Added by Dmitry Smirnov almost 10 years ago. Updated almost 7 years ago.

Status: Won't Fix
Priority: Low
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Community (user)
Tags: -
Backport: -
Regression: No
Severity: 4 - irritation
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Component(RADOS): -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

A little experiment: on my cluster I marked two OSDs from different hosts as "out".
After the resulting remapping finished, I stopped both OSDs and swapped them between hosts (moving the corresponding sections of ceph.conf and physically relocating the hard disks).
When the OSDs started on their new hosts, ~10% of the cluster's PGs went remapped+backfilling, yet no data flowed to or from those (empty) OSDs because they were still "out".
From a common-sense perspective, rebalancing data among the other 11 OSDs was entirely unnecessary.
I hope this can be improved. Is it possible to make CRUSH a little more intelligent?
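
Roughly, the steps looked like this (OSD ids 3 and 7 are placeholders, and the service-management commands vary by Ceph release and init system):

    # Mark two OSDs from different hosts "out" so their PGs drain to the rest.
    ceph osd out 3
    ceph osd out 7

    # Wait until remapping/backfilling finishes and the cluster reports clean.
    ceph -s

    # Stop both daemons before swapping the disks between hosts.
    # (On sysvinit-era releases: service ceph stop osd.3, and so on.)
    systemctl stop ceph-osd@3
    systemctl stop ceph-osd@7

    # Physically swap the disks between the two hosts, move the matching
    # [osd.3] / [osd.7] sections of ceph.conf, then start the daemons again.
    systemctl start ceph-osd@3
    systemctl start ceph-osd@7

    # Both OSDs are still "out", yet ~10% of PGs go remapped+backfilling.
    # Presumably this is because each starting daemon updates its own CRUSH
    # location (the default "osd crush update on start" behaviour), so the
    # OSDs move between host buckets and the CRUSH map changes for every
    # other OSD's placement calculation.
    ceph osd tree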

#1

Updated by Sage Weil about 7 years ago

  • Status changed from New to Won't Fix
#2

Updated by Greg Farnum about 7 years ago

CRUSH improvements are an ongoing discussion, and work is happening on them right now.

#3

Updated by Greg Farnum almost 7 years ago

  • Project changed from Ceph to RADOS
  • Category deleted (10)