
Feature #1067

mkcephfs: magically group osds on same host into subtrees in the generated crush map

Added by Sage Weil over 9 years ago. Updated almost 9 years ago.

Status:
Resolved
Priority:
Normal
Assignee:
Category:
OSD
Target version:
% Done:

0%

Source:
Tags:
Backport:
Reviewed:
Affected Versions:
Pull request ID:

Description

In theory we can look at the host field in the ceph.conf to generate a sane crushmap by default. As things stand, anyone running multiple cosds on the same node has to adjust the crush map manually, which is tedious.

This could be extended (later) to do multiple levels of the hierarchy if other magic fields were defined (in ceph.conf or elsewhere), e.g.:

[osd.1234]
host = abcdef
rack = ghi
row = jkl

Really not sure that the conf is the right place to do this, though it's certainly the simplest option, at least right now.
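The grouping step the description proposes can be sketched roughly as follows. This is a minimal illustration, not the actual mkcephfs code: it assumes ceph.conf parses as INI (which it largely does) and uses a hypothetical group_osds_by_host helper to bucket OSD ids by their host field, which is the shape of data a generated crush map would need.

```python
# Sketch: parse [osd.N] sections from a ceph.conf-style file and
# group OSD ids by their "host" field. A crush-map generator could
# then emit one host bucket per group. Illustrative only.
import configparser
from collections import defaultdict

CONF = """
[osd.0]
host = alpha
[osd.1]
host = alpha
[osd.2]
host = beta
"""

def group_osds_by_host(conf_text):
    cp = configparser.ConfigParser()
    cp.read_string(conf_text)
    hosts = defaultdict(list)
    for section in cp.sections():
        if section.startswith("osd."):
            osd_id = int(section.split(".", 1)[1])
            host = cp.get(section, "host", fallback=None)
            if host:
                hosts[host].append(osd_id)
    return dict(hosts)

print(group_osds_by_host(CONF))
# {'alpha': [0, 1], 'beta': [2]}
```

Extending this to racks and rows would just mean grouping the host buckets again by the corresponding fields, building the hierarchy bottom-up.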

History

#1 Updated by Sage Weil over 9 years ago

  • Target version changed from v0.29 to v0.30

#2 Updated by Sage Weil about 9 years ago


#3 Updated by Sage Weil about 9 years ago

  • Target version changed from v0.30 to v0.31

#4 Updated by Sage Weil about 9 years ago

  • Target version changed from v0.31 to 12

#5 Updated by Sage Weil almost 9 years ago

  • Target version deleted (12)

#6 Updated by Sage Weil almost 9 years ago

  • Category set to OSD
  • Status changed from New to Resolved
  • Assignee set to Sage Weil
  • Target version set to v0.38
