Bug #3785


ceph: default crush rule does not suit multi-OSD deployments

Added by Ian Colle over 11 years ago. Updated over 11 years ago.

Status: Resolved
Priority: High
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Community (user)

Description

Version: 0.48.2-0ubuntu2~cloud0

Our Ceph deployments typically involve multiple OSDs per host with no disk redundancy. However, the default crush rules appear to distribute replicas by OSD, not by host, which I believe will not prevent replicas from landing on the same host.

I've been working around this by updating the crush rules as follows and installing the resulting crushmap in the cluster (the commands are sketched after the diff), but since we aim for fully automated deployment (using Juju and MaaS), this is suboptimal.

--- crushmap.txt 2013-01-10 20:33:21.265809301 +0000
+++ crushmap.new 2013-01-10 20:32:49.496745778 +0000
@@ -104,7 +104,7 @@
     min_size 1
     max_size 10
     step take default
-    step choose firstn 0 type osd
+    step chooseleaf firstn 0 type host
     step emit
 }
 rule metadata {
@@ -113,7 +113,7 @@
     min_size 1
     max_size 10
     step take default
-    step choose firstn 0 type osd
+    step chooseleaf firstn 0 type host
     step emit
 }
 rule rbd {
@@ -122,7 +122,7 @@
     min_size 1
     max_size 10
     step take default
-    step choose firstn 0 type osd
+    step chooseleaf firstn 0 type host
     step emit
 }
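
For reference, the manual steps are roughly as follows (a sketch using the stock ceph/crushtool commands; the file names just match the diff above):

ceph osd getcrushmap -o crushmap.bin            # extract the compiled crushmap from the cluster
crushtool -d crushmap.bin -o crushmap.txt       # decompile it to editable text
# edit crushmap.txt into crushmap.new as per the diff above
crushtool -c crushmap.new -o crushmap.new.bin   # recompile the edited map
ceph osd setcrushmap -i crushmap.new.bin        # install it back into the cluster

This is the procedure our Juju/MaaS automation would otherwise have to script, which is why a host-aware default would be preferable.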

https://bugs.launchpad.net/cloud-archive/+bug/1098320

