Bug #24264: ssd-primary crush rule not working as intended

Added by Horace Ng almost 6 years ago. Updated almost 6 years ago.

Status: Closed
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: -
Tags: -
Backport: -
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Component(RADOS): CRUSH
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

I've set up the rule according to the docs, but some PGs still get replicas assigned to the same host, even though my failure domain is set to host.

My rule comes from http://docs.ceph.com/docs/master/rados/operations/crush-map-edits/ with an updated storage class.

rule ssd-primary {
    ruleset 5
    type replicated
    min_size 5
    max_size 10
    step take ssd
    step chooseleaf firstn 1 type host
    step emit
    step take platter
    step chooseleaf firstn -1 type host
    step emit
}

Crush tree:

[root@ceph0 ~]# ceph osd crush tree
ID CLASS WEIGHT   TYPE NAME
-1       58.63989 root default
-2       19.55095     host ceph0
 0   hdd  2.73000         osd.0
 1   hdd  2.73000         osd.1
 2   hdd  2.73000         osd.2
 3   hdd  2.73000         osd.3
12   hdd  4.54999         osd.12
15   hdd  3.71999         osd.15
18   ssd  0.20000         osd.18
19   ssd  0.16100         osd.19
-3       19.55095     host ceph1
 4   hdd  2.73000         osd.4
 5   hdd  2.73000         osd.5
 6   hdd  2.73000         osd.6
 7   hdd  2.73000         osd.7
13   hdd  4.54999         osd.13
16   hdd  3.71999         osd.16
20   ssd  0.16100         osd.20
21   ssd  0.20000         osd.21
-4       19.53799     host ceph2
 8   hdd  2.73000         osd.8
 9   hdd  2.73000         osd.9
10   hdd  2.73000         osd.10
11   hdd  2.73000         osd.11
14   hdd  3.71999         osd.14
17   hdd  4.54999         osd.17
22   ssd  0.18700         osd.22
23   ssd  0.16100         osd.23

# ceph pg ls-by-pool ssd-hybrid

27.8 1051 0 0 0 0 4399733760 1581 1581 active+clean 2018-05-23 06:20:56.088216 27957'185553 27959:368828 [23,1,11] 23 [23,1,11] 23 27953'182582 2018-05-23 06:20:56.088172 27843'162478 2018-05-20 18:28:20.118632

PG 27.8 has been assigned to osd.23 and osd.11, which are located on the same host (ceph2).
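For anyone trying to reproduce this without touching live placements, crushtool can replay a rule against the compiled map offline and print the OSDs chosen for each sample input, so co-located replicas become visible directly. A minimal sketch; the rule id 2 is an assumption taken from the corrected rule posted in the next update:

# Dump the live CRUSH map and test the rule offline.
# --show-mappings prints one line per sample input, e.g.:
#   CRUSH rule 2 x 0 [23,1,11]
ceph osd getcrushmap -o crushmap.bin
crushtool -i crushmap.bin --test --rule 2 --num-rep 3 --show-mappings | head -20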

#1 - Updated by Horace Ng almost 6 years ago

Sorry, here's my actual updated rule, rather than the one in the document.

rule ssd-primary {
    id 2
    type replicated
    min_size 1
    max_size 10
    step take default class ssd
    step chooseleaf firstn 1 type host
    step emit
    step take default class hdd
    step chooseleaf firstn -1 type host
    step emit
}
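A note on why the two passes can collide: device classes are implemented as per-class shadow hierarchies (buckets named like default~ssd and ceph2~hdd), and each "step take ... class ..." begins an independent descent of the corresponding shadow tree. The second chooseleaf has no memory of which host the first pass picked, so osd.23 (ceph2, ssd) and osd.11 (ceph2, hdd) can legally end up in the same acting set. The shadow trees the two passes walk can be inspected with:

# Show the per-class shadow buckets alongside the regular hierarchy.
# The first pass of ssd-primary descends default~ssd, the second
# descends default~hdd; the two descents are independent.
ceph osd crush tree --show-shadow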

#2 - Updated by Greg Farnum almost 6 years ago

  • Project changed from Ceph to RADOS
  • Category deleted (OSDMap)
  • Component(RADOS) CRUSH added
#3 - Updated by Josh Durgin almost 6 years ago

  • Status changed from New to Closed

I don't think there's a good way to express that requirement in the current CRUSH language. The rule in the docs does not work when a single host contains both ssds and hdds. This is working as intended, so closing this.
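Until CRUSH can express the constraint, one operational workaround is to audit the pool for co-located replicas by walking each PG's acting set and mapping OSDs back to hosts. A hedged sketch using jq; the JSON field names (pgid, acting, crush_location.host) match Luminous-era output but can differ across releases, and the pool name is taken from the report above:

#!/bin/sh
# Flag PGs whose acting set has more than one replica on a single host.
# Assumes jq is installed; the top-level shape of "pg ls-by-pool" JSON
# varies by release, so adjust the .[].pgid path if needed.
POOL=ssd-hybrid
for pg in $(ceph pg ls-by-pool "$POOL" -f json | jq -r '.[].pgid'); do
    dups=$(ceph pg map "$pg" -f json | jq -r '.acting[]' | while read -r osd; do
               ceph osd find "$osd" -f json | jq -r '.crush_location.host'
           done | sort | uniq -d)
    [ -n "$dups" ] && echo "$pg has multiple replicas on host(s): $dups"
done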
