Bug #22486

ceph shows wrong MAX AVAIL with hybrid (chooseleaf firstn 1, chooseleaf firstn -1) CRUSH rule

Added by Patrick Fruh over 6 years ago. Updated about 6 years ago.

Status:
New
Priority:
Normal
Assignee:
-
Category:
Administration/Usability
Target version:
-
% Done:

0%

Source:
Community (user)
Tags:
Backport:
Regression:
No
Severity:
2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
CRUSH
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

I have the following configuration of OSDs:

ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS
0 hdd 5.45599 1.00000 5587G 2259G 3327G 40.45 1.10 234
1 hdd 5.45599 1.00000 5587G 2295G 3291G 41.08 1.11 231
2 hdd 5.45599 1.00000 5587G 2321G 3265G 41.56 1.13 232
3 hdd 5.45599 1.00000 5587G 2313G 3273G 41.42 1.12 234
4 hdd 5.45599 1.00000 5587G 2105G 3481G 37.68 1.02 212
5 hdd 5.45599 1.00000 5587G 2231G 3355G 39.94 1.08 218
41 ssd 0.87299 1.00000 894G 9637M 884G 1.05 0.03 31
42 ssd 0.87299 1.00000 894G 13361M 881G 1.46 0.04 41
6 hdd 5.45599 1.00000 5587G 2404G 3182G 43.03 1.17 239
7 hdd 5.45599 1.00000 5587G 2226G 3360G 39.85 1.08 222
8 hdd 5.45599 1.00000 5587G 2668G 2918G 47.76 1.30 256
9 hdd 5.45599 1.00000 5587G 2366G 3220G 42.36 1.15 236
10 hdd 5.45599 1.00000 5587G 2454G 3132G 43.92 1.19 253
11 hdd 5.45599 1.00000 5587G 2405G 3181G 43.06 1.17 245
43 ssd 0.87299 1.00000 894G 15498M 879G 1.69 0.05 47
44 ssd 0.87299 1.00000 894G 10104M 884G 1.10 0.03 27
12 hdd 5.45599 1.00000 5587G 2242G 3344G 40.14 1.09 229
13 hdd 5.45599 1.00000 5587G 2551G 3035G 45.67 1.24 247
14 hdd 5.45599 1.00000 5587G 2513G 3074G 44.98 1.22 245
15 hdd 5.45599 1.00000 5587G 2014G 3572G 36.06 0.98 209
16 hdd 5.45599 1.00000 5587G 2586G 3000G 46.29 1.26 249
17 hdd 5.45599 1.00000 5587G 2459G 3127G 44.02 1.19 243
45 ssd 0.87299 1.00000 894G 9697M 884G 1.06 0.03 35
46 ssd 0.87299 1.00000 894G 12975M 881G 1.42 0.04 37
18 hdd 3.63699 1.00000 3724G 1595G 2128G 42.84 1.16 156
19 hdd 3.63699 1.00000 3724G 1387G 2336G 37.25 1.01 147
20 hdd 3.63699 1.00000 3724G 1551G 2172G 41.67 1.13 157
21 hdd 3.63699 1.00000 3724G 1535G 2189G 41.22 1.12 155
22 hdd 3.63699 1.00000 3724G 1459G 2264G 39.20 1.06 155
23 hdd 3.63699 1.00000 3724G 1395G 2329G 37.46 1.02 147
24 hdd 3.63699 1.00000 3724G 1489G 2234G 40.00 1.08 160
25 hdd 3.63699 1.00000 3724G 1634G 2090G 43.88 1.19 159
26 hdd 3.63699 1.00000 3724G 1566G 2157G 42.06 1.14 154
49 ssd 0.87299 1.00000 894G 9385M 884G 1.03 0.03 32
50 ssd 0.87299 1.00000 894G 12757M 881G 1.39 0.04 36
27 hdd 5.45599 1.00000 5587G 2462G 3124G 44.08 1.20 244
28 hdd 5.45599 1.00000 5587G 2314G 3272G 41.43 1.12 237
29 hdd 5.45599 1.00000 5587G 2166G 3420G 38.79 1.05 221
30 hdd 5.45599 1.00000 5587G 2484G 3102G 44.47 1.21 242
31 hdd 5.45599 1.00000 5587G 2292G 3294G 41.03 1.11 225
32 hdd 5.45599 1.00000 5587G 1982G 3604G 35.49 0.96 208
47 ssd 0.87299 1.00000 894G 12015M 882G 1.31 0.04 39
48 ssd 0.87299 1.00000 894G 14820M 879G 1.62 0.04 46
33 hdd 5.45599 1.00000 5587G 2002G 3584G 35.85 0.97 205
34 hdd 5.45599 1.00000 5587G 2069G 3517G 37.04 1.00 209
35 hdd 5.45599 1.00000 5587G 2187G 3399G 39.16 1.06 226
36 hdd 5.45599 1.00000 5587G 1821G 3765G 32.60 0.88 185
37 hdd 5.45599 1.00000 5587G 2123G 3463G 38.01 1.03 205
38 hdd 5.45599 1.00000 5587G 2197G 3390G 39.32 1.07 228
39 hdd 5.45599 1.00000 5587G 2180G 3406G 39.02 1.06 217
40 hdd 5.45599 1.00000 5587G 2232G 3354G 39.97 1.08 228
51 ssd 0.87320 1.00000 894G 14747M 879G 1.61 0.04 38
52 ssd 0.87320 1.00000 894G 7716M 886G 0.84 0.02 18
53 ssd 0.87320 1.00000 894G 12660M 881G 1.38 0.04 33
54 ssd 0.87320 1.00000 894G 11155M 883G 1.22 0.03 31
55 ssd 0.87320 1.00000 894G 9350M 885G 1.02 0.03 24
56 ssd 0.87320 1.00000 894G 13816M 880G 1.51 0.04 38
57 ssd 0.87320 1.00000 894G 10122M 884G 1.11 0.03 28
58 ssd 0.87320 1.00000 894G 10096M 884G 1.10 0.03 32
59 ssd 0.87320 1.00000 894G 13750M 880G 1.50 0.04 36
60 ssd 0.87320 1.00000 894G 16168M 878G 1.77 0.05 35
61 ssd 0.87320 1.00000 894G 11401M 883G 1.25 0.03 31
62 ssd 0.87320 1.00000 894G 12105M 882G 1.32 0.04 32
63 ssd 0.87320 1.00000 894G 13998M 880G 1.53 0.04 40
64 ssd 0.87320 1.00000 894G 17468M 877G 1.91 0.05 42
65 ssd 0.87320 1.00000 894G 9540M 884G 1.04 0.03 25
66 ssd 0.87320 1.00000 894G 15109M 879G 1.65 0.04 42
TOTAL 230T 86866G 145T 36.88
MIN/MAX VAR: 0.02/1.30 STDDEV: 22.48

With the following crush ruleset:
# begin crush map
    tunable choose_local_tries 0
    tunable choose_local_fallback_tries 0
    tunable choose_total_tries 50
    tunable chooseleaf_descend_once 1
    tunable chooseleaf_vary_r 1
    tunable chooseleaf_stable 1
    tunable straw_calc_version 1
    tunable allowed_bucket_algs 54
# devices
    device 0 osd.0 class hdd
    device 1 osd.1 class hdd
    device 2 osd.2 class hdd
    device 3 osd.3 class hdd
    device 4 osd.4 class hdd
    device 5 osd.5 class hdd
    device 6 osd.6 class hdd
    device 7 osd.7 class hdd
    device 8 osd.8 class hdd
    device 9 osd.9 class hdd
    device 10 osd.10 class hdd
    device 11 osd.11 class hdd
    device 12 osd.12 class hdd
    device 13 osd.13 class hdd
    device 14 osd.14 class hdd
    device 15 osd.15 class hdd
    device 16 osd.16 class hdd
    device 17 osd.17 class hdd
    device 18 osd.18 class hdd
    device 19 osd.19 class hdd
    device 20 osd.20 class hdd
    device 21 osd.21 class hdd
    device 22 osd.22 class hdd
    device 23 osd.23 class hdd
    device 24 osd.24 class hdd
    device 25 osd.25 class hdd
    device 26 osd.26 class hdd
    device 27 osd.27 class hdd
    device 28 osd.28 class hdd
    device 29 osd.29 class hdd
    device 30 osd.30 class hdd
    device 31 osd.31 class hdd
    device 32 osd.32 class hdd
    device 33 osd.33 class hdd
    device 34 osd.34 class hdd
    device 35 osd.35 class hdd
    device 36 osd.36 class hdd
    device 37 osd.37 class hdd
    device 38 osd.38 class hdd
    device 39 osd.39 class hdd
    device 40 osd.40 class hdd
    device 41 osd.41 class ssd
    device 42 osd.42 class ssd
    device 43 osd.43 class ssd
    device 44 osd.44 class ssd
    device 45 osd.45 class ssd
    device 46 osd.46 class ssd
    device 47 osd.47 class ssd
    device 48 osd.48 class ssd
    device 49 osd.49 class ssd
    device 50 osd.50 class ssd
    device 51 osd.51 class ssd
    device 52 osd.52 class ssd
    device 53 osd.53 class ssd
    device 54 osd.54 class ssd
    device 55 osd.55 class ssd
    device 56 osd.56 class ssd
    device 57 osd.57 class ssd
    device 58 osd.58 class ssd
    device 59 osd.59 class ssd
    device 60 osd.60 class ssd
    device 61 osd.61 class ssd
    device 62 osd.62 class ssd
    device 63 osd.63 class ssd
    device 64 osd.64 class ssd
    device 65 osd.65 class ssd
    device 66 osd.66 class ssd
# types
    type 0 osd
    type 1 host
    type 2 chassis
    type 3 rack
    type 4 row
    type 5 pdu
    type 6 pod
    type 7 room
    type 8 datacenter
    type 9 region
    type 10 root
# buckets
    host node1 {
    id -2 # do not change unnecessarily
    id -8 class hdd # do not change unnecessarily
    id -15 class ssd # do not change unnecessarily
    # weight 34.482
    alg straw2
    hash 0 # rjenkins1
    item osd.0 weight 5.456
    item osd.1 weight 5.456
    item osd.2 weight 5.456
    item osd.3 weight 5.456
    item osd.4 weight 5.456
    item osd.5 weight 5.456
    item osd.41 weight 0.873
    item osd.42 weight 0.873
    }
    host node2 {
    id -3 # do not change unnecessarily
    id -9 class hdd # do not change unnecessarily
    id -16 class ssd # do not change unnecessarily
    # weight 34.482
    alg straw2
    hash 0 # rjenkins1
    item osd.6 weight 5.456
    item osd.7 weight 5.456
    item osd.8 weight 5.456
    item osd.9 weight 5.456
    item osd.10 weight 5.456
    item osd.11 weight 5.456
    item osd.43 weight 0.873
    item osd.44 weight 0.873
    }
    host node3 {
    id -4 # do not change unnecessarily
    id -10 class hdd # do not change unnecessarily
    id -17 class ssd # do not change unnecessarily
    # weight 34.482
    alg straw2
    hash 0 # rjenkins1
    item osd.12 weight 5.456
    item osd.13 weight 5.456
    item osd.14 weight 5.456
    item osd.15 weight 5.456
    item osd.16 weight 5.456
    item osd.17 weight 5.456
    item osd.45 weight 0.873
    item osd.46 weight 0.873
    }
    host node4 {
    id -5 # do not change unnecessarily
    id -11 class hdd # do not change unnecessarily
    id -18 class ssd # do not change unnecessarily
    # weight 34.479
    alg straw2
    hash 0 # rjenkins1
    item osd.18 weight 3.637
    item osd.21 weight 3.637
    item osd.24 weight 3.637
    item osd.25 weight 3.637
    item osd.22 weight 3.637
    item osd.26 weight 3.637
    item osd.19 weight 3.637
    item osd.23 weight 3.637
    item osd.20 weight 3.637
    item osd.49 weight 0.873
    item osd.50 weight 0.873
    }
    host node5 {
    id -6 # do not change unnecessarily
    id -12 class hdd # do not change unnecessarily
    id -19 class ssd # do not change unnecessarily
    # weight 34.482
    alg straw2
    hash 0 # rjenkins1
    item osd.27 weight 5.456
    item osd.28 weight 5.456
    item osd.29 weight 5.456
    item osd.30 weight 5.456
    item osd.31 weight 5.456
    item osd.32 weight 5.456
    item osd.47 weight 0.873
    item osd.48 weight 0.873
    }
    host node6 {
    id -7 # do not change unnecessarily
    id -13 class hdd # do not change unnecessarily
    id -20 class ssd # do not change unnecessarily
    # weight 43.648
    alg straw2
    hash 0 # rjenkins1
    item osd.33 weight 5.456
    item osd.34 weight 5.456
    item osd.35 weight 5.456
    item osd.36 weight 5.456
    item osd.37 weight 5.456
    item osd.38 weight 5.456
    item osd.39 weight 5.456
    item osd.40 weight 5.456
    }
    host node7 {
    id -22 # do not change unnecessarily
    id -23 class hdd # do not change unnecessarily
    id -24 class ssd # do not change unnecessarily
    # weight 6.986
    alg straw2
    hash 0 # rjenkins1
    item osd.51 weight 0.873
    item osd.52 weight 0.873
    item osd.53 weight 0.873
    item osd.54 weight 0.873
    item osd.55 weight 0.873
    item osd.56 weight 0.873
    item osd.57 weight 0.873
    item osd.58 weight 0.873
    }
    host node8 {
    id -25 # do not change unnecessarily
    id -26 class hdd # do not change unnecessarily
    id -27 class ssd # do not change unnecessarily
    # weight 6.986
    alg straw2
    hash 0 # rjenkins1
    item osd.59 weight 0.873
    item osd.60 weight 0.873
    item osd.61 weight 0.873
    item osd.62 weight 0.873
    item osd.63 weight 0.873
    item osd.64 weight 0.873
    item osd.65 weight 0.873
    item osd.66 weight 0.873
    }
    root default {
    id -1 # do not change unnecessarily
    id -14 class hdd # do not change unnecessarily
    id -21 class ssd # do not change unnecessarily
    # weight 230.027
    alg straw2
    hash 0 # rjenkins1
    item node1 weight 34.482
    item node2 weight 34.482
    item node3 weight 34.482
    item node4 weight 34.479
    item node5 weight 34.482
    item node6 weight 43.649
    item node7 weight 6.986
    item node8 weight 6.986
    }
# rules
    rule replicated_ruleset {
    id 0
    type replicated
    min_size 1
    max_size 10
    step take default class hdd
    step chooseleaf firstn 0 type host
    step emit
    }
    rule hybrid {
    id 1
    type replicated
    min_size 1
    max_size 10
    step take default class ssd
    step chooseleaf firstn 1 type host
    step emit
    step take default class hdd
    step chooseleaf firstn -1 type host
    step emit
    }
    rule flash {
    id 2
    type replicated
    min_size 1
    max_size 10
    step take default class ssd
    step chooseleaf firstn 0 type host
    step emit
    }
# end crush map

And this is the output of ceph df, annotated with the CRUSH rule each pool uses; all pools use a replication factor of 3:
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
230T 145T 86866G 36.88
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
pool1 7 10454G 23.82 33430G 2871159 -> replicated_ruleset
pool2 8 13333G 28.51 33430G 3660994 -> replicated_ruleset
rbd 18 3729G 10.04 33430G 1021785 -> replicated_ruleset
cephfs_data 24 270G 3.62 7213G 77568 -> hybrid
cephfs_metadata 25 39781k 0 7213G 44 -> flash

I have 23244G in SSDs in total.
As you can see, the "replicated_ruleset" rule chooses only HDDs, the "flash" rule only SSDs, and the "hybrid" rule uses one SSD (as the primary, since the HDDs' primary affinity is set to 0.5 while the SSDs' is 1.0) and HDDs for the remaining replicas.
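The primary-affinity setup described above can be applied per OSD with the standard command (shown for two example OSDs from the listing; these exact invocations are illustrative, not part of the original report):

```shell
# Prefer SSDs as primaries: 0.5 for HDD OSDs, 1.0 for SSD OSDs
# (values from the description; repeat for each OSD as appropriate).
ceph osd primary-affinity osd.0 0.5     # an HDD OSD
ceph osd primary-affinity osd.41 1.0    # an SSD OSD
```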

ceph df shows MAX AVAIL correctly for my pools using the "replicated_ruleset" and "flash" rules with replication 3. However, for the "hybrid" rule it shows the available space as if the pool were placing all 3 replicas on SSDs, even though only one replica lands on an SSD.
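If the limiting placement step were accounted for, the hybrid pool's MAX AVAIL would come out far larger than the 7213G shown. A minimal sketch of the expected arithmetic (an illustration, not Ceph's actual implementation; the capacity figures are rough aggregates backed out from the ceph df output above):

```python
# Sketch: a multi-step replicated rule is limited by whichever step runs out
# of space first, i.e. min(raw_avail_i / replicas_i) over its steps.
# Figures are approximate, derived from the reported MAX AVAIL values.

def expected_max_avail(steps):
    """steps: list of (raw_avail_gib, replicas_placed_by_step)."""
    return min(avail / replicas for avail, replicas in steps)

ssd_avail = 3 * 7213   # G: raw SSD headroom backed out of the flash pool (3 replicas)
hdd_avail = 3 * 33430  # G: same for replicated_ruleset on HDDs

# hybrid: 1 replica on SSDs, 2 replicas on HDDs
hybrid = expected_max_avail([(ssd_avail, 1), (hdd_avail, 2)])
print(round(hybrid))   # limited by the SSD step: 21639, not the 7213G shown
```

Under this estimate the hybrid pool is constrained by its single SSD replica per object, whereas the reported 7213G corresponds to charging all three replicas against the SSDs.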

Is this a display bug, or is something wrong with my configuration? And if it is a bug, will anything break if MAX AVAIL or USED fills up to over 100%?

History

#1 Updated by Patrick Fruh over 6 years ago

Forgot to put the output in code tags; sadly I can't edit the original, so here it is again to make it more readable:

OSDs:

ID CLASS WEIGHT  REWEIGHT SIZE  USE    AVAIL %USE  VAR  PGS
0   hdd 5.45599  1.00000 5587G  2259G 3327G 40.45 1.10 234
1   hdd 5.45599  1.00000 5587G  2295G 3291G 41.08 1.11 231
2   hdd 5.45599  1.00000 5587G  2321G 3265G 41.56 1.13 232
3   hdd 5.45599  1.00000 5587G  2313G 3273G 41.42 1.12 234
4   hdd 5.45599  1.00000 5587G  2105G 3481G 37.68 1.02 212
5   hdd 5.45599  1.00000 5587G  2231G 3355G 39.94 1.08 218
41   ssd 0.87299  1.00000  894G  9637M  884G  1.05 0.03  31
42   ssd 0.87299  1.00000  894G 13361M  881G  1.46 0.04  41
6   hdd 5.45599  1.00000 5587G  2404G 3182G 43.03 1.17 239
7   hdd 5.45599  1.00000 5587G  2226G 3360G 39.85 1.08 222
8   hdd 5.45599  1.00000 5587G  2668G 2918G 47.76 1.30 256
9   hdd 5.45599  1.00000 5587G  2366G 3220G 42.36 1.15 236
10   hdd 5.45599  1.00000 5587G  2454G 3132G 43.92 1.19 253
11   hdd 5.45599  1.00000 5587G  2405G 3181G 43.06 1.17 245
43   ssd 0.87299  1.00000  894G 15498M  879G  1.69 0.05  47
44   ssd 0.87299  1.00000  894G 10104M  884G  1.10 0.03  27
12   hdd 5.45599  1.00000 5587G  2242G 3344G 40.14 1.09 229
13   hdd 5.45599  1.00000 5587G  2551G 3035G 45.67 1.24 247
14   hdd 5.45599  1.00000 5587G  2513G 3074G 44.98 1.22 245
15   hdd 5.45599  1.00000 5587G  2014G 3572G 36.06 0.98 209
16   hdd 5.45599  1.00000 5587G  2586G 3000G 46.29 1.26 249
17   hdd 5.45599  1.00000 5587G  2459G 3127G 44.02 1.19 243
45   ssd 0.87299  1.00000  894G  9697M  884G  1.06 0.03  35
46   ssd 0.87299  1.00000  894G 12975M  881G  1.42 0.04  37
18   hdd 3.63699  1.00000 3724G  1595G 2128G 42.84 1.16 156
19   hdd 3.63699  1.00000 3724G  1387G 2336G 37.25 1.01 147
20   hdd 3.63699  1.00000 3724G  1551G 2172G 41.67 1.13 157
21   hdd 3.63699  1.00000 3724G  1535G 2189G 41.22 1.12 155
22   hdd 3.63699  1.00000 3724G  1459G 2264G 39.20 1.06 155
23   hdd 3.63699  1.00000 3724G  1395G 2329G 37.46 1.02 147
24   hdd 3.63699  1.00000 3724G  1489G 2234G 40.00 1.08 160
25   hdd 3.63699  1.00000 3724G  1634G 2090G 43.88 1.19 159
26   hdd 3.63699  1.00000 3724G  1566G 2157G 42.06 1.14 154
49   ssd 0.87299  1.00000  894G  9385M  884G  1.03 0.03  32
50   ssd 0.87299  1.00000  894G 12757M  881G  1.39 0.04  36
27   hdd 5.45599  1.00000 5587G  2462G 3124G 44.08 1.20 244
28   hdd 5.45599  1.00000 5587G  2314G 3272G 41.43 1.12 237
29   hdd 5.45599  1.00000 5587G  2166G 3420G 38.79 1.05 221
30   hdd 5.45599  1.00000 5587G  2484G 3102G 44.47 1.21 242
31   hdd 5.45599  1.00000 5587G  2292G 3294G 41.03 1.11 225
32   hdd 5.45599  1.00000 5587G  1982G 3604G 35.49 0.96 208
47   ssd 0.87299  1.00000  894G 12015M  882G  1.31 0.04  39
48   ssd 0.87299  1.00000  894G 14820M  879G  1.62 0.04  46
33   hdd 5.45599  1.00000 5587G  2002G 3584G 35.85 0.97 205
34   hdd 5.45599  1.00000 5587G  2069G 3517G 37.04 1.00 209
35   hdd 5.45599  1.00000 5587G  2187G 3399G 39.16 1.06 226
36   hdd 5.45599  1.00000 5587G  1821G 3765G 32.60 0.88 185
37   hdd 5.45599  1.00000 5587G  2123G 3463G 38.01 1.03 205
38   hdd 5.45599  1.00000 5587G  2197G 3390G 39.32 1.07 228
39   hdd 5.45599  1.00000 5587G  2180G 3406G 39.02 1.06 217
40   hdd 5.45599  1.00000 5587G  2232G 3354G 39.97 1.08 228
51   ssd 0.87320  1.00000  894G 14747M  879G  1.61 0.04  38
52   ssd 0.87320  1.00000  894G  7716M  886G  0.84 0.02  18
53   ssd 0.87320  1.00000  894G 12660M  881G  1.38 0.04  33
54   ssd 0.87320  1.00000  894G 11155M  883G  1.22 0.03  31
55   ssd 0.87320  1.00000  894G  9350M  885G  1.02 0.03  24
56   ssd 0.87320  1.00000  894G 13816M  880G  1.51 0.04  38
57   ssd 0.87320  1.00000  894G 10122M  884G  1.11 0.03  28
58   ssd 0.87320  1.00000  894G 10096M  884G  1.10 0.03  32
59   ssd 0.87320  1.00000  894G 13750M  880G  1.50 0.04  36
60   ssd 0.87320  1.00000  894G 16168M  878G  1.77 0.05  35
61   ssd 0.87320  1.00000  894G 11401M  883G  1.25 0.03  31
62   ssd 0.87320  1.00000  894G 12105M  882G  1.32 0.04  32
63   ssd 0.87320  1.00000  894G 13998M  880G  1.53 0.04  40
64   ssd 0.87320  1.00000  894G 17468M  877G  1.91 0.05  42
65   ssd 0.87320  1.00000  894G  9540M  884G  1.04 0.03  25
66   ssd 0.87320  1.00000  894G 15109M  879G  1.65 0.04  42
                    TOTAL  230T 86866G  145T 36.88
MIN/MAX VAR: 0.02/1.30  STDDEV: 22.48

Crush Map:

# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable chooseleaf_stable 1
tunable straw_calc_version 1
tunable allowed_bucket_algs 54

# devices
device 0 osd.0 class hdd
device 1 osd.1 class hdd
device 2 osd.2 class hdd
device 3 osd.3 class hdd
device 4 osd.4 class hdd
device 5 osd.5 class hdd
device 6 osd.6 class hdd
device 7 osd.7 class hdd
device 8 osd.8 class hdd
device 9 osd.9 class hdd
device 10 osd.10 class hdd
device 11 osd.11 class hdd
device 12 osd.12 class hdd
device 13 osd.13 class hdd
device 14 osd.14 class hdd
device 15 osd.15 class hdd
device 16 osd.16 class hdd
device 17 osd.17 class hdd
device 18 osd.18 class hdd
device 19 osd.19 class hdd
device 20 osd.20 class hdd
device 21 osd.21 class hdd
device 22 osd.22 class hdd
device 23 osd.23 class hdd
device 24 osd.24 class hdd
device 25 osd.25 class hdd
device 26 osd.26 class hdd
device 27 osd.27 class hdd
device 28 osd.28 class hdd
device 29 osd.29 class hdd
device 30 osd.30 class hdd
device 31 osd.31 class hdd
device 32 osd.32 class hdd
device 33 osd.33 class hdd
device 34 osd.34 class hdd
device 35 osd.35 class hdd
device 36 osd.36 class hdd
device 37 osd.37 class hdd
device 38 osd.38 class hdd
device 39 osd.39 class hdd
device 40 osd.40 class hdd
device 41 osd.41 class ssd
device 42 osd.42 class ssd
device 43 osd.43 class ssd
device 44 osd.44 class ssd
device 45 osd.45 class ssd
device 46 osd.46 class ssd
device 47 osd.47 class ssd
device 48 osd.48 class ssd
device 49 osd.49 class ssd
device 50 osd.50 class ssd
device 51 osd.51 class ssd
device 52 osd.52 class ssd
device 53 osd.53 class ssd
device 54 osd.54 class ssd
device 55 osd.55 class ssd
device 56 osd.56 class ssd
device 57 osd.57 class ssd
device 58 osd.58 class ssd
device 59 osd.59 class ssd
device 60 osd.60 class ssd
device 61 osd.61 class ssd
device 62 osd.62 class ssd
device 63 osd.63 class ssd
device 64 osd.64 class ssd
device 65 osd.65 class ssd
device 66 osd.66 class ssd

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host node1 {
        id -2           # do not change unnecessarily
        id -8 class hdd         # do not change unnecessarily
        id -15 class ssd                # do not change unnecessarily
        # weight 34.482
        alg straw2
        hash 0  # rjenkins1
        item osd.0 weight 5.456
        item osd.1 weight 5.456
        item osd.2 weight 5.456
        item osd.3 weight 5.456
        item osd.4 weight 5.456
        item osd.5 weight 5.456
        item osd.41 weight 0.873
        item osd.42 weight 0.873
}
host node2 {
        id -3           # do not change unnecessarily
        id -9 class hdd         # do not change unnecessarily
        id -16 class ssd                # do not change unnecessarily
        # weight 34.482
        alg straw2
        hash 0  # rjenkins1
        item osd.6 weight 5.456
        item osd.7 weight 5.456
        item osd.8 weight 5.456
        item osd.9 weight 5.456
        item osd.10 weight 5.456
        item osd.11 weight 5.456
        item osd.43 weight 0.873
        item osd.44 weight 0.873
}
host node3 {
        id -4           # do not change unnecessarily
        id -10 class hdd                # do not change unnecessarily
        id -17 class ssd                # do not change unnecessarily
        # weight 34.482
        alg straw2
        hash 0  # rjenkins1
        item osd.12 weight 5.456
        item osd.13 weight 5.456
        item osd.14 weight 5.456
        item osd.15 weight 5.456
        item osd.16 weight 5.456
        item osd.17 weight 5.456
        item osd.45 weight 0.873
        item osd.46 weight 0.873
}
host node4 {
        id -5           # do not change unnecessarily
        id -11 class hdd                # do not change unnecessarily
        id -18 class ssd                # do not change unnecessarily
        # weight 34.479
        alg straw2
        hash 0  # rjenkins1
        item osd.18 weight 3.637
        item osd.21 weight 3.637
        item osd.24 weight 3.637
        item osd.25 weight 3.637
        item osd.22 weight 3.637
        item osd.26 weight 3.637
        item osd.19 weight 3.637
        item osd.23 weight 3.637
        item osd.20 weight 3.637
        item osd.49 weight 0.873
        item osd.50 weight 0.873
}
host node5 {
        id -6           # do not change unnecessarily
        id -12 class hdd                # do not change unnecessarily
        id -19 class ssd                # do not change unnecessarily
        # weight 34.482
        alg straw2
        hash 0  # rjenkins1
        item osd.27 weight 5.456
        item osd.28 weight 5.456
        item osd.29 weight 5.456
        item osd.30 weight 5.456
        item osd.31 weight 5.456
        item osd.32 weight 5.456
        item osd.47 weight 0.873
        item osd.48 weight 0.873
}
host node6 {
        id -7           # do not change unnecessarily
        id -13 class hdd                # do not change unnecessarily
        id -20 class ssd                # do not change unnecessarily
        # weight 43.648
        alg straw2
        hash 0  # rjenkins1
        item osd.33 weight 5.456
        item osd.34 weight 5.456
        item osd.35 weight 5.456
        item osd.36 weight 5.456
        item osd.37 weight 5.456
        item osd.38 weight 5.456
        item osd.39 weight 5.456
        item osd.40 weight 5.456
}
host node7 {
        id -22          # do not change unnecessarily
        id -23 class hdd                # do not change unnecessarily
        id -24 class ssd                # do not change unnecessarily
        # weight 6.986
        alg straw2
        hash 0  # rjenkins1
        item osd.51 weight 0.873
        item osd.52 weight 0.873
        item osd.53 weight 0.873
        item osd.54 weight 0.873
        item osd.55 weight 0.873
        item osd.56 weight 0.873
        item osd.57 weight 0.873
        item osd.58 weight 0.873
}
host node8 {
        id -25          # do not change unnecessarily
        id -26 class hdd                # do not change unnecessarily
        id -27 class ssd                # do not change unnecessarily
        # weight 6.986
        alg straw2
        hash 0  # rjenkins1
        item osd.59 weight 0.873
        item osd.60 weight 0.873
        item osd.61 weight 0.873
        item osd.62 weight 0.873
        item osd.63 weight 0.873
        item osd.64 weight 0.873
        item osd.65 weight 0.873
        item osd.66 weight 0.873
}
root default {
        id -1           # do not change unnecessarily
        id -14 class hdd                # do not change unnecessarily
        id -21 class ssd                # do not change unnecessarily
        # weight 230.027
        alg straw2
        hash 0  # rjenkins1
        item node1 weight 34.482
        item node2 weight 34.482
        item node3 weight 34.482
        item node4 weight 34.479
        item node5 weight 34.482
        item node6 weight 43.649
        item node7 weight 6.986
        item node8 weight 6.986
}

# rules
rule replicated_ruleset {
        id 0
        type replicated
        min_size 1
        max_size 10
        step take default class hdd
        step chooseleaf firstn 0 type host
        step emit
}
rule hybrid {
        id 1
        type replicated
        min_size 1
        max_size 10
        step take default class ssd
        step chooseleaf firstn 1 type host
        step emit
        step take default class hdd
        step chooseleaf firstn -1 type host
        step emit
}
rule flash {
        id 2
        type replicated
        min_size 1
        max_size 10
        step take default class ssd
        step chooseleaf firstn 0 type host
        step emit
}

# end crush map

ceph df:

GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    230T      145T       86866G         36.88
POOLS:
    NAME                ID     USED       %USED     MAX AVAIL     OBJECTS
    pool1                7      10454G    23.82        33430G     2871159 -> replicated_ruleset
    pool2                8      13333G    28.51        33430G     3660994 -> replicated_ruleset
    rbd                 18      3729G     10.04        33430G     1021785 -> replicated_ruleset
    cephfs_data         24       270G      3.62         7213G       77568 -> hybrid
    cephfs_metadata     25     39781k         0         7213G          44 -> flash

#2 Updated by Greg Farnum about 6 years ago

  • Project changed from Ceph to RADOS
  • Category set to Administration/Usability
  • Component(RADOS) CRUSH added

Well, the hybrid ruleset isn't giving you as much host isolation as you're probably thinking, since it can select an SSD and a hard drive from the same node.

Not sure if the MAX AVAIL is just a display issue or what; my guess is the calculation can't handle the multi-part rule you've created.
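Both points (the actual mappings and possible host overlap) can be checked offline by running the rule through crushtool; a sketch, assuming the map is extracted from the cluster first (file names are placeholders):

```shell
# Grab and decompile the cluster's current CRUSH map.
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# Simulate the hybrid rule (id 1) with 3 replicas and print every mapping;
# scanning the resulting OSD triples shows whether an SSD and an HDD from
# the same host ever end up in one acting set.
crushtool -i crushmap.bin --test --rule 1 --num-rep 3 --show-mappings
```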
