Bug #39600

CRUSH rule device classes mystery

Added by xie xingguo almost 5 years ago. Updated almost 5 years ago.

Status: Resolved
Priority: Normal
Category: OSDMap
% Done: 0%
Source: Community (dev)
Regression: No
Severity: 3 - minor

Description

I'm playing around with CRUSH rules and device classes, and I'm puzzled
as to whether they are working correctly. Platform specifics: Ubuntu Bionic with Ceph 14.2.1.

I created two new device classes, "cheaphdd" and "fasthdd". I made
sure these device classes are applied to the right OSDs and that the
(shadow) CRUSH trees correctly filter the right classes for the
OSDs (ceph osd crush tree --show-shadow).
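For context, a minimal sketch of how such classes are typically assigned; the
fasthdd OSD ids (0, 3, 6) follow from the crushtool output later in this
report, while the cheaphdd ids are purely illustrative:

# An OSD's existing class must be removed before a new one is set
ceph osd crush rm-device-class osd.0 osd.3 osd.6
ceph osd crush set-device-class fasthdd osd.0 osd.3 osd.6

# Same for the cheap disks (OSD ids here are assumptions)
ceph osd crush rm-device-class osd.1 osd.2 osd.4
ceph osd crush set-device-class cheaphdd osd.1 osd.2 osd.4

# Verify the assignments and the per-class shadow trees
ceph osd tree
ceph osd crush tree --show-shadow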

I then created two new crush rules:

ceph osd crush rule create-replicated fastdisks default host fasthdd
ceph osd crush rule create-replicated cheapdisks default host cheaphdd
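As a quick sanity check (standard Ceph commands; the rule names are the ones
just created), the new rules can be listed and inspected straight from the
monitors; the decompiled-map form follows below:

ceph osd crush rule ls
ceph osd crush rule dump fastdisks
ceph osd crush rule dump cheapdisks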

The resulting rules in the decompiled CRUSH map:

rule replicated_rule {
    id 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
rule fastdisks {
    id 1
    type replicated
    min_size 1
    max_size 10
    step take default class fasthdd
    step chooseleaf firstn 0 type host
    step emit
}
rule cheapdisks {
    id 2
    type replicated
    min_size 1
    max_size 10
    step take default class cheaphdd
    step chooseleaf firstn 0 type host
    step emit
}
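For reference, a sketch of how to dump and decompile the map to obtain the
listing above (using the same /tmp/crush_raw path as the crushtool test
further down):

ceph osd getcrushmap -o /tmp/crush_raw
crushtool -d /tmp/crush_raw -o /tmp/crush.txt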

After that I put the cephfs_metadata pool on the fastdisks CRUSH rule:

ceph osd pool set cephfs_metadata crush_rule fastdisks
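To confirm the pool picked up the new rule (a standard check, not part of the
original report):

ceph osd pool get cephfs_metadata crush_rule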

Some data is moved to new OSDs, but strangely enough there is still data in PGs
residing on OSDs in the "cheaphdd" class. I confirmed this with:

ceph pg ls-by-pool cephfs_data
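For a per-PG view of where a pool's data actually sits, something like the
sketch below works; the .pg_stats and .acting field names are assumptions
about the Nautilus-era JSON output of pg ls-by-pool:

# Print each PG of a pool together with its acting OSD set
ceph pg ls-by-pool <pool> -f json | \
    jq -r '.pg_stats[] | "\(.pgid) \(.acting)"'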

Testing CRUSH rule nr. 1 gives me:

crushtool -i /tmp/crush_raw --test --show-mappings --rule 1 --min-x 1 --max-x 4 --num-rep 3
CRUSH rule 1 x 1 [0,3,6]
CRUSH rule 1 x 2 [3,6,0]
CRUSH rule 1 x 3 [0,6,3]
CRUSH rule 1 x 4 [0,6,3]

Which are indeed the OSDs in the fasthdd class.
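For a broader check than four sample inputs, crushtool can also summarize how
often each OSD is selected over a larger input range; the range of 1024 here
is arbitrary:

crushtool -i /tmp/crush_raw --test --rule 1 --num-rep 3 --min-x 1 --max-x 1024 --show-utilization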

Why is not all of the data moved to OSDs 0, 3 and 6? Why is it still spread
across OSDs in the cheaphdd class as well?

Thanks,

Stefan

#1

Updated by xie xingguo almost 5 years ago

  • Status changed from New to Resolved