Support #675


How to adjust replication level when disks are not the same size

Added by longguang yue over 13 years ago. Updated over 13 years ago.

Status:
Closed
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Tags:
Reviewed:
Affected Versions:
Pull request ID:

Description

Some disks are 3 TB, while some disks are less than 50 GB.
Can someone explain the meaning of the CRUSH map file?
  # types
  type 0 device
  type 1 domain
  type 2 pool

  # buckets
  domain root {
      id -1        # do not change unnecessarily
      alg straw
      hash 0       # rjenkins1
      item device0 weight 1.000
      item device1 weight 1.000
      item device2 weight 1.000
  }

  # rules
  rule data {
      ruleset 0
      type replicated
      min_size 1
      max_size 10
      step take root
      step choose firstn 0 type device
      step emit
  }
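
For illustration, a capacity-proportional weighting of the buckets section above might look like the sketch below. This is only a sketch: which item is the 3 TB disk and which are the 50 GB disks is assumed here, and the convention of roughly 1.0 per TB is a common choice rather than a requirement.

  domain root {
      id -1        # do not change unnecessarily
      alg straw
      hash 0       # rjenkins1
      item device0 weight 3.000    # 3 TB disk (assumed)
      item device1 weight 0.050    # 50 GB disk (assumed)
      item device2 weight 0.050    # 50 GB disk (assumed)
  }

With these weights CRUSH places data on device0, device1, and device2 in roughly a 60:1:1 ratio, matching their capacities.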
Actions #1

Updated by Sage Weil over 13 years ago

  • Status changed from New to Closed

The crush weight should be proportional to the disk size or to the node throughput, whichever you prefer; it depends on whether your goal is to fill the disks or to maximize throughput. Either way, the weight controls the amount of data (and thus IO) each osd gets.
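
A minimal sketch of how the weights might be changed to match capacity, using the item names device0/device1 from the map above; the weight values (roughly 1.0 per TB) are illustrative:

  # Export and decompile the current CRUSH map
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt

  # Edit the item weights in crushmap.txt, e.g.
  #   item device0 weight 3.000    # 3 TB disk
  #   item device1 weight 0.050    # 50 GB disk

  # Recompile and inject the edited map
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new

Depending on the Ceph version, a single item's weight may also be adjustable in place with "ceph osd crush reweight <name> <weight>" instead of editing the whole map.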

Actions #2

Updated by longguang yue over 13 years ago

I do not understand what you mean. Can you use the above as an example: with a 50 GB disk and a 3 TB disk, how should the device weights be adjusted?
I tried several times, but it does not work:
I set one device's weight to 0, but it still stores data.
I set the weights to 1:1000, but the resulting proportion is not 1:1000, so I do not know how.
----
2. Meanwhile, I do not know how to debug Ceph (http://ceph.newdream.net/wiki/Debugging).
I want to enable all debug options; how should ceph.conf be configured?
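
A minimal sketch of the kind of ceph.conf debug settings being asked about; the subsystems listed are an illustrative subset rather than an exhaustive list, and 20 is the most verbose level:

  [global]
      ; verbose logging for some common subsystems (illustrative, not exhaustive)
      debug ms = 20         ; messenger / network traffic
      debug mon = 20        ; monitor
      debug osd = 20        ; OSD
      debug filestore = 20  ; object store backend
      debug journal = 20    ; OSD journal
      log file = /var/log/ceph/$name.log

Settings placed under [global] apply to every daemon; they can instead be placed under [osd], [mon], etc. to limit the verbosity to one daemon type.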

Actions #3

Updated by longguang yue over 13 years ago

And at the highest debug level.
