Support #12085

data is not well-distributed among osds of a host with the straw algorithm

Added by chuanhong wang almost 9 years ago. Updated almost 9 years ago.

Status:
New
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%

Tags:
Reviewed:
Affected Versions:
Pull request ID:

Description

environment: ceph 0.87 + CentOS 7
problem: there are three hosts in my ceph cluster, and the crush bucket algorithm for each host is straw. It seems that data is not well-distributed among the osds. For example, in host ceph2, osd.1 and osd.4 are full, but only 26% of osd.6 and 35% of osd.8 are used. My question is: what kind of bucket algorithm is preferred for a host? Are four items too few for a 'straw' bucket? If so, how many items should a 'straw' bucket contain at minimum?
The crush map of my cluster is attached.
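For reference, one way to check how CRUSH itself would spread placements over these osds is to recompile the attached decompiled map and run crushtool's placement test. This is only a sketch: crush.bin is a placeholder output name and --num-rep 2 assumes 2 replicas, which may not match the pools here.

# recompile the attached decompiled map into binary form
crushtool -c crush.txt -o crush.bin
# simulate placements and report how many land on each osd
crushtool -i crush.bin --test --show-utilization --num-rep 2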

[root@ceph2 ~]# ceph -s
cluster cbc79ef9-fbc3-41ad-a726-47359f8d84b3
health HEALTH_ERR 6 pgs backfill_toofull; 3 pgs inconsistent; 6 pgs stuck unclean; recovery 366/1516021 objects degraded (0.024%); 9296/1516021 objects misplaced (0.613%); 1 full osd(s); 3 scrub errors
monmap e7: 3 mons at {ceph1=192.168.200.246:6789/0,ceph2=192.168.200.247:6789/0,ceph3=192.168.200.242:6789/0}, election epoch 1312, quorum 0,1,2 ceph3,ceph1,ceph2
osdmap e14700: 12 osds: 12 up, 12 in
flags full
pgmap v3244880: 4608 pgs, 12 pools, 550 GB data, 491 kobjects
1703 GB used, 8000 GB / 9704 GB avail
366/1516021 objects degraded (0.024%); 9296/1516021 objects misplaced (0.613%)
4599 active+clean
3 active+clean+inconsistent
6 active+remapped+backfill_toofull
client io 15594 B/s rd, 21 op/s
[root@ceph2 ~]# ceph osd tree
# id weight type name up/down reweight
-1 9.47 root default
-3 1.1 host ceph2
4 0.27 osd.4 up 1
6 0.27 osd.6 up 1
8 0.27 osd.8 up 1
1 0.29 osd.1 up 1
-4 1.1 host ceph3
0 0.29 osd.0 up 1
2 0.27 osd.2 up 1
3 0.27 osd.3 up 1
5 0.27 osd.5 up 1
-2 7.27 host ceph1
13 1.82 osd.13 up 1
14 1.82 osd.14 up 1
15 1.81 osd.15 up 1
16 1.82 osd.16 up 1
[root@ceph2 ceph]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_sys-lv_root 97G 8.1G 84G 9% /
devtmpfs 32G 0 32G 0% /dev
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 32G 675M 31G 3% /run
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/sda1 380M 98M 258M 28% /boot
/dev/sdc1 280G 280G 20K 100% /var/lib/ceph/osd/ceph-4
/dev/sde1 280G 98G 182G 35% /var/lib/ceph/osd/ceph-8
/dev/sdb1 293G 274G 20G 94% /var/lib/ceph/osd/ceph-1
/dev/sdd1 280G 71G 209G 26% /var/lib/ceph/osd/ceph-6
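Note: as a possible short-term workaround (hypothetical, not something already tried on this cluster), the over-full osds could be reweighted down so backfill can drain data off them, either automatically or by hand:

# lower the reweight of osds whose utilization is more than 110% of the cluster average
ceph osd reweight-by-utilization 110
# or nudge a single osd down manually (reweight value is between 0 and 1)
ceph osd reweight 1 0.8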

Files

crush.txt (4.48 KB) - chuanhong wang, 06/19/2015 08:13 AM