Feature #6227

make osd crush placement on startup handle multiple trees (e.g., ssd + sas)

Added by Jens-Christian Fischer over 10 years ago. Updated over 10 years ago.

Status: New
Priority: Normal
Assignee: -
Category: OSD
Target version: -
% Done: 0%
Source: Community (user)
Tags:
Backport:
Reviewed:
Affected Versions:
Pull request ID:

Description

See the attached crush map (issue-crush-map2.txt) for the layout: we have 64 SATA OSDs in one hierarchy and 6 SSD OSDs under a separate 'ssd' root. Drives of both types sit in the same physical servers (for example, one server holds 9 SATA drives and 1 SSD).

When I restart an OSD, it gets into the wrong root:

root@ineri ~$ service ceph -a restart osd.69
=== osd.69 ===
=== osd.69 ===
Stopping Ceph osd.69 on s0...kill 25718...done
=== osd.69 ===
2013-09-04 17:00:55.022990 7fb44b7ae700 0 -- :/1848 >> [2001:620:0:6::110]:6789/0 pipe(0x2f6f370 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
create-or-move updating item id 69 name 'osd.69' weight 0.47 at location {host=s0,root=default} to crush map
Starting Ceph osd.69 on s0...
starting osd.69 at :/0 osd_data /var/lib/ceph/osd/ceph-69 /dev/sda11

(the OSD now sits under host s0 / root default instead of host s0ssd / root ssd)
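A possible mitigation (untested here, and assuming the init script honours a per-daemon 'osd crush location' option, which later Ceph releases document) would be to pin the location in ceph.conf so the startup create-or-move call targets the ssd root:

[osd.69]
        osd crush location = root=ssd host=s0ssd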

For now, I have to manually remove the OSD and set it back to the correct position:

root@ineri ~$ ceph osd crush remove osd.69
removed item id 69 name 'osd.69' from crush map
root@ineri ~$ ceph osd crush set 69 0.5 root=ssd host=s0ssd
set item id 69 name 'osd.69' weight 0.5 at location {host=s0ssd,root=ssd} to crush map

root@ineri ~$ ceph --version
ceph version 0.61.5 (8ee10dc4bb73bdd918873f29c70eedc3c7ef1979)
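Since both drive types share hosts, the startup placement would ideally be derived per OSD. Later Ceph releases document an 'osd crush location hook' for this; whether it is usable in 0.61.x is unclear. A hypothetical hook is sketched below; the SSD OSD id list, the "<host>ssd" bucket naming, and the exact argument convention are assumptions for illustration:

#!/bin/sh
# Hypothetical CRUSH location hook (set via 'osd crush location hook' in ceph.conf).
# It is assumed to be invoked with '--id <osd-id>' among its arguments and to print
# the location string used by the startup create-or-move.
id=""
while [ $# -ge 1 ]; do
    case "$1" in
        --id) shift; id="$1" ;;
    esac
    shift
done
case "$id" in
    64|65|66|67|68|69)                       # assumed SSD OSD ids, one per host
        echo "root=ssd host=$(hostname -s)ssd" ;;
    *)
        echo "root=default host=$(hostname -s)" ;;
esac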


Files

issue-crush-map2.txt (6.42 KB), Jens-Christian Fischer, 09/04/2013 08:02 AM