Feature #6656

Better disk/cluster handling

Added by Mark Nelson over 10 years ago. Updated over 10 years ago.

Status: New
Priority: Normal
Assignee: -
Category: -
% Done: 0%
Source: other
Tags:
Backport:
Reviewed:
Affected Versions:

Description

Teuthology's ability to control how the disks in a node are used is limited. For performance testing, we need the ability to easily define, from the YAML, which hardware should be used for the OSDs. This will become more important as we push teuthology onto customer or partner clusters in a more ad-hoc fashion. Presumably this would be done through something like ceph-disk-prepare or ceph-deploy.

Example teuthology disk configuration scenarios:

Fully-Automatic (existing behavior)

Requirements

Detect the devices in the system (exists)
Use homogeneous disk types for OSDs and Journals respectively (new; see the sketch below)
Optionally try to make it smarter (I think this could be a black hole, personally)

Benefits

No configuration necessary

Caveats

Intra-node topology may matter (scaling tests across multiple controllers)
Mixed hardware is problematic (multiple brands or classes of disks in the same system)
User may want to do something unexpected
Not appropriate when (re)testing an existing cluster topology at a customer site

Example Configs:

<no config needed>
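As a rough illustration of the "homogeneous disk types" requirement above, here is a minimal Python sketch that groups block devices by the model string exposed under /sys/block. The grouping heuristic and paths are assumptions for illustration; this is not teuthology's actual detection code.

import os

def detect_disk_groups(sys_block='/sys/block'):
    """Group block devices by model string so OSDs and journals can each
    be placed on a homogeneous set of disks. Illustrative sketch only;
    not teuthology's real detection logic."""
    groups = {}
    for dev in sorted(os.listdir(sys_block)):
        model_path = os.path.join(sys_block, dev, 'device', 'model')
        if not os.path.isfile(model_path):
            continue  # loop/ram/dm devices expose no model string; skip
        with open(model_path) as f:
            model = f.read().strip()
        groups.setdefault(model, []).append('/dev/' + dev)
    return groups

if __name__ == '__main__':
    for model, devs in sorted(detect_disk_groups().items()):
        print('%s: %s' % (model, ', '.join(devs)))

Picking the largest group for OSDs and a second, smaller group (e.g. SSDs) for journals would cover the common homogeneous case; the mixed-hardware caveat above is exactly where this heuristic breaks down.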

Semi-Automatic

Requirements

Provide a compact syntax for end-user specification of OSD and journal devices.

Benefits

Significant end-user control with minimal markup.
Create new partitions or deploy on a set of existing partitions.

Caveats

User needs to know which devices to use for OSDs and Journals
Does not provide total control over topology.

Example Configs:

6 disks + 2 SSDs with 3 journals per SSD, skipping sda and sdf:

osd_devs: "/dev/sd[b-c,g-j]1"

journal_devs: "/dev/sd[d,e][1,2,3]"

6 disks with journals on-disk after the data partition, skipping sda, sdd, sde, and sdf:

osd_devs: "/dev/sd[b-c,g-j]1"

journal_devs: "/dev/sd[b-c,g-j]2"
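For illustration, a small Python sketch of how the compact bracket syntax above could expand into concrete device paths. expand_devs is a hypothetical helper, and the grammar (comma-separated letters, letter ranges, and digit lists inside brackets) is inferred from the examples, not from any existing teuthology code.

import re
from itertools import product

def expand_devs(spec):
    """Expand a compact device spec like "/dev/sd[b-c,g-j]1" into a flat
    list of device paths. Hypothetical helper for the syntax sketched
    above."""
    # Split the spec into literal text and [...] groups.
    parts = re.split(r'(\[[^\]]*\])', spec)
    choices = []
    for part in parts:
        if part.startswith('['):
            opts = []
            for item in part[1:-1].split(','):
                if '-' in item and len(item) == 3:
                    # A range like "b-c" or "g-j": expand character by character.
                    lo, hi = item.split('-')
                    opts.extend(chr(c) for c in range(ord(lo), ord(hi) + 1))
                else:
                    opts.append(item)
            choices.append(opts)
        elif part:
            choices.append([part])
    return [''.join(combo) for combo in product(*choices)]

print(expand_devs('/dev/sd[b-c,g-j]1'))
# ['/dev/sdb1', '/dev/sdc1', '/dev/sdg1', '/dev/sdh1', '/dev/sdi1', '/dev/sdj1']
print(expand_devs('/dev/sd[d,e][1,2,3]'))
# ['/dev/sdd1', '/dev/sdd2', '/dev/sdd3', '/dev/sde1', '/dev/sde2', '/dev/sde3']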

Manual

Requirements

Pass-through exact placement of data disks and journals.

Benefits

Total control over data and journal placement (specifically in relation to each other)

Caveats

User needs to know exactly which disks to use for which OSDs/Journals
Extremely verbose.

Example YAML:

OSD.0:
  host: foo1
  osd_data: /dev/sdb1
  osd_journal: /dev/sdg1

OSD.1:
  host: foo1
  osd_data: /dev/sdc1
  osd_journal: /dev/sdg2

OSD.2:
  host: foo2
  osd_data: /dev/sdb1
  osd_journal: /dev/sdg1

...etc...
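To show how such a per-OSD layout might be consumed, here is a minimal sketch that loads the mapping with PyYAML and prints one ceph-disk prepare invocation per OSD. The top-level structure and key names follow the example above; the command generation (and running it remotely on each host) is purely illustrative.

import yaml

MANUAL_LAYOUT = """
OSD.0: {host: foo1, osd_data: /dev/sdb1, osd_journal: /dev/sdg1}
OSD.1: {host: foo1, osd_data: /dev/sdc1, osd_journal: /dev/sdg2}
OSD.2: {host: foo2, osd_data: /dev/sdb1, osd_journal: /dev/sdg1}
"""

layout = yaml.safe_load(MANUAL_LAYOUT)
for osd, cfg in sorted(layout.items()):
    # ceph-disk prepare takes a data device and a journal device;
    # dispatching this to cfg['host'] is left out of this sketch.
    print('%s on %s: ceph-disk prepare %s %s'
          % (osd, cfg['host'], cfg['osd_data'], cfg['osd_journal']))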


Subtasks (1 open, 0 closed)

Subtask #6657: Put Journals on Block Devices (New, 10/28/2013)
