Feature #1448
closed
Added by Sage Weil over 12 years ago.
Updated about 5 years ago.
Component(FS):
Hadoop/Java
Description
- set it up on some sepia nodes (8?)
- do some basic testing of ceph vs hdfs
from doug cutting:
>>> My suggestion is that you start with CDH then try to run the TeraSort
>>> benchmark program. I'd start by running it on a small cluster (1-5
>>> nodes) and, if that goes well, try moving up to a larger cluster if you have
>>> one available. Other benchmarks worth considering are TestDFSIO (a simple
>>> read or write throughput test) and GridMix (a blend of workloads).
it probably isn't necessary to tie this into teuthology right now. let's get a bit of experience spinning it up manually first.
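The benchmarks suggested above can be sketched as plain shell invocations. Jar names and paths vary by Hadoop version; these assume a CDH3-era Hadoop 0.20-style layout with `$HADOOP_HOME` set, so adjust to whatever the install actually ships:

```shell
# TeraSort: generate ~1 GB (10M 100-byte rows), sort it, validate the output.
# Jar names and HDFS paths here are assumptions for a Hadoop 0.20 / CDH layout.
hadoop jar $HADOOP_HOME/hadoop-examples.jar teragen 10000000 /benchmarks/tera-in
hadoop jar $HADOOP_HOME/hadoop-examples.jar terasort /benchmarks/tera-in /benchmarks/tera-out
hadoop jar $HADOOP_HOME/hadoop-examples.jar teravalidate /benchmarks/tera-out /benchmarks/tera-report

# TestDFSIO: write then read 10 files of 1000 MB each, then clean up.
hadoop jar $HADOOP_HOME/hadoop-test.jar TestDFSIO -write -nrFiles 10 -fileSize 1000
hadoop jar $HADOOP_HOME/hadoop-test.jar TestDFSIO -read -nrFiles 10 -fileSize 1000
hadoop jar $HADOOP_HOME/hadoop-test.jar TestDFSIO -clean
```

These need a running cluster, so they only document the intended invocations, not something runnable standalone.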
- Target version changed from v0.36 to v0.37
- Target version deleted (v0.37)
The following TestDFSIO results are from a cluster of 12 OSDs and 1 MDS/MON, with a single ext4 disk per node dedicated to Ceph. I'm not sure what to expect on write, but read looks like it's about the line rate of the disk, with 4 of the 10 map tasks data-local.
11/11/10 14:39:21 INFO fs.TestDFSIO: ----- TestDFSIO ----- : read
11/11/10 14:39:21 INFO fs.TestDFSIO: Date & time: Thu Nov 10 14:39:21 PST 2011
11/11/10 14:39:21 INFO fs.TestDFSIO: Number of files: 10
11/11/10 14:39:21 INFO fs.TestDFSIO: Total MBytes processed: 10000
11/11/10 14:39:21 INFO fs.TestDFSIO: Throughput mb/sec: 57.125556260104084
11/11/10 14:39:21 INFO fs.TestDFSIO: Average IO rate mb/sec: 62.511688232421875
11/11/10 14:39:21 INFO fs.TestDFSIO: IO rate std deviation: 21.708524264416006
11/11/10 14:39:21 INFO fs.TestDFSIO: Test exec time sec: 60.19
11/11/10 14:39:21 INFO fs.TestDFSIO:
11/11/10 14:37:20 INFO fs.TestDFSIO: ----- TestDFSIO ----- : write
11/11/10 14:37:20 INFO fs.TestDFSIO: Date & time: Thu Nov 10 14:37:20 PST 2011
11/11/10 14:37:20 INFO fs.TestDFSIO: Number of files: 10
11/11/10 14:37:20 INFO fs.TestDFSIO: Total MBytes processed: 10000
11/11/10 14:37:20 INFO fs.TestDFSIO: Throughput mb/sec: 5.975775401676683
11/11/10 14:37:20 INFO fs.TestDFSIO: Average IO rate mb/sec: 6.63668966293335
11/11/10 14:37:20 INFO fs.TestDFSIO: IO rate std deviation: 2.014126677978541
11/11/10 14:37:20 INFO fs.TestDFSIO: Test exec time sec: 380.263
11/11/10 14:37:20 INFO fs.TestDFSIO:
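For comparing runs side by side (e.g. ceph vs. hdfs, or read vs. write as above), the headline numbers can be scraped out of captured TestDFSIO output with grep/awk. The log file name here is just an assumption about how the job output was captured:

```shell
# Write a short sample of the TestDFSIO summary (normally this would be the
# captured job output) and pull out the throughput value.
cat > /tmp/testdfsio-read.log <<'EOF'
11/11/10 14:39:21 INFO fs.TestDFSIO: Throughput mb/sec: 57.125556260104084
11/11/10 14:39:21 INFO fs.TestDFSIO: Test exec time sec: 60.19
EOF

# Each summary metric's value is the last whitespace-separated field on its line.
grep 'Throughput mb/sec' /tmp/testdfsio-read.log | awk '{print $NF}'
```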
- Project changed from Ceph to CephFS
- Category deleted (20)
- Target version set to v0.55d
- Target version changed from v0.55d to v0.56
- Status changed from New to In Progress
- Target version deleted (v0.56)
Are nodes available for scale testing? The ISSDM cluster is withering away.
- Status changed from In Progress to Resolved
- Component(FS) Hadoop/Java added
- Category deleted (48)
- Labels (FS) Java/Hadoop added