h1. Erasure coded storage backend (step 2)
h3. Summary
See also the [[|transcript]] of the "session":https://www.youtube.com/watch?v=-K8bSHx7zJ0&t=30m00s and the associated "pad":http://pad.ceph.com/p/osd-erasure-coding.
As of "Dumpling":http://tracker.ceph.com/versions/160 Ceph keeps copies of objects to not loose them when hardware fails. It is an efficient solution for Ceph block storage but when performances are less important than cost, "erasure codes"http://en.wikipedia.org/wiki/Erasure_code achieves significant savings. For instance, instead of buying 3PB to store 1PB, it is enough to buy 1.5PB and have better resilience. Erasure coding an object works by splitting it in K data chunks ( let say 10 ) and computing M parity chunks ( let say 3 ). By storing each of the 13 chunks on different disks, it is possible to rebuild the original object even if 3 of them fail. It is better than the resilience provided by three copies and requires only half the space, but it comes at a price : it uses more CPU, bandwidth and is more complex to design.
The use cases of "Genomic data":http://www.annaisystems.com/, "backups":http://dachary.org/?p=2087 and others that require cheap hardware and extremely resilient storage, written once or twice and rarely read, are perfect matches for an erasure coded storage backend. Ceph was originally designed to include "RAID4":https://github.com/ceph/ceph/blob/71531edd8a645a249f5edc984e5d8be85317abf7/src/osd/OSD.cc#L1356 as an alternative to replication, and the work, suspended for years, was resumed after the [[Erasure encoding as a storage backend|first Ceph summit in May 2013]].
h3. Owners
* Loic Dachary
* Samuel Just (Inktank)
h3. Current Status
* Scheduled for Firefly (Feb 2014)
* Loic Dachary: full-time developer
* Samuel Just: specifications and code review
* Placement group code refactoring being worked on
* High level design being worked on, with a PGBackend interface
* Erasure code API design complete
h3. Detailed Description
Design Doc: https://github.com/ceph/ceph/blob/wip-erasure-coding-doc/doc/dev/osd_internals/erasure_coding.rst
Current work-in-progress PGBackend interface: https://github.com/ceph/ceph/blob/wip-erasure-coding-doc/src/osd/PGBackend.h
Developer notes: https://github.com/dachary/ceph/blob...asure-code.rst

Erasure coding or replication (pg_pool_t::TYPE_REP) is a property of a pool and is currently implemented in PG.{cc,h} and ReplicatedPG.{cc,h}. Implementing erasure coding requires a refactoring to allow multiple backends. The first step is to factor out of ReplicatedPG the code that is independent of the resilience policy.
* PGLog is mostly done; it includes the missing set as well as the logic to merge logs during recovery. It is already in master but does not yet have a sensible API.
* PGRegistry groups the ObjectContext, SnapSetContext and Watchers registries. The API is mostly good and the implementation is being cleaned up. The erasure code backend won't use the SnapSetContext because snapshots are not supported. PG knows nothing about this registry. Once the #5510 pull request is merged, it will be ready to be moved to a PGBackend base class.
Other parts of the code need to be reworked so that ReplicatedPG only contains the parts specific to replication. Some code needs to be moved between ReplicatedPG and PG. Where moving code is not enough, it will be refactored to isolate the parts that belong only to ReplicatedPG. The design document contains an inventory of what needs to be abstracted into PGBackend and the associated rationale.

Once the ReplicatedPG implementation of a given concept (backfill, recovery, scrub, peering, read, write) is reasonably free of code common to both erasure coding and replication, the PG implementation will be severed from the ReplicatedPG sub class and changed to use a PGBackend data member, so that it is not aware of the resilience policy.
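
To illustrate the target structure, here is a hypothetical sketch of that split; the actual interface is the work-in-progress PGBackend.h linked above and differs in detail:

<pre>
// Hypothetical sketch of the PG / PGBackend split described above;
// method names are illustrative, the real draft is in src/osd/PGBackend.h.
class hobject_t;  // object identifier, defined elsewhere in the tree

class PGBackend {
 public:
  virtual ~PGBackend() {}
  // Resilience specific operations that PG delegates instead of
  // implementing them itself.
  virtual void submit_write(const hobject_t &oid) = 0;
  virtual void recover_object(const hobject_t &oid) = 0;
  virtual void scrub_object(const hobject_t &oid) = 0;
};

// One subclass per resilience policy.
class ReplicatedBackend : public PGBackend { /* pushes full copies */ };
class ECBackend : public PGBackend { /* encodes K+M chunks */ };

class PG {
  PGBackend *backend;  // PG never inspects the resilience policy
  // peering, logs, object registry, ... remain here, shared by both
};
</pre>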
The erasure code plugin library can be implemented with the following API and the jerasure library:
* erasure_coding_t *create_erasure_code_context(g_conf)
Returns an object configured to encode and decode according to a given algorithm and a given set of parameters, as specified in g_conf. Parameters must be prefixed with erasure-code to avoid name collisions:
<pre>
ceph osd pool set-erasure-code <pool> k 10
ceph osd pool set-erasure-code <pool> m 3
ceph osd pool set-erasure-code <pool> algorithm Reed-Solomon
</pre>
The following are methods of erasure_coding_t.
* set<int> minimum_to_decode(const set<int> &want_to_read, const set<int> &available_chunks);
Returns the smallest subset of available_chunks that needs to be retrieved in order to successfully decode the want_to_read chunks.
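For instance, with the K=2,M=1 toy code used in the encode/decode examples below, a plausible behaviour is: if chunk 2 is still available it is sufficient by itself, and if it is lost both remaining chunks are needed to rebuild it:
<pre>
minimum_to_decode([2], [1, 2, 3]) => [2]
minimum_to_decode([2], [1, 3])    => [1, 3]
</pre>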
* set<int> minimum_to_decode_with_cost(const set<int> &want_to_read, const map<int, int> &available)
Returns the minimum-cost set of chunks required to read the specified chunks, given a mapping of available chunks to costs. The costs make it possible to account for the difference between reading local chunks and reading remote chunks.
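For instance, still with the K=2,M=1 toy code, if chunk 1 is expensive to fetch (cost 5, e.g. remote) while chunks 2 and 3 are cheap (cost 1 each, e.g. local), rebuilding chunk 1 from the other two is cheaper than reading it directly:
<pre>
minimum_to_decode_with_cost([1], { 1 => 5, 2 => 1, 3 => 1 }) => [2, 3]
</pre>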
* map<int, buffer> encode(const set<int> &want_to_encode, const buffer &in)
Encodes the content of in and returns a map associating each chunk number with its encoded content. The map only contains the chunks listed in the want_to_encode set. For instance, in the simplest case K=2,M=1 (two data chunks and one parity chunk), for a buffer containing AB:
<pre>
encode([1,2,3], 'AB') => { 1 => 'A', 2 => 'B', 3 => 'Z' }
</pre>
If only the parity chunk is of interest:
<pre>
encode([3], 'AB') => { 3 => 'Z' }
</pre>
* map<int, buffer> decode(const set<int> &want_to_read, const map<int, buffer> &chunks)
Decodes chunks to read the content of the want_to_read chunks and returns a map associating each chunk number with its decoded content. For instance, in the simplest case K=2,M=1, for an encoded payload of data A and B with parity Z:
<pre>
decode([1,2], { 1 => 'A', 2 => 'B', 3 => 'Z' }) => { 1 => 'A', 2 => 'B' }
</pre>
If, however, chunk B is to be read but is missing:
<pre>
decode([2], { 1 => 'A', 3 => 'Z' }) => { 2 => 'B' }
</pre>
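
To make the API concrete, below is a minimal sketch of these methods for the K=2,M=1 toy code, where the parity chunk is the XOR of the two data chunks (so the 'Z' above stands for the XOR of 'A' and 'B'). std::string stands in for Ceph's buffer type, minimum_to_decode_with_cost is omitted, and error handling is reduced to asserts:

<pre>
#include <cassert>
#include <map>
#include <set>
#include <string>

typedef std::string buffer;  // stand-in for Ceph's buffer type

// XOR two equal-length buffers: the parity operation of this toy code.
static buffer xor_buffers(const buffer &a, const buffer &b) {
  assert(a.size() == b.size());
  buffer out(a.size(), '\0');
  for (size_t i = 0; i < a.size(); ++i)
    out[i] = a[i] ^ b[i];
  return out;
}

struct toy_erasure_code {
  // Chunks already available satisfy themselves; otherwise any two of
  // the three chunks reconstruct everything.
  std::set<int> minimum_to_decode(const std::set<int> &want_to_read,
                                  const std::set<int> &available_chunks) {
    bool missing = false;
    for (int i : want_to_read)
      if (!available_chunks.count(i))
        missing = true;
    if (!missing)
      return want_to_read;
    assert(available_chunks.size() >= 2);  // otherwise unrecoverable
    std::set<int>::const_iterator it = available_chunks.begin();
    std::set<int> out;
    out.insert(*it++);
    out.insert(*it);
    return out;
  }

  // Split in into two data chunks, compute the parity chunk, then keep
  // only the requested chunk numbers.
  std::map<int, buffer> encode(const std::set<int> &want_to_encode,
                               const buffer &in) {
    assert(in.size() % 2 == 0);  // keep both data chunks equal sized
    std::map<int, buffer> all;
    all[1] = in.substr(0, in.size() / 2);
    all[2] = in.substr(in.size() / 2);
    all[3] = xor_buffers(all[1], all[2]);
    std::map<int, buffer> out;
    for (int i : want_to_encode)
      out[i] = all[i];
    return out;
  }

  // Rebuild the missing chunk, if any, as the XOR of the two present
  // ones, then return the requested chunks.
  std::map<int, buffer> decode(const std::set<int> &want_to_read,
                               const std::map<int, buffer> &chunks) {
    assert(chunks.size() >= 2);
    std::map<int, buffer> have(chunks);
    for (int i = 1; i <= 3; ++i) {
      if (!have.count(i)) {
        int j = (i == 1) ? 2 : 1;
        int k = (i == 3) ? 2 : 3;
        have[i] = xor_buffers(have[j], have[k]);
      }
    }
    std::map<int, buffer> out;
    for (int i : want_to_read)
      out[i] = have[i];
    return out;
  }
};
</pre>

With this sketch, encode([1,2,3], "AB") produces the two data chunks and their XOR, and decode([2], chunks 1 and 3) recovers "B", mirroring the examples above.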
h3. Work items
h3. Coding tasks
Each of the following tasks is either an issue in the Ceph tracker or a link to the erasure code design document, in which links to the tracker can be found for individual tasks. All tasks can be displayed from the Erasure encoded placement group feature request.
# Common PG components refactoring:
## Factor out PG logs (difficulty: medium)
## Factor out ObjectContext and SnapSetContext (difficulty: medium)
# Factor out object writing/replication logic:
## Peering and PG logs (difficulty: hard)
## Distinguished acting set positions (difficulty: hard)
## Scrub (difficulty: hard)
## Recovery (difficulty: hard)
## Backfill (difficulty: hard)
# Pluggable erasure coding library:
## Plugin mechanism and abstract API (difficulty: easy); a loading sketch follows this list
## jerasure plugin (difficulty: easy)
## Erasure coding library plugin API documentation, including a simple example (difficulty: easy)
# Write integration tests: https://github.com/ceph/ceph-qa-suite
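
As an illustration of the plugin mechanism task, one possible approach is to resolve the factory function from a shared library with dlopen; the library name and exported symbol below are illustrative, not the final design:

<pre>
// Possible shape of a plugin loader: open a shared library and resolve
// the factory from the API section. "libec_jerasure.so" and the symbol
// name are illustrative only.
#include <dlfcn.h>
#include <cstdio>

typedef void *(*erasure_code_factory_t)();  // simplified signature

int main() {
  void *plugin = dlopen("libec_jerasure.so", RTLD_NOW);
  if (!plugin) {
    std::fprintf(stderr, "dlopen: %s\n", dlerror());
    return 1;
  }
  erasure_code_factory_t factory = reinterpret_cast<erasure_code_factory_t>(
      dlsym(plugin, "create_erasure_code_context"));
  if (!factory) {
    std::fprintf(stderr, "dlsym: %s\n", dlerror());
    dlclose(plugin);
    return 1;
  }
  // void *context = factory();  // then used through erasure_coding_t
  dlclose(plugin);
  return 0;
}
</pre>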
h3. Build / release tasks
# Add a wip-erasure-code branch to http://github.com/ceph/ceph/
# Schedule the branch to be run by teuthology: http://ceph.com/gitbuilder.cgi
# Watch the test results on a daily basis and fix any problems
h3. Documentation tasks
# Erasure code internal documentation
# Erasure code design document
# Erasure coding library plugin API documentation, including a simple example