Erasure coded storage backend (step 2)


See also the [[|transcript]] of the session and the associated pad.
As of Dumpling, Ceph keeps full copies of objects so as not to lose them when hardware fails. This is an efficient solution for Ceph block storage, but when performance matters less than cost, erasure codes achieve significant savings. For instance, instead of buying 3PB to store 1PB, it is enough to buy 1.5PB and get better resilience. Erasure coding an object works by splitting it into K data chunks (say 10) and computing M parity chunks (say 3). By storing each of the 13 chunks on a different disk, the original object can be rebuilt even if 3 of them fail. This is better than the resilience provided by three copies and requires only half the space, but it comes at a price: it uses more CPU and bandwidth, and it is more complex to design.
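The trade-off can be made concrete with the degenerate M=1 case, where the single parity chunk is just the byte-wise XOR of the data chunks (a sketch for illustration only; `xor_parity` is a made-up name, and a real backend uses Reed-Solomon codes that support M > 1):

```cpp
#include <string>
#include <vector>

// Minimal illustration (not Ceph code): with M=1 the parity chunk is the
// byte-wise XOR of the K data chunks, so any single missing chunk can be
// rebuilt by XOR-ing the surviving ones. Reed-Solomon codes, as provided
// by the jerasure library, generalize this idea to M > 1 parity chunks.
std::string xor_parity(const std::vector<std::string> &chunks) {
    std::string parity(chunks[0].size(), '\0');
    for (const std::string &c : chunks)
        for (size_t i = 0; i < c.size(); ++i)
            parity[i] ^= c[i];
    return parity;
}
```

With data chunks AB and CD, the parity chunk is their XOR; if AB is lost, XOR-ing CD with the parity chunk rebuilds it.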
Use cases such as genomic data, backups, and others that require cheap hardware and extremely resilient storage, written once or twice and rarely read, are a perfect match for an erasure coded storage backend. Ceph was originally designed to include RAID4 as an alternative to replication; that work, suspended for years, was resumed after the first Ceph summit in May 2013.


  • Loic Dachary
  • Samuel Just (Inktank)

Current Status

Detailed Description

Design Doc:
Currently wip PGBackend interface:
Developer notes:
Erasure coding or replication ( pg_pool_t::TYPE_REP ) is a property of a pool and is currently implemented in PG.{cc,h} and ReplicatedPG.{cc,h}. Implementing erasure coding requires a refactoring to allow multiple backends. The first step is to factor out of ReplicatedPG the code that is independent of the resilience policy.
  • PGLog is mostly done: it includes the missing set as well as the logic to merge logs during recovery. It is already in master but does not yet have a sensible API.
  • PGRegistry groups the ObjectContext, SnapSetContext and Watchers registries. The API is mostly good and the implementation is being cleaned up. The erasure code backend won't use the SnapSetContext because snapshots are not supported. PG knows nothing about this registry. Once the #5510 pull request is merged, it will be ready to be moved to a PGBackend base class.
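As a rough sketch of the intent (the class bodies below are placeholders, and the actual Ceph implementation differs), grouping the registries behind one class gives a future PGBackend base class a single member to own:

```cpp
#include <map>
#include <memory>
#include <string>

// Hypothetical sketch (not the actual Ceph code): PGRegistry groups the
// per-PG lookup tables that ReplicatedPG currently manages separately,
// so a PGBackend base class can own them in one place.
struct ObjectContext {};
struct SnapSetContext {};
struct Watcher {};

class PGRegistry {
    std::map<std::string, std::shared_ptr<ObjectContext>> object_contexts;
    std::map<std::string, std::shared_ptr<SnapSetContext>> snapset_contexts;
    std::map<std::string, Watcher> watchers;
public:
    // Return the context for an object, creating it on first lookup so
    // concurrent operations on the same object share one context.
    std::shared_ptr<ObjectContext> get_object_context(const std::string &oid) {
        auto &ctx = object_contexts[oid];
        if (!ctx)
            ctx = std::make_shared<ObjectContext>();
        return ctx;
    }
    size_t size() const { return object_contexts.size(); }
};
```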
Other parts of the code need to be reworked so that ReplicatedPG only contains the parts specific to replication. Some code needs to be moved between ReplicatedPG and PG; where moving code is not enough, it will be refactored to isolate the parts that belong to ReplicatedPG only. The design document contains an inventory of what needs to be abstracted into PGBackend and the associated rationale.
Once the ReplicatedPG implementation of a given concept ( backfill, recovery, scrub, peering, read, write ) is reasonably free of code common to both erasure coding and replication, the PG implementation will be severed from the ReplicatedPG subclass, changed to use a PGBackend data member, and will no longer be aware of the resilience policy.
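The target shape can be sketched as follows (a hypothetical illustration: the method names are invented here, and the actual interface lives in the wip PGBackend branch):

```cpp
#include <string>

// Illustrative sketch only: PG delegates resilience-specific operations
// through a common base class and never needs to know which policy a
// pool uses. The actual wip PGBackend interface differs.
class PGBackend {
public:
    virtual ~PGBackend() {}
    virtual std::string policy() const = 0;  // hypothetical hook
};

class ReplicatedBackend : public PGBackend {
public:
    std::string policy() const override { return "replicated"; }
};

class ErasureCodedBackend : public PGBackend {
public:
    std::string policy() const override { return "erasure-coded"; }
};
```

PG would hold a `PGBackend *` data member and call through it for backfill, recovery, scrub, peering, reads and writes.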
The erasure code plugin library can be implemented with the following API and the jerasure library:
  • erasure_coding_t * create_erasure_code_context(g_conf)
    return an object configured to encode and decode according to a given algorithm and a given set of parameters as specified in g_conf. Parameters must be prefixed with erasure-code to avoid name collisions:
    ceph osd pool set-erasure-code <pool> k 10
    ceph osd pool set-erasure-code <pool> m 3
    ceph osd pool set-erasure-code <pool> algorithm Reed-Solomon
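A possible sketch of how create_erasure_code_context() could honor that prefix, assuming g_conf is exposed as a key/value map (`erasure_code_parameters` and the exact `erasure-code-*` key spelling are assumptions, not the actual Ceph API):

```cpp
#include <map>
#include <string>

// Hypothetical sketch: only keys carrying the "erasure-code-" prefix are
// handed to the plugin, which avoids collisions with other pool
// properties stored in the same configuration.
std::map<std::string, std::string>
erasure_code_parameters(const std::map<std::string, std::string> &conf) {
    const std::string prefix = "erasure-code-";
    std::map<std::string, std::string> out;
    for (const auto &kv : conf)
        if (kv.first.compare(0, prefix.size(), prefix) == 0)
            out[kv.first.substr(prefix.size())] = kv.second;  // strip prefix
    return out;
}
```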
The following are methods of erasure_coding_t.
  • set<int> minimum_to_decode(const set<int> &want_to_read, const set<int> &available_chunks);
    returns the smallest subset of available_chunks that must be retrieved in order to successfully decode the want_to_read chunks.
  • set<int> minimum_to_decode_with_cost(const set<int> &want_to_read, const map<int, int> &available)
    returns the minimum cost set of chunks required to read the specified chunks, given a map of available chunks to their costs. The costs can reflect, for example, the difference between reading local chunks and remote chunks.
  • map<int, buffer> encode(const set<int> &want_to_encode, const buffer &in)
    encode the content of in and return a map associating the chunk number with its encoded content. The map only contains the chunks present in the want_to_encode set. For instance, in the simplest case K=2,M=1, for a buffer containing AB, calling
    encode([1,2,3], 'AB') => { 1 => 'A', 2 => 'B', 3 => 'Z' }

    If only the parity chunk is of interest, calling
    encode([3], 'AB') => { 3 => 'Z' }
  • map<int, buffer> decode(const set<int> &want_to_read, const map<int, buffer> &chunks)
    decode chunks to read the content of the want_to_read chunks and return a map associating the chunk number with its decoded content. For instance, in the simplest case K=2,M=1, for an encoded payload of data A and B with parity Z, calling
    decode([1,2], { 1 => 'A', 2 => 'B', 3 => 'Z' }) => { 1 => 'A', 2 => 'B' }

    If, however, chunk B is to be read but is missing, it will be:
    decode([2], { 1 => 'A', 3 => 'Z' }) => { 2 => 'B' }
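The four methods can be tied together in a toy model of the interface, using the same K=2,M=1 layout as the examples above with an XOR parity chunk (`std::string` stands in for buffer; `ToyErasureCode` is illustrative only, as the real plugin would delegate to jerasure):

```cpp
#include <algorithm>
#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>

// Toy model of erasure_coding_t with K=2 data chunks (1 and 2) and M=1
// parity chunk (3), where chunk 3 is the XOR of chunks 1 and 2. It mirrors
// the 'AB' examples: chunk 1 = 'A', chunk 2 = 'B', chunk 3 = 'A' ^ 'B'.
struct ToyErasureCode {
    static const unsigned K = 2;

    // Smallest subset of available chunks needed to decode want_to_read:
    // the wanted chunks themselves if they all survived, else any K chunks.
    std::set<int> minimum_to_decode(const std::set<int> &want_to_read,
                                    const std::set<int> &available) {
        if (std::includes(available.begin(), available.end(),
                          want_to_read.begin(), want_to_read.end()))
            return want_to_read;
        std::set<int> out;
        for (int c : available) {
            if (out.size() == K) break;
            out.insert(c);
        }
        return out;
    }

    // Same idea, but prefer cheap chunks (e.g. local over remote).
    std::set<int> minimum_to_decode_with_cost(const std::set<int> &want_to_read,
                                              const std::map<int, int> &available) {
        std::set<int> avail;
        for (const auto &kv : available) avail.insert(kv.first);
        if (std::includes(avail.begin(), avail.end(),
                          want_to_read.begin(), want_to_read.end()))
            return want_to_read;
        std::vector<std::pair<int, int>> by_cost;  // (cost, chunk)
        for (const auto &kv : available) by_cost.push_back({kv.second, kv.first});
        std::sort(by_cost.begin(), by_cost.end());
        std::set<int> out;
        for (const auto &p : by_cost) {
            if (out.size() == K) break;
            out.insert(p.second);
        }
        return out;
    }

    // Split the 2-byte input into chunks 1 and 2, parity in chunk 3.
    std::map<int, std::string> encode(const std::set<int> &want_to_encode,
                                      const std::string &in) {
        std::map<int, std::string> all = {
            {1, in.substr(0, 1)},
            {2, in.substr(1, 1)},
            {3, std::string(1, char(in[0] ^ in[1]))}};
        std::map<int, std::string> out;
        for (int c : want_to_encode) out[c] = all[c];
        return out;
    }

    std::map<int, std::string> decode(const std::set<int> &want_to_read,
                                      const std::map<int, std::string> &chunks) {
        std::map<int, std::string> out;
        for (int c : want_to_read) {
            auto it = chunks.find(c);
            if (it != chunks.end()) {
                out[c] = it->second;  // chunk survived, return it directly
            } else {
                char x = 0;           // rebuild by XOR-ing the surviving chunks
                for (const auto &kv : chunks) x ^= kv.second[0];
                out[c] = std::string(1, x);
            }
        }
        return out;
    }
};
```

The XOR rebuild in decode() works here because any two of the three chunks determine the third; a real Reed-Solomon plugin performs the equivalent computation for arbitrary K and M.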

Work items

Coding tasks

Each of the following tasks is either an issue in the Ceph tracker or a link to the erasure code design document, in which links to the tracker can be found for individual tasks. All tasks can be displayed from the Erasure encoded placement group feature request.
  1. Common PG components refactoring:
    1. Factor out PG logs (difficulty: medium)
    2. Factor out ObjectContext SnapSetContext (difficulty: medium)
  2. Factor out object writing/replication logic
    1. Peering and PG logs (difficulty: hard)
    2. Distinguished acting set positions (difficulty: hard)
    3. Scrub (difficulty: hard)
    4. Recovery (difficulty: hard)
    5. Backfill (difficulty: hard)
  3. Pluggable erasure coding library
    1. plugin mechanism and abstract API (difficulty: easy)
    2. jerasure plugin (difficulty: easy)
    3. erasure coding library plugin API documentation, including a simple example (difficulty: easy)
  4. Write integration tests

Build / release tasks

  1. Add a wip-erasure-code branch to
  2. Schedule the branch to be run by teuthology
  3. Watch the test results on a daily basis and fix the problems

Documentation tasks

  1. Erasure code internal documentation
  2. erasure code design document
  3. erasure coding library plugin API documentation, including a simple example