1C - Erasure Encoding as a Storage Backend

Live Pad

The live pad can be found here: [pad]

Summit Snapshot

Erasure encoded placement group / pool

  • PG/ReplicatedPG API
    • The goal is not to factor out a base class from which an ErasureEncodedPG could be derived; it is to reverse engineer the PG API
    • PG/ReplicatedPG are really a single class, although they grew from two different classes back when RAID4 was to be implemented: the difference between the two gradually disappeared
    • Define an API class ( IPG ? ) for PG/ReplicatedPG ( a sketch follows this list item )
    • Change the code using PG/ReplicatedPG to use the API class rather than the actual PG/ReplicatedPG classes
      • this may involve modifying the code of the calling classes to use accessors when data members are referenced
      • the callers are not otherwise modified, to minimize the change
      • it is assumed that the API is defined by what is actually used; no attempt is made to improve it
    • Tests are written for the API to cover 100% of the LOC and most of the expected functionality implemented by PG/ReplicatedPG.
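    A minimal sketch of what such an API class could look like, assuming a C++ abstract interface. The name IPG comes from the note above, but every member shown here is a guess at what callers might actually use, not the real PG interface:

      #include <memory>

      // Stand-ins for Ceph types, only to keep the sketch self-contained.
      struct OpRequest; using OpRequestRef = std::shared_ptr<OpRequest>;
      struct pg_info_t;

      // IPG: the reverse-engineered PG API (name and members are guesses).
      class IPG {
       public:
        virtual ~IPG() = default;
        // accessors replacing direct data-member references in calling code
        virtual const pg_info_t &get_info() const = 0;
        virtual bool is_primary() const = 0;
        // entry points the OSD currently invokes on PG/ReplicatedPG directly
        virtual void do_request(OpRequestRef op) = 0;
      };

      // PG/ReplicatedPG then implements IPG; later, so does ErasureCodedPG.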
  • Factor reusable components out of PG/ReplicatedPG and have PG/ReplicatedPG and ErasureCodedPG share only those components and a common PG API.
    • Advantages:
      • We constrain the PG implementations less while still allowing reuse of some of the common logic.
      • Individual components can be tested without needing to instantiate an entire PG.
      • We will realize benefits from better testing as each component is factored out, independently of implementing ErasureCodedPG.
    • Some possible common components:
      • Peering State Machine: Currently, this is tightly coupled with the PG class. Instead, it becomes a separate component responsible for orchestrating the peering process with a PG implementation via the PG interface. This would allow us to test specific behavior without creating an OSD or a PG.
      • ObjectContexts, object context tracking: this probably includes read/write lock tracking for objects
      • Repop state?: not sure about this one, might be too different to generalize between ReplicatedPG and ErasureCodedPG
      • PG logs, PG missing: The logic for merging an authoritative PG log with another PG log while filling in the missing set would benefit massively from being testable separately from a PG instance ( a test sketch follows this list item ). It's possible that the stripes involved in ErasureCodedPG will make this impractical to generalize.
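    To illustrate the testability argument, here is a toy sketch of a PG-log merge written as a free function plus a standalone test. The types and semantics are invented for the example; the real pg_log_t / pg_missing_t logic is far more involved:

      #include <cassert>
      #include <map>
      #include <string>
      #include <vector>

      // Toy stand-ins for pg_log_t / pg_missing_t.
      struct Entry { std::string oid; unsigned version; };
      using Log = std::vector<Entry>;
      using Missing = std::map<std::string, unsigned>; // oid -> version needed

      // Merge an authoritative log into a local one, recording objects whose
      // authoritative version is newer than what the local log has.
      Missing merge_log(const Log &authoritative, Log &local) {
        Missing missing;
        std::map<std::string, unsigned> have;
        for (const auto &e : local)
          have[e.oid] = e.version;
        for (const auto &e : authoritative) {
          if (have[e.oid] < e.version) {
            missing[e.oid] = e.version;
            local.push_back(e);
          }
        }
        return missing;
      }

      int main() { // the point: no OSD and no PG needed to test the merge
        Log authoritative = {{"a", 2}, {"b", 1}};
        Log local = {{"a", 1}};
        Missing m = merge_log(authoritative, local);
        assert(m.size() == 2 && m["a"] == 2 && m["b"] == 1);
      }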
  • To isolate Ceph from the actual library being used ( zfec, fecpp, ... ), a wrapper around the erasure encoding library is implemented. Each block of data is encoded into k data blocks and m parity blocks ( an interface sketch follows this list item )
    • encode(void* data, k, m) => void* data[k], void* parity[m]
    • decode(void* data[k], void* parity[m]) => void* data
    • repair(void* data[k], void* parity[m], indices_of_damaged_blocks[]) => void* data
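    One way the wrapper could look in C++, hedged: the class name, the Block type and the exact parameter shapes below are assumptions made to keep the sketch compilable, not the actual interface (the real one would presumably traffic in bufferlists):

      #include <cstdint>
      #include <vector>

      using Block = std::vector<uint8_t>;

      // Hypothetical wrapper isolating Ceph from the erasure coding library
      // ( zfec, fecpp, ... ); one concrete subclass per library.
      class ErasureCoder {
       public:
        virtual ~ErasureCoder() = default;
        // encode(data, k, m) => k data blocks + m parity blocks
        virtual void encode(const Block &data, unsigned k, unsigned m,
                            std::vector<Block> &out_data,
                            std::vector<Block> &out_parity) = 0;
        // decode(data[k], parity[m]) => original data
        virtual Block decode(const std::vector<Block> &data,
                             const std::vector<Block> &parity) = 0;
        // repair(data[k], parity[m], indices of damaged blocks)
        // => damaged blocks rebuilt in place
        virtual void repair(std::vector<Block> &data,
                            std::vector<Block> &parity,
                            const std::vector<unsigned> &damaged) = 0;
      };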
  • The ErasureEncodedPG configuration is set to encode each object into k data objects and m parity objects.
    • It uses the parity ('INDEP') crush mode so that placement is intelligent: indep placement avoids moving a shard between ranks, because a mapping of [0,1,2,3,4] will change to [0,6,2,3,4] (or something similar) if osd.1 fails, and the shards on 2,3,4 won't need to be copied around.
    • The ErasureEncodedPG uses k + m OSDs, numbered D0 .. Dk-1 and C0 .. Cm-1
    • Each object is split into stripes
    • Each stripe has a fixed size of B bytes ( the stripe arithmetic is sketched after this list item )
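    Given that geometry, mapping an (offset, length) request to the stripes it touches is simple arithmetic; a minimal sketch (B and the function name are illustrative):

      #include <cstdint>
      #include <cstdio>

      // Each stripe covers B bytes of logical object data; stripe i spans
      // [i*B, (i+1)*B). Assumes length > 0.
      struct Stripes { uint64_t first, count; };

      Stripes stripes_for(uint64_t offset, uint64_t length, uint64_t B) {
        uint64_t first = offset / B;
        uint64_t last = (offset + length - 1) / B;
        return {first, last - first + 1};
      }

      int main() {
        // With B = 4096, a 100-byte write at offset 4000 crosses a stripe
        // boundary: stripes 0 and 1 must both be read, decoded, modified,
        // re-encoded and rewritten.
        Stripes s = stripes_for(4000, 100, 4096);
        printf("first=%llu count=%llu\n",
               (unsigned long long)s.first, (unsigned long long)s.count);
      }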
  • ErasureEncodedPG implementation ( the write path is sketched in code after this list item )
    • Write offset, length
      • read the stripes containing offset, length
      • for each stripe, decode(void* data[k], void* parity[m]) => void* data and append to a bufferlist
      • modify the bufferlist with the write request
      • encode(void* data, k, m) => void* data[k], void* parity[m]
      • write data0 to D0, data1 to D1 ... data[k-1] to Dk-1 and parity0 to C0 ... parity[m-1] to Cm-1
    • Read offset, length
      • read the stripes containing offset, length
      • for each stripe, decode(void* data[k], void* parity[m]) => void* data and append to a bufferlist
    • Object attributes
      • duplicate the object attributes on each OSD
    • Scrubbing
      • for each object, read each stripe and write back if a repair was necessary
    • Repair
      • when an OSD is decommissioned and another OSD replaces it, for each object contained in an ErasureEncodedPG using this OSD, read the object, repair each stripe and write back the shard that resides on the new OSD
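    A sketch of the write (offset, length) path above, reusing the hypothetical ErasureCoder and stripes_for() from the earlier sketches; read_data_blocks(), read_parity_blocks() and write_blocks() are invented stand-ins for the actual OSD I/O:

      #include <algorithm>

      // Invented I/O stand-ins: fetch/store the blocks of stripe i.
      std::vector<Block> read_data_blocks(uint64_t i, unsigned k);
      std::vector<Block> read_parity_blocks(uint64_t i, unsigned m);
      void write_blocks(uint64_t i, const std::vector<Block> &data,
                        const std::vector<Block> &parity);

      void write_op(ErasureCoder &ec, uint64_t offset, uint64_t length,
                    const Block &payload, uint64_t B, unsigned k, unsigned m) {
        Stripes s = stripes_for(offset, length, B);
        Block buf; // plays the role of the bufferlist
        for (uint64_t i = s.first; i < s.first + s.count; ++i) {
          // read + decode every stripe touched by the request
          Block clear = ec.decode(read_data_blocks(i, k),
                                  read_parity_blocks(i, m));
          buf.insert(buf.end(), clear.begin(), clear.end());
        }
        // modify the decoded bytes with the write request
        std::copy(payload.begin(), payload.end(),
                  buf.begin() + (offset - s.first * B));
        for (uint64_t i = s.first; i < s.first + s.count; ++i) {
          // re-encode and write data[j] to Dj, parity[j] to Cj
          Block stripe(buf.begin() + (i - s.first) * B,
                       buf.begin() + (i - s.first + 1) * B);
          std::vector<Block> data, parity;
          ec.encode(stripe, k, m, data, parity);
          write_blocks(i, data, parity);
        }
      }

    Note how a write-full op (see the interface question below) could skip the read/decode loop entirely and go straight to the encode-and-write loop.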
  • SJ - interface
    • Do we want to restrict the librados writes to just write full? For writes, write full can be implemented much more efficiently than partial writes (no need to read stripes).
    • xattr can probably be handled by simply replicating across stripes.
    • omap options:
      • disable
      • erasure code??
      • replicate across all stripes - good enough for applications using omap only for limited metadata
    • How do we handle object classes? A read might require a round trip to replicas to fulfill, we probably don't want to block in the object class code during that time. Perhaps we only allow reads from xattrs and omap entries from the object class?
  • SJ - random stuff
    • PG temp mappings need to be able to specify a primary independently of the acting set order (stripe assignment, really). This is necessary to handle backfilling a new acting0.
    • An osd might have two stripes of the same PG due to a history as below. This could be handled by allowing independent PG objects representing each stripe to coexist on the same OSD.
      • [0,3,6]
      • [1,3,6]
      • [9,3,0]
    • hobject_t and associated encodings/stringifications need a stripe field ( a sketch follows this list )
    • OSD map needs to track stripe as well as pg_t
    • split is straightforward -- yay
    • changing k, m is not easy
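    A minimal sketch of the stripe-aware identifier this implies; the struct name and layout are invented here (the real pg_t / hobject_t live in the Ceph tree, and the stripe member is the proposed addition, not existing code):

      #include <cstdint>
      #include <map>
      #include <tuple>

      struct stripe_pg_id {
        uint64_t pool;
        uint32_t seed;   // pg within the pool
        uint8_t  stripe; // which shard of the erasure coded PG
        bool operator<(const stripe_pg_id &o) const {
          return std::tie(pool, seed, stripe)
               < std::tie(o.pool, o.seed, o.stripe);
        }
      };

      // An OSD that ends up with two stripes of the same PG (the history
      // [0,3,6] -> [1,3,6] -> [9,3,0] above leaves osd.0 with a stale
      // stripe 0 and a current stripe 2) simply holds two independent PG
      // objects keyed by stripe_pg_id.
      std::map<stripe_pg_id, void *> pg_map; // id -> PG instance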
Use cases:
  1. write full object
  2. append to existing object?
  3. pluggable algorithm
  4. single-dc store (lower redundancy overhead)
  5. geo-distributed store (better durability)

Questions:

object stripe unit size.. per-object or per-pool? => may as well be per-object, maybe with a pool (or algorithm) default?

Work items:

clean up OSD -> pg interface
factor out common PG pieces (obc tracking, pg log handling, etc.)
...
profit!
