
Towards Ceph Cold Storage

Summary

We'd like to continue the discussion about cold storage support with possible implementation ideas.

Owners

Interested Parties

Current Status

Detailed Description

We think the following four features are necessary to build a Ceph cold storage system. The user stories below describe how they relate to each other.

1. CRUSH: Energy Aware Buckets

First of all, we need the power state of every OSD. Using this information we could teach CRUSH to:
  • Favor OSDs that are powered up
  • Actively allow OSDs to power down by assigning weights that prevent certain OSDs from being selected, for example in a time-based round-robin fashion

Example: with three buckets and two replicas, two of the buckets are powered up and one is powered down. The placement algorithm only selects the two powered-up buckets. After an hour in this configuration, one of the up buckets switches to down and the down bucket becomes up. It may also be a good idea to move the primary role for the affected PGs over to OSDs in the powered-on buckets.
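Such a rotation could be prototyped from outside CRUSH with the existing reweight command. A minimal sketch, assuming hypothetical host buckets and OSD IDs (all names, IDs, and the period are placeholders):

  import subprocess
  import time

  # Hypothetical buckets and their OSDs; in a real cluster these would be
  # read from the CRUSH map.
  BUCKETS = {
      "host-a": ["osd.0", "osd.1"],
      "host-b": ["osd.2", "osd.3"],
      "host-c": ["osd.4", "osd.5"],
  }
  PERIOD = 3600  # seconds; rotate which bucket is powered down once per hour

  def set_weight(osd, weight):
      # "ceph osd crush reweight" is the standard CLI for changing the CRUSH
      # weight of a single OSD; weight 0 excludes it from placement, so its
      # disks can spin down.
      subprocess.check_call(
          ["ceph", "osd", "crush", "reweight", osd, str(weight)])

  names = list(BUCKETS)
  down = 0
  while True:
      for i, name in enumerate(names):
          weight = 0.0 if i == down else 1.0
          for osd in BUCKETS[name]:
              set_weight(osd, weight)
      time.sleep(PERIOD)
      down = (down + 1) % len(names)  # next bucket takes its turn to sleep

Note that reweighting from the outside like this causes data migration on every rotation; avoiding that is precisely why the energy awareness needs to live inside CRUSH itself.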

2. Object Stubs - Links to external storage

Basically: objects without their data, but with the information needed to retrieve that data.
Two features are necessary:
  1. Objects store references to external storage
  2. OSDs have a fetcher to retrieve external data
Clients must never access external storage directly. On access to an externalized object, the OSD's fetcher retrieves the data and re-integrates the object into the active storage pool.
Examples of external storage systems:
  • LONESTAR RAID
  • Ethernet drives
  • Tapes
  • Cloud Storage
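A minimal sketch of what a stub and the fetcher might look like; the field names, the JSON encoding, and the fetchers registry are all assumptions made for illustration:

  import json

  def make_stub(oid, backend, location, size, checksum):
      """Replace an object's data with a reference to external storage."""
      return json.dumps({
          "stub": True,
          "backend": backend,    # e.g. "tape", "ethernet-drive", "cloud"
          "location": location,  # backend-specific address of the data
          "size": size,          # original size, so stat() can keep working
          "checksum": checksum,  # to verify the data after re-integration
      }).encode()

  def fetch(stub_bytes, fetchers):
      """OSD-side fetcher: resolve a stub and pull the data back in."""
      stub = json.loads(stub_bytes)
      data = fetchers[stub["backend"]].get(stub["location"])
      if len(data) != stub["size"]:
          raise IOError("archive returned truncated object: " + stub["location"])
      return data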

3. Archiving daemon

Most HSM systems employ a data promotion and demotion daemon. A file could, for example, be demoted to slower storage after it has not been accessed for a certain time. Using the Object Stubs described above, this daemon could move cold data to the external archive system and create references.
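A minimal daemon sketch using the python-rados bindings. RADOS stat() only exposes a modification time, so mtime stands in for access time here; "archive" stands for a hypothetical client of the external system, and make_stub() is the helper sketched in the Object Stubs section:

  import hashlib
  import time

  import rados

  COLD_AFTER = 30 * 24 * 3600  # demote after 30 days untouched (placeholder)

  cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
  cluster.connect()
  ioctx = cluster.open_ioctx("warm-pool")  # pool name is a placeholder

  for obj in ioctx.list_objects():
      size, mtime = ioctx.stat(obj.key)  # (bytes, modification time)
      if time.time() - time.mktime(mtime) < COLD_AFTER:
          continue  # still warm, leave it alone
      data = ioctx.read(obj.key, size)
      location = archive.put(obj.key, data)  # hypothetical archive client
      stub = make_stub(obj.key, "tape", location, size,
                       hashlib.sha256(data).hexdigest())
      ioctx.write_full(obj.key, stub)  # the stub now stands in for the data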

4. Archive System OSDs

So far OSDs mostly support filesystems and key-value stores as object storage backends. For archive storage it could be useful to add object stores that interact directly with an archive system.
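A minimal sketch of the shape such a backend could take; this interface is an assumption for illustration, not the actual OSD ObjectStore API:

  class ArchiveObjectStore:
      """Hypothetical OSD backend that delegates storage to an archive system."""

      def __init__(self, archive):
          self.archive = archive  # e.g. a tape library or LONESTAR controller

      def write(self, oid, data):
          # Redundancy and power management are left to the archive system.
          self.archive.put(oid, data)

      def read(self, oid):
          return self.archive.get(oid)

      def stat(self, oid):
          return self.archive.info(oid)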

User Stories

Object Stubs, Archiving Daemon, External Archive System

The primary (warm) Ceph system serves clients directly. The archive system uses a different storage technology such as LONESTAR. The archiving daemon periodically scans through object metadata and moves cold objects to the archive system, replacing the former warm objects with stubs pointing to their location in the archive system. (There still remains the problem of how to effectively place data across multiple archive systems.)

Object Stubs, Archiving Daemon, Archive System OSDs

The archive system uses Ceph. Ceph is configured to provide object placement but not redundancy. OSDs use an object store backend that handles energy efficiency and redundancy itself, such as the LONESTAR RAID.
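Placement without redundancy boils down to a pool with a single replica. A minimal sketch using the python-rados bindings (the pool name is a placeholder); the same can be done with "ceph osd pool set" on the command line:

  import json

  import rados

  cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
  cluster.connect()
  cluster.create_pool("archive")

  # Equivalent to "ceph osd pool set archive size 1": keep a single copy,
  # CRUSH decides where it goes, and the backend provides the redundancy.
  cmd = {"prefix": "osd pool set", "pool": "archive", "var": "size", "val": "1"}
  ret, out, err = cluster.mon_command(json.dumps(cmd), b"")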

Object Stubs, Archiving Daemon, Ceph Archive System

Use the energy-aware placement strategies described above. Ceph is configured to provide both placement and redundancy.

Work items

Coding tasks

  1. Task 1
  2. Task 2
  3. Task 3

Build / release tasks

  1. Task 1
  2. Task 2
  3. Task 3

Documentation tasks

  1. Task 1
  2. Task 2
  3. Task 3

Deprecation tasks

  1. Task 1
  2. Task 2
  3. Task 3