
h1. Towards Ceph Cold Storage 

 h3. Summary 

 We'd like to continue the discussion about cold storage support with possible implementation ideas. 
 
 h3. Owners 

 * Matthias Grawinkel (Johannes Gutenberg-Universität Mainz, grawinkel@uni-mainz.de) 
 * Marcel Lauhoff (Universität Paderborn, Student, ml@irq0.org) 

 h3. Interested Parties 

 h3. Current Status 

 * Ideas 
 * Master's thesis topic 
 * Related Blueprint: https://wiki.ceph.com/Planning/Blueprints/%3CSIDEBOARD%3E/Cold_Storage_Pools 

 h3. Detailed Description 

 We think the following four features are necessary to build a Ceph cold storage system. The user stories below describe how they relate to each other. 

 h4. 1. CRUSH: Energy Aware Buckets 

First of all, we need the power state of every OSD. Using this information we could teach CRUSH to:
* Favor OSDs that are powered up
* Actively allow OSDs to power down by assigning weights that prevent certain OSDs from being selected, for example in a time-based round-robin fashion.
 
Example: with three buckets and two replicas, two of the buckets are powered up and one is powered down. The placement algorithm only selects the two powered-up buckets. After an hour in this configuration, one of the up buckets switches to down and the down bucket becomes up. It may also be a good idea to move the primary OSD role for PGs over to the powered-on buckets.
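A minimal sketch of the weight-based variant, assuming the power schedule is maintained outside of Ceph; the only existing Ceph functionality used is the @ceph osd crush reweight@ command, while the power groups, OSD names, and rotation period are made-up examples.

<pre><code class="python">
import subprocess
import time

# Sketch: time-based round robin that lets one group of OSDs power down at a
# time by setting its CRUSH weight to 0. Only `ceph osd crush reweight` is
# real Ceph functionality; the groups, OSD names, and period are examples,
# and an external mechanism would have to actually spin the drives down.
POWER_GROUPS = {
    "groupA": ["osd.0", "osd.1"],
    "groupB": ["osd.2", "osd.3"],
    "groupC": ["osd.4", "osd.5"],
}
ACTIVE_WEIGHT = 1.0      # normal CRUSH weight while powered up
SLEEP_WEIGHT = 0.0       # weight 0 keeps CRUSH from selecting the OSD
ROTATION_PERIOD = 3600   # seconds each group stays powered down

def reweight(osd, weight):
    subprocess.check_call(["ceph", "osd", "crush", "reweight", osd, str(weight)])

while True:
    for cold_group in POWER_GROUPS:
        for group, osds in POWER_GROUPS.items():
            weight = SLEEP_WEIGHT if group == cold_group else ACTIVE_WEIGHT
            for osd in osds:
                reweight(osd, weight)
        # at this point a power manager would spin down the cold group
        time.sleep(ROTATION_PERIOD)
</code></pre>

Setting the weight to 0 only keeps CRUSH from selecting an OSD for new placements; before actually powering drives down, primaries would still have to be moved away, e.g. via primary affinity.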

 h4. 2. Object Stubs - Links to external storage 

Basically: objects without their data, but with the information about where to retrieve it.
 Two features are necessary: 
 # Objects store references to external storage 
 # OSDs have a fetcher to retrieve external data 

Clients must never access external storage directly. On access to an externalized object, the OSD's fetcher retrieves the data and re-integrates the object into the active storage pool.
Examples of external storage systems:
 * LONESTAR RAID 
 * Ethernet drives 
 * Tapes 
 * Cloud Storage 
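One way to picture a stub at the RADOS level, as a rough illustration only: the object keeps its name and metadata, but its payload is dropped and a reference to the external copy is stored in an xattr. In the actual proposal this would happen inside the OSD, not in a client as below; the pool name and xattr key are made up.

<pre><code class="python">
import json
import rados

# Illustration: mark an object as a "stub" whose data lives in external storage.
STUB_XATTR = "user.cold.stub"            # hypothetical xattr key

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("warm-pool")  # hypothetical pool name

def stub_out(obj_name, external_location):
    """Replace an object's data with a reference to external storage."""
    ref = json.dumps({"backend": "lonestar", "location": external_location})
    ioctx.set_xattr(obj_name, STUB_XATTR, ref.encode())
    ioctx.write_full(obj_name, b"")      # drop the payload, keep the object

def is_stub(obj_name):
    """True if the object is only a stub pointing to external storage."""
    try:
        ioctx.get_xattr(obj_name, STUB_XATTR)
        return True
    except rados.NoData:
        return False
</code></pre>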

 h4. 3. Archiving daemon 

Most HSM systems employ a data promotion and demotion daemon. A file could, for example, be demoted to slower storage after it has not been accessed for a certain time. Using the Object Stubs described above, this daemon could move cold data to an external archive system and create references.
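A rough sketch of what the demotion half of such a daemon could look like with the librados Python bindings, reusing the @stub_out()@ helper from the stub sketch above. The age threshold, pool name, and @archive_put()@ function are placeholders for whatever the external archive system provides; note that @stat()@ only exposes the modification time, so real access-time tracking would need extra bookkeeping.

<pre><code class="python">
import time
import rados

COLD_AFTER = 30 * 24 * 3600   # example: demote after 30 days without changes

def archive_put(obj_name, data):
    """Placeholder: store data in the external archive, return its location."""
    raise NotImplementedError

def demote_cold_objects(ioctx):
    """Scan a pool and replace cold objects with stubs."""
    now = time.time()
    for obj in ioctx.list_objects():
        size, mtime = ioctx.stat(obj.key)        # mtime is a time.struct_time
        if now - time.mktime(mtime) < COLD_AFTER:
            continue                             # still warm, leave it alone
        data = ioctx.read(obj.key, length=size)
        location = archive_put(obj.key, data)
        stub_out(obj.key, location)              # see the stub sketch above

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("warm-pool")          # hypothetical pool name
demote_cold_objects(ioctx)
ioctx.close()
cluster.shutdown()
</code></pre>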

 h4. 4. Archive System OSDs 

So far, OSDs mostly support filesystem and key-value stores as object storage backends. For archive storage, it could be useful to add object stores that interact directly with an archive system.
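Purely as an illustration of the idea, the shape such a backend might take (Ceph's real ObjectStore interface is C++ and far richer; the class and the archive client calls below are hypothetical):

<pre><code class="python">
class ArchiveObjectStore:
    """Sketch of an object store backend that talks to an archive system directly."""

    def __init__(self, archive_client):
        # e.g. a LONESTAR RAID, tape library, or cloud storage client
        self.archive = archive_client

    def write(self, obj_name, data):
        # the archive system handles redundancy and power management itself
        self.archive.put(obj_name, data)

    def read(self, obj_name):
        # may block for a long time while drives spin up or tapes are loaded
        return self.archive.get(obj_name)

    def exists(self, obj_name):
        return self.archive.contains(obj_name)
</code></pre>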

h4. User Stories

 h4. Object Stubs, Archiving Daemon, External Archive System 

The primary (warm) Ceph system serves clients directly. The archive system uses a different storage technology, such as LONESTAR. The archiving daemon periodically scans object metadata and moves _cold_ objects to the archive system, replacing the former warm objects with stubs pointing to their location in the archive system. (There still remains the problem of how to effectively place data across multiple archive systems.)

h4. Object Stubs, Archiving Daemon, Archive System OSDs

The archive system uses Ceph, configured to provide object placement but not redundancy. OSDs use an object store backend that handles energy efficiency and redundancy itself, such as the LONESTAR RAID.

 h4. Object Stubs, Archiving Daemon, Ceph Archive System 

The archive system uses Ceph with energy-aware placement strategies. Ceph is configured to provide both placement and redundancy.

 h3. Work items 

 h4. Coding tasks 

 # Task 1 
 # Task 2 
 # Task 3 

 h4. Build / release tasks 

 # Task 1 
 # Task 2 
 # Task 3 

 h4. Documentation tasks 

 # Task 1 
 # Task 2 
 # Task 3 

 h4. Deprecation tasks 

 # Task 1 
 # Task 2 
 # Task 3