Adding a proprietary key-value store to Ceph as a pluggable module

Summary

Ceph's current key-value store integration works with RocksDB, LevelDB and Kinetic drives. This blueprint adds a mechanism for integrating any key-value store as a Ceph OSD backend while honoring the semantics of the object store.

Owners

  • Varada Kari (Sandisk)
  • Somnath Roy (Sandisk)

Interested Parties

  • Danny Al-Gaaf (Deutsche Telekom)

Current Status

Detailed Description

The essence of the new scheme is as follows.
  1. Implement a wrapper around the proprietary KVStore; let us call it the KVExtension. This is a shared library that implements all the interfaces required by Ceph's KeyValueStore.
  2. A new class, PropDBStore, is derived from KeyValueDB and honors the semantics of KeyValueStore and KeyValueDB. It acts as the mediator between Ceph and the KVExtension, transforming bufferlists and similar Ceph types into const char pointers or strings that the extension can understand.
  3. PropDBStore loads the KVExtension (via dlopen()) during OSD initialization. The path to the KVExtension can be specified in ceph.conf.
  4. The interfaces that need to be implemented in the KVExtension, and imported by PropDBStore, are declared in a new header called PropDBWrapper.h. This header contains the signatures of the necessary interfaces such as init(), close(), submit_transaction(), get() and get_iterator(). Similarly, iterator functionality is declared in PropDBIterator.h, which specifies the signatures of seek_to_first(), seek_to_last(), lower_bound(), upper_bound() and so on. PropDBStore includes these headers and imports the symbols using dlsym(); a sketch of such a header appears after this list.
  5. Choosing the proprietary DB as the OSD backend is controlled by Ceph configuration options (/etc/ceph/ceph.conf), just as it is for rocksdb or leveldb; see the example configuration after this list.
  6. The rest of the existing functionality is not disturbed by this change. Changing the OSD backend option changes the backend implementation, but the change is not dynamic: the backend type must be chosen at OSD creation time, and the OSD will continue to use that backend until it is reformatted.
  7. The new KVStore we are integrating works on a raw partition, so we divide the OSD drive into two partitions. One partition holds the OSD metadata (superblock, fsid, etc.), and the other is given to the new DB to manage. The OSD partition is therefore no longer the entire disk, but only the 2-4 GB needed for the metadata.
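
Below is a minimal sketch of what PropDBWrapper.h might look like. Only the entry-point names (init, close, submit_transaction, get, get_iterator, and the iterator calls) come from this blueprint; the kv_ prefix, the parameter lists and the opaque handle types are illustrative assumptions, not settled API.

    // PropDBWrapper.h -- hypothetical sketch of the C interface the
    // KVExtension shared library exports for dlsym().  Names and
    // signatures are assumptions.
    #ifndef PROP_DB_WRAPPER_H
    #define PROP_DB_WRAPPER_H

    #include <stddef.h>

    extern "C" {

    typedef void *kv_handle_t;  // opaque handle to the proprietary store
    typedef void *kv_iter_t;    // opaque iterator handle; PropDBIterator.h
                                // would declare seek_to_first(),
                                // seek_to_last(), lower_bound(),
                                // upper_bound() etc. against this type

    // Open/close the store; 'path' would be the raw partition to manage.
    int  kv_init(const char *path, kv_handle_t *out);
    void kv_close(kv_handle_t h);

    // Apply a batch of operations atomically.
    int  kv_submit_transaction(kv_handle_t h, const void *txn, size_t len);

    // Point lookup; the value is returned in a buffer the caller frees.
    int  kv_get(kv_handle_t h,
                const char *prefix,
                const char *key, size_t key_len,
                char **value, size_t *value_len);

    // Iterator over the store, wrapped by PropDBStore into a
    // KeyValueDB::Iterator.
    kv_iter_t kv_get_iterator(kv_handle_t h);

    } // extern "C"

    #endif // PROP_DB_WRAPPER_H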
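
And a hypothetical ceph.conf fragment showing how the backend could be selected. The option names are modeled on the existing keyvaluestore/rocksdb/leveldb selection; the "propdb" backend name and the "kvextension path" option are placeholders, since the blueprint only says that the path lives in ceph.conf.

    [osd]
    ; use the KeyValueStore object store for this OSD
    osd objectstore = keyvaluestore
    ; select the proprietary backend instead of leveldb or rocksdb
    keyvaluestore backend = propdb
    ; path to the KVExtension shared library loaded via dlopen()
    kvextension path = /usr/lib64/ceph/libkvextension.so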

We want to extend the implementation to make the key-value store a dynamically loadable module, as sketched below.
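
A minimal sketch of how PropDBStore might load the extension, assuming the hypothetical PropDBWrapper.h above; error handling is trimmed and only one symbol is resolved, the rest follow the same pattern.

    // Hypothetical loading path inside PropDBStore initialization.
    #include <dlfcn.h>
    #include <stdexcept>
    #include <string>

    typedef void *kv_handle_t;
    typedef int (*kv_init_fn)(const char *path, kv_handle_t *out);

    kv_handle_t load_extension(const std::string &lib_path, // from ceph.conf
                               const std::string &db_path)  // raw partition
    {
      // Load the KVExtension shared library named in ceph.conf.
      void *dl = dlopen(lib_path.c_str(), RTLD_NOW | RTLD_LOCAL);
      if (!dl)
        throw std::runtime_error(std::string("dlopen: ") + dlerror());

      // Resolve an entry point declared in PropDBWrapper.h.
      kv_init_fn kv_init =
          reinterpret_cast<kv_init_fn>(dlsym(dl, "kv_init"));
      if (!kv_init)
        throw std::runtime_error(std::string("dlsym: ") + dlerror());

      kv_handle_t handle = nullptr;
      if (kv_init(db_path.c_str(), &handle) != 0)
        throw std::runtime_error("KVExtension init failed");

      // kv_close, kv_submit_transaction, kv_get and kv_get_iterator are
      // resolved the same way and stashed in the PropDBStore instance.
      return handle;
    }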

Work items

Coding tasks

  1. Implement the abstraction to dynamically load the key-value DB.
  2. Implement the changes needed in ceph-disk.

Build / release tasks

Documentation tasks

Deprecation tasks