h1. Adding a proprietary key value store to CEPH as a pluggable module

h3. Summary

Currently, Ceph's key value store integration works with rocksdb, leveldb and Kinetic drives. This blueprint adds a mechanism by which any key value store can be integrated as a Ceph OSD backend while honoring the semantics of the object store.

h3. Owners

* Varada Kari (Sandisk)
* Somnath Roy (Sandisk)
* Name

h3. Interested Parties

* Danny Al-Gaaf (Deutsche Telekom)
* Name (Affiliation)
* Name

h3. Current Status

h3. Detailed Description

The essence of the new scheme is as follows.

1. Implement a wrapper around the proprietary KVStore; let us call it the KVExtension. This is a shared library which implements all the interfaces required by the Ceph KeyValueStore.
2. A new class called PropDBStore is derived from KeyValueDB, honoring the semantics of KeyValueStore and KeyValueDB. This class acts as the mediator between Ceph and the KVExtension, transforming bufferlists etc. into const char pointers or strings that the extension can understand (a translation sketch follows this list).
3. PropDBStore loads (dlopen) the KVExtension during OSD initialization; the path to the KVExtension can be specified in ceph.conf (see the loading sketch below).
4. The interfaces that must be implemented in the KVExtension, and imported by PropDBStore, are declared in a new header called PropDBWrapper.h. This header contains the signatures of the necessary entry points such as init(), close(), submit_transaction(), get() and get_iterator(). Similarly, PropDBIterator.h specifies the signatures of the iterator functions seek_to_first(), seek_to_last(), lower_bound(), upper_bound(), etc. PropDBStore includes these headers and imports the symbols using dlsym(). A sketch of these headers follows immediately after this list.
5. Choosing the proprietary DB as the OSD backend is controlled by Ceph config options (/etc/ceph/ceph.conf), just as for rocksdb or leveldb (see the sample ceph.conf fragment below).
6. The rest of the existing functionality is not disturbed by this change. Changing the OSD backend option changes the backend implementation, but not dynamically: the backend type must be chosen at OSD creation time, and the OSD continues to use that backend until it is reformatted.
7. The new KVStore we are trying to integrate works on a raw partition, so we divide the OSD drive into two partitions. One partition holds the OSD metadata (superblock, fsid, etc.) and the other is given to the new DB to manage. The OSD partition is therefore no longer the entire disk, but only the 2-4 GB needed for the metadata.
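
As a rough illustration, PropDBWrapper.h and PropDBIterator.h could declare entry points along the following lines. This is only a sketch: the propdb_ prefix, the exact signatures and the error conventions are assumptions for illustration, not the final interface.

<pre>
// PropDBWrapper.h / PropDBIterator.h -- illustrative sketch only.
// The KVExtension exports these with C linkage so that PropDBStore
// can resolve them with dlsym() without C++ name mangling.
#ifndef PROP_DB_WRAPPER_H
#define PROP_DB_WRAPPER_H

#include <stddef.h>

extern "C" {
  // Lifecycle: 'path' is the raw partition handed to the new DB.
  int propdb_init(const char *path);
  int propdb_close(void);

  // Point lookup; the extension allocates *value, the caller frees it.
  int propdb_get(const char *prefix,
                 const char *key, size_t keylen,
                 char **value, size_t *vallen);

  // Atomically apply a batch of operations built up by PropDBStore.
  int propdb_submit_transaction(void *txn);

  // Iterator entry points (the PropDBIterator.h part of the sketch).
  void *propdb_get_iterator(const char *prefix);
  int propdb_seek_to_first(void *iter);
  int propdb_seek_to_last(void *iter);
  int propdb_lower_bound(void *iter, const char *key, size_t keylen);
  int propdb_upper_bound(void *iter, const char *key, size_t keylen);
}

#endif
</pre>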
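
Loading the extension (step 3) could then look roughly like the following; the helper name and error handling are simplified assumptions:

<pre>
// Sketch of how PropDBStore might dlopen() the KVExtension at OSD
// initialization and resolve the entry points declared above.
#include <dlfcn.h>
#include <stdio.h>

typedef int (*propdb_init_fn)(const char *path);
typedef int (*propdb_close_fn)(void);

int load_kv_extension(const char *libpath,
                      propdb_init_fn *init_out,
                      propdb_close_fn *close_out)
{
  // RTLD_NOW resolves every symbol up front, so a broken extension
  // fails at OSD start instead of in the middle of an I/O.
  void *handle = dlopen(libpath, RTLD_NOW | RTLD_LOCAL);
  if (!handle) {
    fprintf(stderr, "dlopen(%s) failed: %s\n", libpath, dlerror());
    return -1;
  }

  dlerror();  // clear stale error state before calling dlsym()
  *init_out  = (propdb_init_fn)dlsym(handle, "propdb_init");
  *close_out = (propdb_close_fn)dlsym(handle, "propdb_close");

  const char *err = dlerror();
  if (err != NULL || *init_out == NULL || *close_out == NULL) {
    fprintf(stderr, "dlsym failed: %s\n", err ? err : "missing symbol");
    dlclose(handle);
    return -1;
  }
  return 0;  // the remaining symbols are resolved the same way
}
</pre>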
 
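The mediation described in step 2 amounts to copying bytes between Ceph bufferlists and plain buffers. A minimal sketch of a get() path, assuming the hypothetical propdb_get() above and eliding the real KeyValueDB method signatures:

<pre>
// Sketch: translating between Ceph bufferlists and the raw buffers
// the extension understands. Simplified; the real KeyValueDB::get()
// operates on sets of keys and returns a map of bufferlists.
#include <string>
#include <cstdlib>
#include "include/buffer.h"   // ceph::bufferlist
#include "PropDBWrapper.h"    // the illustrative header sketched above

int propdb_store_get(const std::string &prefix,
                     const std::string &key,
                     ceph::bufferlist *out)
{
  char *val = NULL;
  size_t vallen = 0;

  // Hand the extension plain pointers/lengths instead of bufferlists.
  int r = propdb_get(prefix.c_str(), key.data(), key.size(),
                     &val, &vallen);
  if (r < 0)
    return r;

  // Copy the raw bytes back into a bufferlist for the Ceph side.
  out->append(val, vallen);
  free(val);  // the sketch assumes the extension malloc()s the value
  return 0;
}
</pre>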
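
Backend selection (step 5) would then be a matter of ceph.conf settings, for example as below. The propdb-specific option names are hypothetical; the objectstore/backend options already exist for leveldb and rocksdb, though their exact names vary by release.

<pre>
[osd]
    # use the KeyValueStore OSD backend rather than FileStore
    osd objectstore = keyvaluestore

    # pick the proprietary DB instead of leveldb/rocksdb
    keyvaluestore backend = propdb

    # hypothetical option: where PropDBStore finds the KVExtension
    propdb extension path = /usr/lib/libkvextension.so
</pre>
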
We want to extend the implementation to make the key value store a fully dynamically loadable module.

h3. Work items

h4. Coding tasks

# Implement the abstraction to dynamically load the key value DB.
# Implement the changes needed in ceph-disk.

h4. Build / release tasks

# Task 1
# Task 2
# Task 3

h4. Documentation tasks

# Task 1
# Task 2
# Task 3

h4. Deprecation tasks

# Task 1
# Task 2
# Task 3