h1. Overview

Blueprints should be used to document proposed features and architectures, or to store canonical specifications for active work as listed in the "Ceph Tracker":http://tracker.ceph.com/.

Your blueprint should be an active document, maintained throughout the development process, with the eventual goal of becoming the canonical guide to how the feature works. Suitable blueprints will be discussed at the next available Ceph Developer Summit.

Next Summit: 03-04 March ("Infernalis")

If a call for blueprints isn't open, you can always submit a general proposal using the Generate Blueprints button.
h3. Blueprint Process

The blueprint process is as follows:

# *Create Blueprint*: Someone with a great idea writes it up in a blueprint.  Early-stage blueprints may not contain much detail, but should capture the idea with enough information to gather interested contributors.  The creator of a blueprint will usually become the owner of that blueprint, or should ensure that an owner is identified.
# *Blueprint Review*: In advance of the Ceph Developer Summit, Sage and the community team review the submitted blueprints and select the ones that will be discussed during sessions.
# *Ceph Developer Summit*: During the summit, interested parties will discuss the possible architectural approaches for the blueprint, determine the necessary work items, and begin to identify owners for them.  Sessions will be moderated by the blueprint owner, who is responsible for coordinating the efforts of those involved and providing regular updates to the community.
# *Feature Freeze*: During or (ideally) prior to the feature freeze, Sage will review the completed work and approve its inclusion in the release.
h3. Current Blueprints
* [[-Sideboard-]]
** [[annotate config option]]
** [[annotate perfcounters]]
** [[CephFS - file creation and object-level backtraces]]
** [[CephFS - Security & multiple instances in a single RADOS Cluster]]
** [[Cephfs encryption support]]
** [[CMake]]
** [[Cold Storage Pools]]
** [[osd - tiering - object redirects]]
** [[Strong AuthN and AuthZ for CephFS]]
* [[Dumpling]]
** [[A hook framework for Ceph FS operation]]
** [[Better Swift Compatability for Radosgw]]
** [[Ceph Management API]]
** [[ceph stats and monitoring tools]]
** [[Continuous OSD Stress Testing and Analysis]]
** [[Enforced bucket-level quotas in the Ceph Object Gateway]]
** [[Erasure encoding as a storage backend]]
** [[extend crush rule language]]
** [[Fallocate/Hole Punching Support for Ceph]]
** [[Fix memory leaks]]
** [[Inline data support for Ceph]]
** [[RADOS Gateway refactor into library, internal APIs]]
** [[rados namespaces]]
** [[RADOS Object Temperature Monitoring]]
** [[rados redirects]]
** [[RGW Geo-Replication and Disaster Recovery]]
** [[Scalability, Stress and Portability test]]
** [[zero-copy bufferlists]]
* [[Emperor]]
** [[Add LevelDB support to ceph cluster backend store]]
** [[Add new feature - Write Once Read Many volume]]
** [[Erasure coded storage backend (step 2)]]
** [[Increasing Ceph portability]]
** [[Kernel client read ahead optimization]]
** [[librados/objecter - smarter localized reads]]
** [[librgw]]
** [[mds - Inline data support (Step 2)]]
** [[msgr - implement infiniband support via rsockets]]
** [[osd - ceph on zfs]]
** [[osd - tiering - cache pool overlay]]
** [[rbd - cloud management platform features]]
** [[rgw - bucket level quota]]
** [[rgw - Multi-region / Disaster Recovery (phase 2)]]
** [[rgw support for swift temp url]]
** [[Specify File layout by kernel client and fuse client]]
* [[Firefly]]
** [[Ceph-Brag]]
** [[ceph-deploy]]
** [[Cephfs quota support]]
** [[Ceph CLI Experience]]
** [[Ceph deployment - ceph-deploy, puppet, chef, salt, ansible...]]
** [[Ceph Infrastructure]]
** [[Erasure coded storage backend (step 3)]]
** [[Object striping in librados]]
** [[osd - new key/value backend]]
** [[osdmap - primary role affinity]]
** [[PowerDNS backend for RGW]]
** [[rados cache pool (part 2)]]
** [[Test Automation]]