
h1. Overview

Blueprints should be used to document proposed features and architectures, or to store canonical specifications for active work as listed in the "Ceph Tracker":http://tracker.ceph.com/.

Your blueprint should be an active document that is maintained throughout the development process, with the eventual goal of becoming the canonical guide to how the feature works. Suitable blueprints will be discussed at the next available Ceph Developer Summit.

Next Summit: 03-04 March ("Infernalis")

If a call for blueprints isn't open, you can always submit a general proposal:

(Generate Blueprints button)

h3. Blueprint Process

The blueprint process is as follows:

# *Create Blueprint*: Someone with a great idea writes it up in a blueprint. Early-stage blueprints may not contain much detail, but they should provide enough information to capture the idea and gather interested contributors. The creator of a blueprint will usually become the owner of that blueprint, or should ensure that an owner is identified.
# *Blueprint Review*: In advance of the Ceph Developer Summit, Sage and the community team review the submitted blueprints and select the ones that will be discussed during sessions.
# *Ceph Developer Summit*: During the summit, interested parties will discuss the possible architectural approaches for the blueprint, determine the necessary work items, and begin to identify owners for them. Sessions will be moderated by the blueprint owner, who is responsible for coordinating the efforts of those involved and providing regular updates to the community.
# *Feature Freeze*: During or (ideally) prior to the feature freeze, Sage will review the completed work and approve its inclusion in the release.

h3. Current Blueprints

* [[-Sideboard-]]
** [[annotate config option]]
** [[annotate perfcounters]]
** [[CephFS - file creation and object-level backtraces]]
** [[CephFS - Security & multiple instances in a single RADOS Cluster]]
** [[Cephfs encryption support]]
** [[CMake]]
** [[Cold Storage Pools]]
** [[osd - tiering - object redirects]]
** [[Strong AuthN and AuthZ for CephFS]]
* [[Dumpling]]
** [[A hook framework for Ceph FS operation]]
** [[Better Swift Compatability for Radosgw]]
** [[Ceph Management API]]
** [[ceph stats and monitoring tools]]
** [[Continuous OSD Stress Testing and Analysis]]
** [[Enforced bucket-level quotas in the Ceph Object Gateway]]
** [[Erasure encoding as a storage backend]]
** [[extend crush rule language]]
** [[Fallocate/Hole Punching Support for Ceph]]
** [[Fix memory leaks]]
** [[Inline data support for Ceph]]
** [[RADOS Gateway refactor into library, internal APIs]]
** [[rados namespaces]]
** [[RADOS Object Temperature Monitoring]]
** [[rados redirects]]
** [[RGW Geo-Replication and Disaster Recovery]]
** [[Scalability, Stress and Portability test]]
** [[zero-copy bufferlists]]
* [[Emperor]]
** [[Add LevelDB support to ceph cluster backend store]]
** [[Add new feature - Write Once Read Many volume]]
** [[Erasure coded storage backend (step 2)]]
** [[Increasing Ceph portability]]
** [[Kernel client read ahead optimization]]
** [[librados/objecter - smarter localized reads]]
** [[librgw]]
** [[mds - Inline data support (Step 2)]]
** [[msgr - implement infiniband support via rsockets]]
** [[osd - ceph on zfs]]
** [[osd - tiering - cache pool overlay]]
** [[rbd - cloud management platform features]]
** [[rgw - bucket level quota]]
** [[rgw - Multi-region / Disaster Recovery (phase 2)]]
** [[rgw support for swift temp url]]
** [[Specify File layout by kernel client and fuse client]]
* [[Firefly]]
** [[Ceph-Brag]]
** [[ceph-deploy]]
** [[Cephfs quota support]]
** [[Ceph CLI Experience]]
** [[Ceph deployment - ceph-deploy, puppet, chef, salt, ansible...]]
** [[Ceph Infrastructure]]
** [[Erasure coded storage backend (step 3)]]
** [[Object striping in librados]]
** [[osd - new key/value backend]]
** [[osdmap - primary role affinity]]
** [[PowerDNS backend for RGW]]
** [[rados cache pool (part 2)]]
** [[Test Automation]]
* [[Giant]]
** [[Add CRUSH management to calamari API]]
** [[Add QoS capacity to librbd]]
** [[Add Systemtap/Dtrace static markers]]
** [[calamari - localization infrastructure, Chinese version]]
** [[-Ceph deployment - ceph-deploy, puppet, chef, salt, ansible...]]
** [[crush extension for more flexible object placement]]
** [[Diagnosability]]
** [[librados/objecter - improve threading]]
** [[Librados/Objecter trace capture and replay]]
** [[mon - dispatch messages while waiting for IO to complete]]
** [[mon - Independently dispatch non-conflicting messages]]
** [[mon - PaxosServices relying on hooks instead of hardcoded order to update/propose]]
** [[mon - Prioritize messages]]
** [[Mongoose / Civetweb frontend for RGW]]
** [[osd - create backend for seagate kinetic]]
** [[osd - Locally repairable code]]
** [[osd - tiering - new cache modes]]
** [[Pyramid Erasure Code]]
** [[-rbd - copy-on-read for clones]]
** [[rbd - Database performance]]
** [[Reference counter for protected snapshots]]
** [[rgw - compound object (phase 1)]]
** [[rgw - If-Match on user-defined metadata]]
** [[Wiki IA Overhaul]]
* [[Hammer]]
** [[Accelio RDMA Messenger]]
** [[Calamari localization]]
** [[Calamari RESTful API]]
** [[CephFS - Forward Scrub]]
** [[CephFS - Hadoop Support]]
** [[CephFS quota support discussion]]
** [[Ceph Security hardening]]
** [[Clustered SCSI target using RBD]]
** [[Diff - integrity local import]]
** [[Fixed memory layout for Message/Op passing]]
** [[How to make Ceph enterprise ready]]
** [[kerberos authn, AD authn/authz]]
** [[librados - expose checksums]]
** [[librados - support parallel reads]]
** [[librbd - shared flag, object map]]
** [[monitor - reweight near full osd autonomicly]]
** [[OSD - add flexible cache control of object data]]
** [[osd - opportunistic whole-object checksums]]
** [[osd - prepopulate pg temp]]
** [[osd - Scrub/SnapTrim IO prioritization]]
** [[osd - tiering - fine-grained promotion unit]]
** [[osd - tiering - reduce read/write latencies on cache tier miss]]
** [[osd - update Transaction encoding]]
** [[quotas vs subtrees]]
** [[rados - improve ex-/import functionality]]
** [[rbd - Copy-on-read for clones in kernel rbd client]]
** [[RBD - Mirroring]]
** [[rgw - bucket index scalability]]
** [[rgw - object versioning]]
** [[rgw - Snapshots]]
** [[Shingled Erasure Code (SHEC)]]
** [[Towards Ceph Cold Storage]]
* [[Infernalis]]
** [[Accelio xio integration with kernel RBD client for RDMA support]]
** [[Adding a proprietary key value store to CEPH as a pluggable module]]
** [[Add Metadata Mechanism To LibRBD]]
** [[A standard framework for Ceph performance profiling with latency breakdown]]
** [[cache tier improvements - hitsets, proxy write]]
** [[Calamari - How to implement high-level stories in an intelligent API]]
** [[cephfs - multitenancy features]]
** [[Ceph Governance]]
** [[-Ceph User Committee-|Ceph User Committee]]
** [[Clustered SCSI target using RBD Status]]
** [[Continue Ceph/Docker integration work]]
** [[Dynamic data relocation for cache tiering]]
** [[export rbd diff between clone and parent]]
** [[Generic support for plugins installation and upgrade]]
** [[Improve tail latency]]
** [[LMDB key/value backend for Ceph]]
** [[NewStore (new osd backend)]]
** [[OpenStack@Ceph]]
** [[openstack manila and ceph]]
** [[osd - erasure coding pool overwrite support]]
** [[osd - Faster Peering]]
** [[osd - Less intrusive scrub]]
** [[osd - rados io hints improvements]]
** [[osd - Scrub and Repair]]
** [[osd - simple ceph-mon dm-crypt key management]]
** [[osd - Tiering II (Warm->Cold)]]
** [[osd - Transactions]]
** [[rbd - kernel rbd client supports copy-on-read]]
** [[RBD Async Mirroring]]
** [[RGW - Active/Active Arch]]
** [[rgw - Hadoop FileSystem Interface for a RADOS Gateway Caching Tier]]
** [[RGW - Multitenancy]]
** [[RGW - NFS]]
** [[Romana - calamari-clients repo gets a new name]]
* [[Jewel]]
** [[add IOhint in CephFS]]
** [[Cache Tiering - Improve efficiency of read-miss operations]]
** [[Rados cache tier promotion queue and throttling]]
** [[Ceph-mesos]]
** [[cephfs - separate purge queue from MDCache]]
** [[Hadoop over Ceph RGW status update]]
** [[Improvement on the cache tiering eviction]]
** [[Messenger_-_priorities_for_Client|Messenger - priorities for Client]]
** [[Ceph 0 day for performance regression]]
** [[Optimize Newstore for massive small object storage]]
** [[rados - metadata-only journal mode]]
** [[rados - multi-object transaction support]]
** [[Tiering-enhacement]]
** [[rbd journal]]
** [[krbd exclusive locking]]
** [[rados qos]]
** [[scrub repair]]
** [[tail latency improvements]]
** [[peering speed improvements]]
** [[sloppy reads]]
** [[passive monitors]]
** [[calamari/api/hardware/storage]]
** [[CephFS Starter Tasks]]
** [[CephFS fsck Progress & Design]]
** [[let's make Calamari easier to troubleshoot]]
** [[testing - non-functional tests]]
** [[Security - CephX brute-force protection through auto-blacklisting]]
** [[PMStore - new OSD backend]]
** [[rgw new multisite sync]]
** [[rgw new multisite configuration]]
** [[rgw multi-tenancy]]
** [[systemd, non-root, selinux/apparmor]]