h1. Ceph User Committee meeting 2014-05-02

h3. Executive summary

The agenda was:
* Elections
* Red Hat and Inktank
* CephFS
* Meetups

Action: The Ceph User Committee will express the need for a Ceph Foundation, from the user perspective.
Action: Patrick McGarry will organize a Red Hat acquisition meeting in two weeks' time.

Note: Patrick McGarry participated in the meeting and answered questions, as can be read in the log below. The executive summary focuses on the points raised by users rather than attempting to summarize the dialog.

h3. Elections

The election will happen this month; details here: https://wiki.ceph.com/Community/2014-04_Ceph_User_Committee_Elections

h3. Red Hat and Inktank

Positive: Red Hat's stewardship of other projects seems fine.
Concern: support for non-RHEL operating systems.
Positive: better support for Fedora.
Concern: the project is sold to Red Hat, engineers and trademark included; the people who were in charge now answer to someone else.
Hope: an InfiniBand support boost from Red Hat.
Positive: Red Hat has experience with maintaining production-ready software and supporting customers.
Concern: what does the acquisition mean from the point of view of Inktank customers?
Positive: greater potential for development gains between Ceph and KVM.
Concern: the [[Foundation|Ceph foundation]] becomes more necessary than ever to establish diverse governance; will Red Hat agree to it?
Concern: Sage had the skills and was empowered to be Ceph's benevolent dictator for life. The skills remain, but he now has less power over the project.
Clarification: the Inktank acquisition by Red Hat should not be confused with the MySQL acquisition by Oracle. Ceph's copyright is intentionally fragmented and cannot be sold.
Feedback: Cloudwatt management reacted positively to the acquisition.
Positive: Calamari will be published under a Free Software license.
Confusion: what does it mean for GlusterFS, really? Features? Selling points? Development roadmap? Which Inktank products, services, and training offerings will remain? etc.
Concern: can Ceph, as software, remain reasonably independent from the service-provider side of Inktank / Red Hat?
Concern: who coordinates the development, roadmap, and feature list? The Ceph Foundation or Red Hat?
Concern: users must apply pressure for the Ceph Foundation to happen; Red Hat has little incentive to agree to it spontaneously.
Action: we, users, should express our desire for a Ceph Foundation, with testimonials collected from various people.
Concern: should things go bad and a fork become necessary, all the driving forces of the project are currently under Inktank / Red Hat influence.
Comparison: the Qumranet acquisition is perceived to have been beneficial to KVM.
Concern: some Gluster, Inc. customers were unsatisfied after the Red Hat acquisition; could the same happen to Inktank customers?
Action: Patrick McGarry will organize a Red Hat acquisition meeting in two weeks' time.

h3. CephFS

Use case: a project to replace a 40 TB cluster used to host mirrors for distributions (binary packages, ISOs, tarballs), delivering more than 1 Gb/s and less than 4 Gb/s:
* http://dmsimard.com/wp-content/uploads/2014/04/mirror_logical.jpg
* suggestion to use object storage (radosgw) instead of CephFS, with ad hoc software (see the sketch after this list)
* the "webservers" could be OpenVZ containers with their datastore on CephFS
* if the front servers were OpenVZ containers, they could also be HA-managed
* the only blocking factor is that CephFS is not production ready: mostly the active-active MDS scenario and dynamic subtree partitioning, which were reported as unstable
* the plan is to deploy with Puppet
* deduplication would help
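
To make the "object instead of CephFS" suggestion concrete, here is a minimal sketch (not something presented at the meeting) of pushing mirror content into radosgw through its S3-compatible API using boto; the endpoint, bucket name, credentials, and file path below are made-up placeholders.

<pre>
# Hypothetical sketch only: the "object instead of CephFS" idea, i.e. publishing
# mirror content to radosgw via its S3-compatible API (boto 2.x). The endpoint,
# bucket, credentials and path are placeholders, not values from the meeting.
import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='MIRROR_ACCESS_KEY',
    aws_secret_access_key='MIRROR_SECRET_KEY',
    host='rgw.example.com',                      # assumed radosgw endpoint
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket('distro-mirror')     # returns the bucket if it already exists

def publish(local_path, object_name):
    """Upload one mirror file and make it readable without authentication."""
    key = bucket.new_key(object_name)
    key.set_contents_from_filename(local_path)
    key.set_acl('public-read')
    # URL that front-end web servers (or a redirect rule) could hand out
    return key.generate_url(0, query_auth=False)

print(publish('/srv/mirror/example/example-1.0.tar.gz',
              'example/example-1.0.tar.gz'))
</pre>
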

Use case: OVH.com uses Ceph for its mirror infrastructure (CephFS?): https://twitter.com/olesovhcom/status/433982909729763328

Use case: a French academic community meeting discussed how CephFS could be used, mostly for legacy applications (no record): http://www.capitoul.org/ProgrammeReunion20140424

h3. Meetups

All meetups: https://wiki.ceph.com/Community/Meetups

May 13th, Atlanta: http://openstacksummitmay2014atlanta.sched.org/event/ddecd66323efb0c83baeb1bbc1d9556e#.U2PrHuZdW6w
May 11th, Atlanta: http://www.meetup.com/Ceph-in-Atlanta/
May 9th, Berlin: http://www.meetup.com/Ceph-Berlin/events/179186672/

h3. Log
<pre>
<loicd> Welcome to the Ceph User Committee meeting #2 ! https://wiki.ceph.com/Community/Meetings#Proposed_topics:
<loicd> scuttlemonkey: will join a little late
<Vacum> Hi :)
<janos_> ooh #2!
<loicd> ahah
<janos_> do we have to wear red-colored hats?
<janos_> ;)
<loicd> I propose we get the easy stuff / boring things out of the way first and give a chance to the people who are late to join
<loicd> janos_: not yet I hope
<janos_> lol
<scuttlemonkey> I'm here! :)
<loicd> First topic : the elections
<loicd> as promised I'll send a mail later today to setup the elections of the Ceph User Committee head
<loicd> it will be interesting in the new context ;-)
<loicd> I will apply and mourgaya proposed to apply too
<loicd> is there anyone else interested ?
-*- janos_ keeps his hand down
<loicd> the idea is that you spend ~8 hours a week on average caring for the ceph user committee. It's not much but still ;-)
<loicd> mourgaya: are you still on board for this ?
<mourgaya> yes!
<loicd> cool
<loicd> now to more fun things
<loicd> redhat and inktank, what do people think ?
-*- loicd tend to be a kill joy and won't speak first ;-)
<janos_> i think it's good. RH's stewardship of other projects seems to have been good
<janos_> not overbearing
<lesserevil> re: inktank->redhat: optimistically cautious
<Serbitar> my conern as others have raised is the ability to get support for non rhel OS
<janos_> plus i think now i'll get the f20 builds i've been dreaming of
<janos_> ;)
<kevincox> I think that it will be good for the project.
<janos_> yeah i can understand the support concerns for other distro's
<scuttlemonkey> I know I'm inside the beast, but I think it's a good move.  However, it may be a bit of a paradigm shift in long-term planning for things like foundation
<loicd> Serbitar: do we know how much time redhat actually invests in supporting kvm for other os (for instance) ?
<Vacum> a bit surprising. during the Frankfurt Ceph Day, the general statement from Inktank and Ceph was "we won't sell out". at least it sounded like it
<pressureman> i hope that infiniband support will get a boost from redhat
<Serbitar> loicd: i do not
<scuttlemonkey> fwiw I know sage is working very hard to ensure that support for non-rhel setups is strong
<loicd> Vacum: it can be percieved as a sell out indeed.
<mourgaya> ceph can have  the benefits of redhat  landing production!
<scuttlemonkey> and for now inktank is still selling and supporting ubuntu/suse
<Vacum> I'm a bit concerned about the transistion peroid, also from a commercial support view
<loicd> Vacum: how do you mean ?
<Vacum> scuttlemonkey: "for now" isn't really something a business can rely on when it comes to setting up petabyte storage
<scuttlemonkey> Vacum: I absolutely agree
<scuttlemonkey> it's just hard to answer definitively as the "how do all the bits and pieces get merged" discussions are still ongoing
<scuttlemonkey> all I can give you is intent
<loicd> scuttlemonkey: this is reassuring and there does not seem to be a risk that other platforms support is dropped any time soon. I think people express a concern in the long term.
<janos_> i'm excited about the greater potential for development gains between ceph and kvm
<loicd> janos_: I did not think about that, you're correct !
<Vacum> loicd: we have a 12 month pre-production support that will run another 8 months. and we were planning on contracting the enterprise support. now its totally open if such a thing will be available in the (near) future - and to which conditions
<scuttlemonkey> vacum: you're ubuntu I'm assuming?
<Vacum> scuttlemonkey: debian
<scuttlemonkey> ahh
<loicd> Vacum: could you spell the name of your company for the record ? Unless it's confidential of course ;-)
<Vacum> loicd: I can spell it per PM :)
<loicd> now that redhat is there, the foundation becomes more necessary than ever
<Vacum> I do see a plus on the whole thing from a commercial perspective though. RH does have a long history in providing enterprise support and they know it all. inktank can benefit from that
<loicd> in the past, as a user, I felt confident that Sage could be a benevolent dictator in the broad sense of the term, not just technical. Now that redhat is involved, there needs to be a diverse governance of some kind.
<mourgaya> is redhat keeping the inktank support team, and there reactivity?
<loicd> Vacum: +1
<scuttlemonkey> loicd: that's my take as well, but there is mixed opinion from the folks involved....so I'm looking forward to the discussions
<amichel> Question about the repositories for ubuntu/debian. I'm doing a deploy on 14.04 trusty and the ceph-extras repo doesn't seem to have trusty packages. Is ceph-extras not needed on 14.04 or is there a trick I'm missing?
<Vacum> loicd: I totally agree. "Ceph" as a brand (and I didn't use trademark on purpose!) should not fall into the hand of a company
<scuttlemonkey> mourgaya: Inktank is remaining intact as an org until we can ensure that the transition wont change support response
<loicd> mourgaya: we don't know. But as a group of users I think we should see the broader consequences. In a few years from now, if all goes well, ceph will be some kind of kvm. Widely spread and adopted. Is this what we would like ? Would our use cases be satisfied by such an outcome ?
<loicd> amichel: we're having a meeting (ceph user committee). Do you mind if we postpone the answer for another 45 minutes ?
<amichel> No problem at all, I didn't realize
<Vacum> from a brand (and even trademark) perspective, look at MySQL. They sold to Sun, which was kind of cool. and now its at Oracle...
<nobody18188181> Vacum: But now MariaDB is taking over ;)
<loicd> dmsimard: what  does iweb think of this move ?
<loicd> Vacum: MySQL copyright was sold to oracle. That cannot happen with Ceph.
<Vacum> nobody18188181: yes, Maria brings a lot of new stuff (no wonder, coming from Monty). but just look at the channel's activities...
<Vacum> loicd: the trademark was sold too
<loicd> the copyright is intentionaly fragmented
<mourgaya> good!
<nobody18188181> What do they do in their channel?
<dmsimard> I can speak for myself, not quite on behalf of iWeb as is - I'm happy for Inktank and that Calamari will be open sourced. I am really curious as to what will happen with Gluster since, to me, Ceph is a natural competitor to gluster.
<loicd> it was a wise decision.
<janos_> i don't imagine much will happen to gluster
<loicd> dmsimard: +1
<Vacum> nobody18188181: I mean IRC channels. compare the activity of both
<janos_> RH will likely be happy selling support for both
<mourgaya> radhat is now  the leader of the futur of storage :-)
<nobody18188181> Ah, I havent been to the maria or mysql channels so I cant speak on them
<dmsimard> I am not super familiar with Gluster but does it do things that Ceph does not do ?
<dmsimard> That's kind of where I am getting at
<loicd> My company ( Cloudwatt ) has reacted positively to the announcement.
<loicd> The marketing director came to me and showed the planned cooperation with RedHat. He said : "we'll add a Ceph line there". And that was it.
<Serbitar> dmsimard: i guess it would be that cepgh has more functionality than gluster, with block object and file stores vs glusters' file store
<Serbitar> evne though csphfs isnt commercially supported yet
<loicd> Is anyone around here actually using gluster ?
<dmsimard> So from a Redhat perspective, do you continue to develop both ? Do you focus your efforts on Ceph ? This is what I am curious to see how it plays out.
<nobody18188181> loicd: Per recommendation of a friend I'm trying to use ceph (gluster was going to be my first choice); but if i cant get ceph working then I'm going to have to try that.
<Vacum> I'm a bit on the cautious side. On the RH announcement they are talking a lot about Inktank's "products". Do they mean the services with that. Or Ceph itself?
<loicd> nobody18188181: I see
<nobody18188181> loicd: I chose ceph because a good friend of mine indicated to me that ceph is vastly superior in performance compared to gluster; so of course that part wins me over.
<scuttlemonkey> Vacum: the Inktank "product" is "Inktank Ceph Enterprise"...which is Calamari + Support/Services...there is also training/consulting
<loicd> We can only speculate and hope for the best. In terms of timeframe, I bet we'll know where we stand in a year or so.
<Vacum> also, during Frankfurt Ceph Day, Sage talked about keeping Ceph as the product/solution and Inktank as service provider seperate. Is that even possible with RH?
<loicd> Vacum: with a foundation it is.
<lesserevil> loicd: +1
<Vacum> loicd: But then, who is the "coordinator" of Ceph's development? The Foundation, or RH?
<mo-> as somebody that's been trying to tell people that Ceph is worth a look (or two), I find it a BIG plus to be able to add that it is a RH supported solution now
<Vacum> Who, authoritively, will decide what goes in as a feature and what not?
<loicd> Will RedHat agree to a foundation holding the trademark, I would not be on it. But that depends on us (users) and the developers from the community.
<janos_> mo- good point when you have to make that pitch
<scuttlemonkey> Vacum: the idea would be foundation as a central clearinghouse for development...but each contributing org would have their own plans/roadmap (including RH)
<scuttlemonkey> if such a foundation were to occur, Sage would still be BDFL and decide what goes in, and how
<Vacum> so the "upstream" of everything would be The Foundation
<scuttlemonkey> yeah
<Vacum> that would be nice
<scuttlemonkey> that's my hope
<Vacum> mine too :)
<nobody18188181> ok i found a bug how can i report it quickly?
<scuttlemonkey> all depends on how RH sees the future
<loicd> Vacum: sage must be the benevolent dictator for life. At least I believe it's necessary because of the Ceph dymanic. A personal opinion based on observation and betting that what happened in the past will work in the future ;-)
<loicd> If I was RedHat I would not allow the creation of a foundation. Unless there is significant pressure from the community.
<Vacum> exactly
<scuttlemonkey> yeah
<scuttlemonkey> and to be fair there are a number of great single-vendor FOSS projects
<loicd> I propose that we voice, loud and clear, what we would like to see in a foundation. And why we think it is necessary.
<scuttlemonkey> so it'll be an interesting discussion at least :)
<loicd> scuttlemonkey: right :-)
<scuttlemonkey> please do
<Vacum> +1
<scuttlemonkey> I have spent several months thinking about a foundation
<xarses> should we create a petition?
<scuttlemonkey> so I'd love to have new information injected into those thoughts
<scuttlemonkey> xarses: not necessary...Sage and I are already on the path
<loicd> https://wiki.ceph.com/Development/Foundation has your ideas right ?
<scuttlemonkey> you could contribute to the wiki doc though
<mourgaya> foundation does not depend of redhat, ceph is an open source solution  right?
<scuttlemonkey> loicd: only the very highest level brush strokes...but yes
<scuttlemonkey> mourgaya: the point of the foundation would be to hold the trademarks in trust for the community
<loicd> xarses: something that looks like a petition without the controversial tone would be nice
<janos_> an assertion
<scuttlemonkey> without Red Hat's donation of those marks the foundation really can't happen
<scuttlemonkey> loicd: xarses: I propose we just create an "interested parties" section on the foundation doc
<scuttlemonkey> for those who are interested in seeing it happen
<loicd> mourgaya: the dynamic of the project depends on redhat now. And a fork would be most difficult. The idea of a foundation is to make such a fork unecessary, forever, because all interests are represented.
<Vacum> a fork wouldn't have much chances. too much happening in the code at the moment. only if Sage and a few other key devs would create that fork themselves, it would stand a chance
<Vacum> see Maria...
<scuttlemonkey> hehe
<loicd> scuttlemonkey: having a document where people can freely express their thoughts, even if not polished, would be useful
<scuttlemonkey> loicd: so you're thinking a "talk" page in addition to the brainstorm doc?
<loicd> right
<fghaas> um, can I just inject one thought here since scuttlemonkey asked for it: is anyone under the impression that RHT fucked up KVM, post-Qumranet acquisition?
<loicd> xarses: is this something like this you had in mind ?
<janos_> fghaas, not that i can tell
<xarses> loicd: something like that
<mourgaya> how can we have  redhat position about a ceph foundation?
<loicd> fghaas: Inktank can't compare to Qumranet because they had a proprietary software base to begin with. Intkank is a Free Software shop and this is a significant difference.
<scuttlemonkey> mourgaya: I will be sharing that info as we start the discussions
<xdeller> fghaas: absolutely not, except some directions, like state replication, was abandoned
<fghaas> loicd, KVM was always free software. The *management products* around KVM were not. Ceph is free software, Calamari is not. I maintain there's significantly less difference than you think. And yes, RHT would have us believe that RHEV-M is The best Thing Since Sliced Bread™ for a few years, but then OpenStack set them straight
<loicd> fghaas: we can debate this historical thing later ;-)
<loicd> Should we move to more technical topics or someone has more to say about the redhat acquisition ?
<Vacum> its a bit early for outsider to have more insight to talk in-depth about it :)
<loicd> true ;-)
<fghaas> so I'm with janos_ and xdeller here; I think RHT has been a fine steward of KVM, and if they follow *that* precedent then the Ceph user community will rather be very happy with them. But they certainly broke some glass with the Gluster deal
<fghaas> so they better learn the right lessons from their own history :)
<Vacum> perhaps we can trace the RH thing a bit more closely than every 4 weeks?
<loicd> fghaas: did then ?
<loicd> did they ?
-*- loicd knows nothing about the Gluster deal
<loicd> Vacum: how do you mean ?
<fghaas> loicd: oh yeah, there were quite a few Gluster, Inc. customers they pissed off by just not offering GlusterFS support on RHEL, and instead forcing customers to go with RHS if they wanted GlusterFS support
<loicd> ah
<loicd> indeed
<kraken> http://i.imgur.com/bQcbpki.gif
<loicd> dam
<loicd> ahah
<Vacum> loicd: perhaps have a 30 minute "Ceph User Commitee Special" in 2 weeks only for that topic?
<mourgaya> argh!!
<Vacum> not good?
<loicd> Vacum: if you're go to organize this, I'm in !
<Vacum> loicd: ha, who wants to spend 8 hours a week for the Commitee? :D
<janos_> so it sounds like with the RH deal there are two camps with very different concerns - those who use the product as-is puublicly and those with support contracts
<loicd> Vacum: let's discuss this after the meeting.
<scuttlemonkey> loicd: I'm happy to organize ad hoc meetings for this topic as I uncover answers WRT foundation
<janos_> the public crew shouldn't really see anything but general benefit
<janos_> imo
<Vacum> janos_: actually I'm currently in the limbo between both - _because_ of the acquisition
<loicd> scuttlemonkey: ok !
<fghaas> yeah janos_, jftr, I don't think anyone complains about RHT's stewardship of GlusterFS the project
<loicd> fghaas: so you're generally happy about this deal ?
-*- loicd remembers that we should keep some time for the CephFS topic, 20 minutes left ;-)
<janos_> ooh oho there's new stuff to say about cephFS?
<loicd> dmsimard: you had a use case to discuss IIRC ?
<dmsimard> loicd: yeah, I can talk a bit about a use case I have for CephFS
-*- loicd listens
<fghaas> loicd: re your question, I'm all for people striking rich that I like and whose work I deeply respect :)
<janos_> fghaas, haha, yes i agree
<dmsimard> iWeb is a mirror for a lot of open source distributions, some of which are officially recognized mirrors by upstreams - http://mirror.iweb.com/
<loicd> fghaas: :-)
<dmsimard> Being a mirror means having to provide a lot of space, a lot of network throughput
-*- loicd clicks
<loicd> dmsimard: lots as in how much ?
<iggy> aww man, click spam again!
-*- iggy kids
<loicd> iggy: :-D
-*- pvh_sa listens (<-- is a cephfs user, silly me, but I got my reasons)
<dmsimard> Right now we're hovering around 40TB of data
<loicd> all on CephFS ?
<dmsimard> No, not on CephFS.
<dmsimard> I wish it could be, though.
<loicd> I should not interupt and let you finish with the use case ;-)
<dmsimard> Right now the data resides on multiple JBODs daisy-chained with a head.
<dmsimard> It's hard to scale effectively, not as highly available as we wish it could be
<xarses> dmsimard: doesn't radosgw swift/S3 API make more sense for that?
<xarses> mabe a web server to make it look like a fs again
<loicd> is there such a thing ?
<xarses> there should be =)
<dmsimard> I don't know if it could be done, some mirrors are different than others - some push, some pull, etc.
<dmsimard> Anyway, I brainstormed about doing this with CephFS and it'd look like this: http://dmsimard.com/wp-content/uploads/2014/04/mirror_logical.jpg
<loicd> when are you planning to deploy this ?
<dmsimard> This provides the ability to easily scale a highly available storage backend, scale the amount of webservers - probably in 1Gbps increments - as more network throughput is required
<dmsimard> Right now all the mirrors are hosted on this single web/storage beast
<dmsimard> Having the setup above would allow us to scale each mirror according to it's own needs, adding mirrors would be simple.
<loicd> I wonder if anyone has done this before. It looks like a natural / simple fit.
<dmsimard> loicd: I know that OVH fairly recently touted they used Ceph for their mirror infrastructure (this was after I brainstormed the above!). I don't know if they use block or CephFS.
<loicd> Would you like to create a CephFS use case page and add yours ?
<mo-> imagine those "webservers" (shouldnt they be FTP/rsync servers?) were openvz containers, having its datastore on cephfs... perfect segway
<Vacum> dmsimard: you will likely want to cache on the front-facing mirror servers nevertheless IMO
<loicd> I've not heard of OVH lately but they are rather secretive about what they do.
<loicd> mo-: did you try this already ?
<dmsimard> loicd: https://twitter.com/olesovhcom/status/433982909729763328
<mo-> no, I was just saying. this doesnt seem very different from the openvz container usecase
<Vacum> dmsimard: I would imagine you have high peaks on the same few files. ie every time a new version is published and all people DL the same .iso?
-*- loicd should live in the 21st century
<loicd> mo-: that makes a lot of sense
<loicd> dmsimard: how much bandwidth is your mirror having ? peak time ?
<dmsimard> mo-: The frontend servers would be very identical indeed, with only the mirror pool subfolder changing - we in fact planned to leverage Openstack (perhaps with Heat) to scale easily.
<loicd> dmsimard: thanks for the link
<mo-> if these front servers were openvz containers, they could be HA-managed as well, no need to manually mess with HA on application-level then
<dmsimard> loicd: Don't have the data on hand, the server on which the server resides has a 4Gbps LACP link, haven't heard of it being maxed. I know it's more than 1Gbps though.
<loicd> ok.
<mongo_> What type of files?
<mongo_> what size?
<dmsimard> mo-: I'm not personally familiar with OpenVZ :( I would need to look into it maybe.
<loicd> mongo_: I would assume mostly iso + tarbals + packages
<dmsimard> mongo_: Linux distribution mirrors, so binary data packages, iso, tarballs
<dmsimard> We're also a mirror for sourceforge so a lot of binary there.
<mo-> its like BSD jails. many systems running on one host with almost zero virtualisation overhead. much more efficient than hardware virtualisation in fact
<mongo_> I just use nginx, I have it try the local file, local peer and if not it grabs it from upstream and saves the file locally
<mongo_> much easier to scale and far less complicated to maintain.
<loicd> Last week I was at http://www.capitoul.org/ProgrammeReunion20140424 and people from R&D in universities were very attracted by CephFS. Mostly for legacy applications.
<dmsimard> mongo_: Yes, of course some caching layer would be involved. The graph I linked earlier is super high-level
<mongo_> you would be better off with radosgw as it has built in geo replication
<loicd> dmsimard: do you feel something is missing from CephFS that would make things easier to setup for this use case ?
<mongo_> ceph-fs is not really ready for prime time right now.
<loicd> mongo_: how would you mirror things without cephfs ?
<dmsimard> loicd: I was able to setup/puppetize and use CephFS fairly easily in my continuous integration infrastructure. What's stopping me is the red light that it's not production ready.
<loicd> you would need to write software
<loicd> ok :-)
<dmsimard> I know what it's mostly the active-active MDS scenario and the dynamic subtree partioning that was most unstable last I heard
<mo-> I would wager that deduplication for cephfs would be a great fit for such a mirror system
<loicd> mo-: +2
<loicd> We have 2 minutes left.
<Vacum> make that "deduplication for rados would be a great fit" :D
<dmsimard> mo-:        Deduplication is great if you have the same data all over the place, this is not my case here though ?
<loicd> I'll announce the next meeting (early june) on the mailing list.
<mourgaya> dmsimard: dmsimard: +1
<loicd> If you're lucky enough to go to Atlanta next week, don't miss the meetup ! http://www.meetup.com/Ceph-in-Atlanta/ :-)
<dmsimard> Lots of iWeb folks going to the summit, not me unfortunately :(
<scuttlemonkey> or the design session!
<loicd> And if you're in Berlin (lucky too, it's a great time to be there) : http://www.meetup.com/Ceph-Berlin/events/179186672/ is an opportunity to meet Ceph people.
<scuttlemonkey> http://openstacksummitmay2014atlanta.sched.org/event/ddecd66323efb0c83baeb1bbc1d9556e#.U2PrHuZdW6w
<scuttlemonkey> that is a mini-CDS for OpenStack-related devel work discussion
<loicd> scuttlemonkey: :-)
<Vacum> btw, when is the next online CDS planned? :)
<scuttlemonkey> Vacum: haven't set a date yet... I was waiting to see what the timetable looked like in a post-firefly release world
<loicd> We're running out of time but we can keep going on #ceph-devel :-)
<loicd> Thank you everyone !
</pre>