h1. Ceph User Committee meeting 2014-05-02

h3. Executive summary

The agenda was:
* Elections
* RedHat and Inktank
* CephFS
* Meetups

Action: The Ceph User Committee will express the need for a Ceph Foundation, from the user perspective.
Action: Patrick McGarry will organize a RedHat acquisition meeting in two weeks' time.

Note: Patrick McGarry participated in the meeting and answered questions, as can be read from the log below. The executive summary focuses on the points raised by users instead of attempting to summarize the dialog.

h3. Elections

The election will happen this month; details here: https://wiki.ceph.com/Community/2014-04_Ceph_User_Committee_Elections

h3. RedHat and Inktank

Positive: stewardship of other projects seems fine.
Concern: support for non-RHEL operating systems.
Positive: better support for Fedora.
Concern: the project is sold to RedHat, engineers and trademark included; the people who were in charge now answer to someone else.
Hope: infiniband support boost from RedHat.
Positive: RedHat has experience maintaining production-ready software and supporting customers.
Concern: what does it mean from the point of view of Inktank customers?
Positive: greater potential for development gains between Ceph and KVM.
Concern: the [[Foundation|Ceph foundation]] becomes more necessary than ever, to establish a diverse governance; will RedHat agree to it?
Concern: Sage had the skills and was empowered to be the Ceph benevolent dictator for life. The skills remain, but he has less power over the project.
Clarification: the Inktank acquisition by RedHat should not be confused with the MySQL acquisition by Oracle. The Ceph copyright is intentionally fragmented and cannot be sold.
Feedback: Cloudwatt management reacted positively to the acquisition.
Positive: Calamari will be published under a Free Software license.
Confusion: what does it mean for GlusterFS, really? Features? Selling points? Development roadmap? Which Inktank products, services, and training will remain? etc.
Concern: can Ceph, as software, be reasonably independent from the service provider side of Inktank / RedHat?
Concern: who coordinates the development, roadmap, and feature list? The Ceph Foundation or RedHat?
Concern: users must apply pressure for the Ceph Foundation to happen; RedHat has little incentive to agree to it spontaneously.
Action: we, users, should express our desire for a Ceph Foundation, with testimonials collected from various people.
Concern: should things go bad and a fork become necessary, all the driving forces of the project are currently under Inktank / RedHat influence.
Comparison: the Qumranet acquisition is perceived to have been beneficial to KVM.
Concern: some Gluster, Inc. customers were unsatisfied after the RedHat acquisition; could it happen to Inktank customers also?
Action: Patrick McGarry will organize a RedHat acquisition meeting in two weeks' time.
h3. CephFS

Use case: a project to replace a 40TB cluster used to host mirrors for distributions (binary data packages, isos, tarballs), delivering more than 1Gb/s and less than 4Gb/s
* http://dmsimard.com/wp-content/uploads/2014/04/mirror_logical.jpg
* suggestions to use object storage instead of CephFS, with ad hoc software (see the sketch after this list)
* if these "webservers" were OpenVZ containers, they could have their datastore on CephFS
* if the front servers were OpenVZ containers, they could be HA-managed
* the only blocking factor is CephFS not being production ready: mostly the active-active MDS scenario and the dynamic subtree partitioning, which was unstable
* the plan is to deploy with Puppet
* deduplication would help
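
One suggestion above was to serve the mirror from the object store rather than from CephFS, fronted by ad hoc software. Below is a minimal sketch of that idea in Python, using the S3-compatible API that radosgw exposes through the well-known boto library; the endpoint, credentials, bucket, and object names are hypothetical placeholders, not part of the setup discussed here.

<pre>
# Sketch only: push one mirror artifact into radosgw via its S3-compatible API.
import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='MIRROR_ACCESS_KEY',      # hypothetical credentials
    aws_secret_access_key='MIRROR_SECRET_KEY',
    host='rgw.example.com',                     # hypothetical radosgw endpoint
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

# One bucket per mirrored project is one possible layout.
bucket = conn.create_bucket('distro-mirror')

# Upload a single file; real "ad hoc software" would walk the tree rsync-style.
key = bucket.new_key('releases/current/distro.iso')
key.set_contents_from_filename('/srv/mirror/releases/current/distro.iso')

# Front servers could hand out time-limited URLs instead of serving the bytes.
print(key.generate_url(expires_in=3600))
</pre>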

Use case: OVH.com does something with Ceph (CephFS?): https://twitter.com/olesovhcom/status/433982909729763328

Use case: a French academic community meeting discussed how CephFS could be used, mostly for legacy applications (no record): http://www.capitoul.org/ProgrammeReunion20140424
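
Part of CephFS's appeal for legacy applications (mentioned again in the log below) is that a mounted CephFS behaves like any other POSIX filesystem, so existing code runs unchanged. A minimal sketch under that assumption; the mount point /mnt/cephfs is hypothetical, and nothing in the code is Ceph-specific, which is precisely the point:

<pre>
# Sketch only: ordinary file I/O against an assumed CephFS mount at /mnt/cephfs.
import os

MOUNT = '/mnt/cephfs'  # hypothetical kernel or ceph-fuse mount point

path = os.path.join(MOUNT, 'mirror', 'README')
os.makedirs(os.path.dirname(path), exist_ok=True)  # behaves like any local fs

with open(path, 'w') as f:
    f.write('served from CephFS\n')
    f.flush()
    os.fsync(f.fileno())  # plain POSIX durability; no Ceph-specific calls

with open(path) as f:
    print(f.read())
</pre>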

h3. Meetups

All meetups: https://wiki.ceph.com/Community/Meetups
May 13th, Atlanta: http://openstacksummitmay2014atlanta.sched.org/event/ddecd66323efb0c83baeb1bbc1d9556e#.U2PrHuZdW6w
May 11th, Atlanta: http://www.meetup.com/Ceph-in-Atlanta/
May 9th, Berlin: http://www.meetup.com/Ceph-Berlin/events/179186672/

h3. Log
<pre>
<loicd> Welcome to the Ceph User Committee meeting #2 ! https://wiki.ceph.com/Community/Meetings#Proposed_topics:
<loicd> scuttlemonkey: will join a little late
<Vacum> Hi :)
<janos_> ooh #2!
<loicd> ahah
<janos_> do we have to wear red-colored hats?
<janos_> ;)
<loicd> I propose we get the easy stuff / boring things out of the way first and give a chance to the people who are late to join
<loicd> janos_: not yet I hope
<janos_> lol
<scuttlemonkey> I'm here! :)
<loicd> First topic : the elections
<loicd> as promised I'll send a mail later today to set up the elections of the Ceph User Committee head
<loicd> it will be interesting in the new context ;-)
<loicd> I will apply and mourgaya proposed to apply too
<loicd> is there anyone else interested ?
-*- janos_ keeps his hand down
<loicd> the idea is that you spend ~8 hours a week on average caring for the ceph user committee. It's not much but still ;-)
<loicd> mourgaya: are you still on board for this ?
<mourgaya> yes!
<loicd> cool
<loicd> now to more fun things
<loicd> redhat and inktank, what do people think ?
-*- loicd tends to be a kill joy and won't speak first ;-)
<janos_> i think it's good. RH's stewardship of other projects seems to have been good
<janos_> not overbearing
<lesserevil> re: inktank->redhat: optimistically cautious
<Serbitar> my concern as others have raised is the ability to get support for non rhel OS
<janos_> plus i think now i'll get the f20 builds i've been dreaming of
<janos_> ;)
<kevincox> I think that it will be good for the project.
<janos_> yeah i can understand the support concerns for other distro's
<scuttlemonkey> I know I'm inside the beast, but I think it's a good move. However, it may be a bit of a paradigm shift in long-term planning for things like foundation
<loicd> Serbitar: do we know how much time redhat actually invests in supporting kvm for other os (for instance) ?
<Vacum> a bit surprising. during the Frankfurt Ceph Day, the general statement from Inktank and Ceph was "we won't sell out". at least it sounded like it
<pressureman> i hope that infiniband support will get a boost from redhat
<Serbitar> loicd: i do not
<scuttlemonkey> fwiw I know sage is working very hard to ensure that support for non-rhel setups is strong
<loicd> Vacum: it can be perceived as a sell out indeed.
<mourgaya> ceph can have the benefits of redhat landing production!
<scuttlemonkey> and for now inktank is still selling and supporting ubuntu/suse
<Vacum> I'm a bit concerned about the transition period, also from a commercial support view
<loicd> Vacum: how do you mean ?
<Vacum> scuttlemonkey: "for now" isn't really something a business can rely on when it comes to setting up petabyte storage
<scuttlemonkey> Vacum: I absolutely agree
<scuttlemonkey> it's just hard to answer definitively as the "how do all the bits and pieces get merged" discussions are still ongoing
<scuttlemonkey> all I can give you is intent
<loicd> scuttlemonkey: this is reassuring and there does not seem to be a risk that other platforms support is dropped any time soon. I think people express a concern in the long term.
<janos_> i'm excited about the greater potential for development gains between ceph and kvm
<loicd> janos_: I did not think about that, you're correct !
<Vacum> loicd: we have a 12 month pre-production support that will run another 8 months. and we were planning on contracting the enterprise support. now its totally open if such a thing will be available in the (near) future - and to which conditions
<scuttlemonkey> vacum: you're ubuntu I'm assuming?
<Vacum> scuttlemonkey: debian
<scuttlemonkey> ahh
<loicd> Vacum: could you spell the name of your company for the record ? Unless it's confidential of course ;-)
<Vacum> loicd: I can spell it per PM :)
<loicd> now that redhat is there, the foundation becomes more necessary than ever
<Vacum> I do see a plus on the whole thing from a commercial perspective though. RH does have a long history in providing enterprise support and they know it all. inktank can benefit from that
<loicd> in the past, as a user, I felt confident that Sage could be a benevolent dictator in the broad sense of the term, not just technical. Now that redhat is involved, there needs to be a diverse governance of some kind.
<mourgaya> is redhat keeping the inktank support team, and their reactivity?
<loicd> Vacum: +1
<scuttlemonkey> loicd: that's my take as well, but there is mixed opinion from the folks involved....so I'm looking forward to the discussions
<amichel> Question about the repositories for ubuntu/debian. I'm doing a deploy on 14.04 trusty and the ceph-extras repo doesn't seem to have trusty packages. Is ceph-extras not needed on 14.04 or is there a trick I'm missing?
<Vacum> loicd: I totally agree. "Ceph" as a brand (and I didn't use trademark on purpose!) should not fall into the hands of a company
<scuttlemonkey> mourgaya: Inktank is remaining intact as an org until we can ensure that the transition wont change support response
<loicd> mourgaya: we don't know. But as a group of users I think we should see the broader consequences. In a few years from now, if all goes well, ceph will be some kind of kvm. Widely spread and adopted. Is this what we would like ? Would our use cases be satisfied by such an outcome ?
<loicd> amichel: we're having a meeting (ceph user committee). Do you mind if we postpone the answer for another 45 minutes ?
<amichel> No problem at all, I didn't realize
<Vacum> from a brand (and even trademark) perspective, look at MySQL. They sold to Sun, which was kind of cool. and now its at Oracle...
<nobody18188181> Vacum: But now MariaDB is taking over ;)
<loicd> dmsimard: what does iweb think of this move ?
<loicd> Vacum: MySQL copyright was sold to oracle. That cannot happen with Ceph.
<Vacum> nobody18188181: yes, Maria brings a lot of new stuff (no wonder, coming from Monty). but just look at the channel's activities...
<Vacum> loicd: the trademark was sold too
<loicd> the copyright is intentionally fragmented
<mourgaya> good!
<nobody18188181> What do they do in their channel?
<dmsimard> I can speak for myself, not quite on behalf of iWeb as is - I'm happy for Inktank and that Calamari will be open sourced. I am really curious as to what will happen with Gluster since, to me, Ceph is a natural competitor to gluster.
<loicd> it was a wise decision.
<janos_> i don't imagine much will happen to gluster
<loicd> dmsimard: +1
<Vacum> nobody18188181: I mean IRC channels. compare the activity of both
<janos_> RH will likely be happy selling support for both
<mourgaya> redhat is now the leader of the future of storage :-)
<nobody18188181> Ah, I havent been to the maria or mysql channels so I cant speak on them
<dmsimard> I am not super familiar with Gluster but does it do things that Ceph does not do ?
<dmsimard> That's kind of where I am getting at
<loicd> My company ( Cloudwatt ) has reacted positively to the announcement.
<loicd> The marketing director came to me and showed the planned cooperation with RedHat. He said : "we'll add a Ceph line there". And that was it.
<Serbitar> dmsimard: i guess it would be that ceph has more functionality than gluster, with block object and file stores vs glusters' file store
<Serbitar> even though cephfs isnt commercially supported yet
<loicd> Is anyone around here actually using gluster ?
<dmsimard> So from a Redhat perspective, do you continue to develop both ? Do you focus your efforts on Ceph ? This is what I am curious to see how it plays out.
<nobody18188181> loicd: Per recommendation of a friend I'm trying to use ceph (gluster was going to be my first choice); but if i cant get ceph working then I'm going to have to try that.
<Vacum> I'm a bit on the cautious side. On the RH announcement they are talking a lot about Inktank's "products". Do they mean the services with that. Or Ceph itself?
<loicd> nobody18188181: I see
<nobody18188181> loicd: I chose ceph because a good friend of mine indicated to me that ceph is vastly superior in performance compared to gluster; so of course that part wins me over.
<scuttlemonkey> Vacum: the Inktank "product" is "Inktank Ceph Enterprise"...which is Calamari + Support/Services...there is also training/consulting
<loicd> We can only speculate and hope for the best. In terms of timeframe, I bet we'll know where we stand in a year or so.
<Vacum> also, during Frankfurt Ceph Day, Sage talked about keeping Ceph as the product/solution and Inktank as service provider separate. Is that even possible with RH?
<loicd> Vacum: with a foundation it is.
<lesserevil> loicd: +1
<Vacum> loicd: But then, who is the "coordinator" of Ceph's development? The Foundation, or RH?
<mo-> as somebody that's been trying to tell people that Ceph is worth a look (or two), I find it a BIG plus to be able to add that it is a RH supported solution now
<Vacum> Who, authoritatively, will decide what goes in as a feature and what not?
<loicd> Will RedHat agree to a foundation holding the trademark? I would not bet on it. But that depends on us (users) and the developers from the community.
<janos_> mo- good point when you have to make that pitch
<scuttlemonkey> Vacum: the idea would be foundation as a central clearinghouse for development...but each contributing org would have their own plans/roadmap (including RH)
<scuttlemonkey> if such a foundation were to occur, Sage would still be BDFL and decide what goes in, and how
<Vacum> so the "upstream" of everything would be The Foundation
<scuttlemonkey> yeah
<Vacum> that would be nice
<scuttlemonkey> that's my hope
<Vacum> mine too :)
<nobody18188181> ok i found a bug how can i report it quickly?
<scuttlemonkey> all depends on how RH sees the future
<loicd> Vacum: sage must be the benevolent dictator for life. At least I believe it's necessary because of the Ceph dynamic. A personal opinion based on observation and betting that what happened in the past will work in the future ;-)
<loicd> If I was RedHat I would not allow the creation of a foundation. Unless there is significant pressure from the community.
<Vacum> exactly
<scuttlemonkey> yeah
<scuttlemonkey> and to be fair there are a number of great single-vendor FOSS projects
<loicd> I propose that we voice, loud and clear, what we would like to see in a foundation. And why we think it is necessary.
<scuttlemonkey> so it'll be an interesting discussion at least :)
<loicd> scuttlemonkey: right :-)
<scuttlemonkey> please do
<Vacum> +1
<scuttlemonkey> I have spent several months thinking about a foundation
<xarses> should we create a petition?
<scuttlemonkey> so I'd love to have new information injected into those thoughts
<scuttlemonkey> xarses: not necessary...Sage and I are already on the path
<loicd> https://wiki.ceph.com/Development/Foundation has your ideas right ?
<scuttlemonkey> you could contribute to the wiki doc though
<mourgaya> foundation does not depend on redhat, ceph is an open source solution right?
<scuttlemonkey> loicd: only the very highest level brush strokes...but yes
<scuttlemonkey> mourgaya: the point of the foundation would be to hold the trademarks in trust for the community
<loicd> xarses: something that looks like a petition without the controversial tone would be nice
<janos_> an assertion
<scuttlemonkey> without Red Hat's donation of those marks the foundation really can't happen
<scuttlemonkey> loicd: xarses: I propose we just create an "interested parties" section on the foundation doc
<scuttlemonkey> for those who are interested in seeing it happen
<loicd> mourgaya: the dynamic of the project depends on redhat now. And a fork would be most difficult. The idea of a foundation is to make such a fork unnecessary, forever, because all interests are represented.
<Vacum> a fork wouldn't have much chances. too much happening in the code at the moment. only if Sage and a few other key devs would create that fork themselves, it would stand a chance
<Vacum> see Maria...
<scuttlemonkey> hehe
<loicd> scuttlemonkey: having a document where people can freely express their thoughts, even if not polished, would be useful
<scuttlemonkey> loicd: so you're thinking a "talk" page in addition to the brainstorm doc?
<loicd> right
<fghaas> um, can I just inject one thought here since scuttlemonkey asked for it: is anyone under the impression that RHT fucked up KVM, post-Qumranet acquisition?
<loicd> xarses: is something like this what you had in mind ?
<janos_> fghaas, not that i can tell
<xarses> loicd: something like that
<mourgaya> how can we have redhat's position about a ceph foundation?
<loicd> fghaas: Inktank can't compare to Qumranet because they had a proprietary software base to begin with. Inktank is a Free Software shop and this is a significant difference.
<scuttlemonkey> mourgaya: I will be sharing that info as we start the discussions
<xdeller> fghaas: absolutely not, except some directions, like state replication, were abandoned
<fghaas> loicd, KVM was always free software. The *management products* around KVM were not. Ceph is free software, Calamari is not. I maintain there's significantly less difference than you think. And yes, RHT would have us believe that RHEV-M is The Best Thing Since Sliced Bread™ for a few years, but then OpenStack set them straight
<loicd> fghaas: we can debate this historical thing later ;-)
<loicd> Should we move to more technical topics or does someone have more to say about the redhat acquisition ?
<Vacum> its a bit early for outsiders to have more insight to talk in-depth about it :)
<loicd> true ;-)
<fghaas> so I'm with janos_ and xdeller here; I think RHT has been a fine steward of KVM, and if they follow *that* precedent then the Ceph user community will rather be very happy with them. But they certainly broke some glass with the Gluster deal
<fghaas> so they better learn the right lessons from their own history :)
<Vacum> perhaps we can trace the RH thing a bit more closely than every 4 weeks?
<loicd> fghaas: did then ?
<loicd> did they ?
-*- loicd knows nothing about the Gluster deal
<loicd> Vacum: how do you mean ?
<fghaas> loicd: oh yeah, there were quite a few Gluster, Inc. customers they pissed off by just not offering GlusterFS support on RHEL, and instead forcing customers to go with RHS if they wanted GlusterFS support
<loicd> ah
<loicd> indeed
<kraken> http://i.imgur.com/bQcbpki.gif
<loicd> dam
<loicd> ahah
<Vacum> loicd: perhaps have a 30 minute "Ceph User Committee Special" in 2 weeks only for that topic?
<mourgaya> argh!!
<Vacum> not good?
<loicd> Vacum: if you're going to organize this, I'm in !
<Vacum> loicd: ha, who wants to spend 8 hours a week for the Committee? :D
<janos_> so it sounds like with the RH deal there are two camps with very different concerns - those who use the product as-is publicly and those with support contracts
<loicd> Vacum: let's discuss this after the meeting.
<scuttlemonkey> loicd: I'm happy to organize ad hoc meetings for this topic as I uncover answers WRT foundation
<janos_> the public crew shouldn't really see anything but general benefit
<janos_> imo
<Vacum> janos_: actually I'm currently in the limbo between both - _because_ of the acquisition
<loicd> scuttlemonkey: ok !
<fghaas> yeah janos_, jftr, I don't think anyone complains about RHT's stewardship of GlusterFS the project
<loicd> fghaas: so you're generally happy about this deal ?
-*- loicd remembers that we should keep some time for the CephFS topic, 20 minutes left ;-)
<janos_> ooh oho there's new stuff to say about cephFS?
<loicd> dmsimard: you had a use case to discuss IIRC ?
<dmsimard> loicd: yeah, I can talk a bit about a use case I have for CephFS
-*- loicd listens
<fghaas> loicd: re your question, I'm all for people striking it rich that I like and whose work I deeply respect :)
<janos_> fghaas, haha, yes i agree
<dmsimard> iWeb is a mirror for a lot of open source distributions, some of which are officially recognized mirrors by upstreams - http://mirror.iweb.com/
<loicd> fghaas: :-)
<dmsimard> Being a mirror means having to provide a lot of space, a lot of network throughput
-*- loicd clicks
<loicd> dmsimard: lots as in how much ?
<iggy> aww man, click spam again!
-*- iggy kids
<loicd> iggy: :-D
-*- pvh_sa listens (<-- is a cephfs user, silly me, but I got my reasons)
<dmsimard> Right now we're hovering around 40TB of data
<loicd> all on CephFS ?
<dmsimard> No, not on CephFS.
<dmsimard> I wish it could be, though.
<loicd> I should not interrupt and let you finish with the use case ;-)
<dmsimard> Right now the data resides on multiple JBODs daisy-chained with a head.
<dmsimard> It's hard to scale effectively, not as highly available as we wish it could be
<xarses> dmsimard: doesn't radosgw swift/S3 API make more sense for that?
<xarses> maybe a web server to make it look like a fs again
<loicd> is there such a thing ?
<xarses> there should be =)
<dmsimard> I don't know if it could be done, some mirrors are different than others - some push, some pull, etc.
<dmsimard> Anyway, I brainstormed about doing this with CephFS and it'd look like this: http://dmsimard.com/wp-content/uploads/2014/04/mirror_logical.jpg
<loicd> when are you planning to deploy this ?
<dmsimard> This provides the ability to easily scale a highly available storage backend, scale the amount of webservers - probably in 1Gbps increments - as more network throughput is required
<dmsimard> Right now all the mirrors are hosted on this single web/storage beast
<dmsimard> Having the setup above would allow us to scale each mirror according to its own needs, adding mirrors would be simple.
<loicd> I wonder if anyone has done this before. It looks like a natural / simple fit.
<dmsimard> loicd: I know that OVH fairly recently touted they used Ceph for their mirror infrastructure (this was after I brainstormed the above!). I don't know if they use block or CephFS.
<loicd> Would you like to create a CephFS use case page and add yours ?
<mo-> imagine those "webservers" (shouldnt they be FTP/rsync servers?) were openvz containers, having their datastore on cephfs... perfect segue
<Vacum> dmsimard: you will likely want to cache on the front-facing mirror servers nevertheless IMO
<loicd> I've not heard of OVH lately but they are rather secretive about what they do.
<loicd> mo-: did you try this already ?
<dmsimard> loicd: https://twitter.com/olesovhcom/status/433982909729763328
<mo-> no, I was just saying. this doesnt seem very different from the openvz container usecase
<Vacum> dmsimard: I would imagine you have high peaks on the same few files. ie every time a new version is published and all people DL the same .iso?
-*- loicd should live in the 21st century
<loicd> mo-: that makes a lot of sense
<loicd> dmsimard: how much bandwidth is your mirror having ? peak time ?
<dmsimard> mo-: The frontend servers would be very identical indeed, with only the mirror pool subfolder changing - we in fact planned to leverage Openstack (perhaps with Heat) to scale easily.
<loicd> dmsimard: thanks for the link
<mo-> if these front servers were openvz containers, they could be HA-managed as well, no need to manually mess with HA on application-level then
<dmsimard> loicd: Don't have the data on hand, the server on which the mirror resides has a 4Gbps LACP link, haven't heard of it being maxed. I know it's more than 1Gbps though.
<loicd> ok.
<mongo_> What type of files?
<mongo_> what size?
<dmsimard> mo-: I'm not personally familiar with OpenVZ :( I would need to look into it maybe.
<loicd> mongo_: I would assume mostly iso + tarballs + packages
<dmsimard> mongo_: Linux distribution mirrors, so binary data packages, iso, tarballs
<dmsimard> We're also a mirror for sourceforge so a lot of binary there.
<mo-> its like BSD jails. many systems running on one host with almost zero virtualisation overhead. much more efficient than hardware virtualisation in fact
<mongo_> I just use nginx, I have it try the local file, local peer and if not it grabs it from upstream and saves the file locally
<mongo_> much easier to scale and far less complicated to maintain.
<loicd> Last week I was at http://www.capitoul.org/ProgrammeReunion20140424 and people from R&D in universities were very attracted by CephFS. Mostly for legacy applications.
<dmsimard> mongo_: Yes, of course some caching layer would be involved. The graph I linked earlier is super high-level
<mongo_> you would be better off with radosgw as it has built in geo replication
<loicd> dmsimard: do you feel something is missing from CephFS that would make things easier to setup for this use case ?
<mongo_> ceph-fs is not really ready for prime time right now.
<loicd> mongo_: how would you mirror things without cephfs ?
<dmsimard> loicd: I was able to setup/puppetize and use CephFS fairly easily in my continuous integration infrastructure. What's stopping me is the red light that it's not production ready.
<loicd> you would need to write software
<loicd> ok :-)
<dmsimard> I know that it's mostly the active-active MDS scenario and the dynamic subtree partitioning that was most unstable last I heard
<mo-> I would wager that deduplication for cephfs would be a great fit for such a mirror system
<loicd> mo-: +2
<loicd> We have 2 minutes left.
<Vacum> make that "deduplication for rados would be a great fit" :D
<dmsimard> mo-: Deduplication is great if you have the same data all over the place, this is not my case here though ?
<loicd> I'll announce the next meeting (early june) on the mailing list.
<mourgaya> dmsimard: +1
<loicd> If you're lucky enough to go to Atlanta next week, don't miss the meetup ! http://www.meetup.com/Ceph-in-Atlanta/ :-)
<dmsimard> Lots of iWeb folks going to the summit, not me unfortunately :(
<scuttlemonkey> or the design session!
<loicd> And if you're in Berlin (lucky too, it's a great time to be there) : http://www.meetup.com/Ceph-Berlin/events/179186672/ is an opportunity to meet Ceph people.
<scuttlemonkey> http://openstacksummitmay2014atlanta.sched.org/event/ddecd66323efb0c83baeb1bbc1d9556e#.U2PrHuZdW6w
<scuttlemonkey> that is a mini-CDS for OpenStack-related devel work discussion
<loicd> scuttlemonkey: :-)
<Vacum> btw, when is the next online CDS planned? :)
<scuttlemonkey> Vacum: haven't set a date yet... I was waiting to see what the timetable looked like in a post-firefly release world
<loicd> We're running out of time but we can keep going on #ceph-devel :-)
<loicd> Thank you everyone !
</pre>