h1. Ceph User Committee meeting 2014-04-03

h3. Executive summary

The agenda was:
* Tiering, erasure code
* Using http://tracker.ceph.com/
* CephFS
* Miscellaneous

h3. Documentation of the new Firefly features (tiering, erasure code)

* Good: https://www.google.com/search?q=erasure+code+ceph; the second answer is https://ceph.com/docs/master/dev/erasure-coded-pool/
* Pain point: ease of use of tiering and erasure code
* Needs clarification: is erasure code beneficial to smaller users?
* Wish: more tiering and erasure code use cases
* Needs clarification: does erasure code require more work for the MONs?
* Wish: are there any plans for "glued objects", i.e. adding a bunch of small objects together into one large blob, then erasure coding that blob?
* Needs clarification: the "10 DCs" example at https://ceph.com/docs/master/dev/erasure-coded-pool/ does not show the tradeoff of this solution: to read one object, you have to read from 6 DCs!
* Needs clarification: the relationship between tiering and erasure code, because at the moment it looks like tiering is exclusively for caching (see the CLI sketch after this list)

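A minimal CLI sketch of how the two Firefly features fit together, assuming default settings and hypothetical pool/profile names (ecpool, hotpool, myprofile): the erasure-coded pool holds the bulk data and a replicated pool sits in front of it as a writeback cache tier.

<pre>
# Hypothetical erasure-code profile: k=6 data chunks, m=2 coding chunks.
# A read needs any k of the k+m chunks, so with k=6 every read touches 6 failure domains.
ceph osd erasure-code-profile set myprofile k=6 m=2

# Erasure-coded pool for the cold data (PG counts are only examples).
ceph osd pool create ecpool 128 128 erasure myprofile

# Replicated pool used as a writeback cache tier in front of the erasure-coded pool.
ceph osd pool create hotpool 128 128
ceph osd tier add ecpool hotpool
ceph osd tier cache-mode hotpool writeback
ceph osd tier set-overlay ecpool hotpool
</pre>

On Firefly an erasure-coded pool does not support partial overwrites, so a replicated cache tier in front of it is what makes workloads like RBD usable on top, which is part of why tiering currently looks like a caching-only feature.
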
h3. Using http://tracker.ceph.com/

* Wish: allow anonymous bug reports
* Pain point: http://tracker.ceph.com/account/register returns a 500 Internal Server Error (http://tracker.ceph.com/issues/7609)

h3. CephFS

* Needs clarification: when will CephFS be ready for production?
* Wish: a solid list of show-stoppers to make it production-ready
* Needs clarification: an fsck tool has yet to be developed, as have manual repair tools
* Wish: a wiki page for CephFS use cases
** store files
** web content for existing non-Ceph-aware applications
** legacy applications that need to scale out
** legacy applications that need capacity (RAID arrays can only get so large)
** backups
** a filesystem without a SPOF
** Hadoop / HDFS compatibility
** re-exporting as CIFS/NFS (see the mount sketch after this list)
** backing existing tools that use a filesystem
** distributing images to be local for hypervisor nodes in OpenStack
** SAN/NAS stuff
** HPC in the context of http://www.castep.org/
** a Lustre alternative
** reduce storage costs / replace NetApp

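Several of the use cases above (backups, re-exporting as CIFS/NFS, backing legacy tools) start from a plain mount of CephFS. A minimal sketch, where the monitor address, user name, and secret file path are placeholders:

<pre>
# Mount CephFS with the kernel client.
mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

# Or use ceph-fuse when the kernel client is not an option.
ceph-fuse -m 192.168.0.1:6789 /mnt/cephfs
</pre>

Once mounted, the filesystem can be re-exported with standard Samba or NFS tooling, which is what the CIFS/NFS use case above refers to.
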
h3. Miscellaneous

* Wish: asynchronous replication at the RADOS level
* Wish: a gzip RADOS class
* Pain point: more documentation is needed on decoding the log messages from Ceph daemons
* Pain point: explanations of all the configuration parameters (config_opts.h) are needed
* Wish: use cache tiering to say "this slow data can be compressed now"
* Pain point: the mon node frequently floods its log with the same message; the logger could aggregate identical messages
* Needs clarification: is the cache pool ready for production?
* Wish: bandwidth reservations / a guarantee that pools have a certain amount of IOPS/throughput available even if other pools are hammering the storage system
* Pain point: Samba/Netatalk on top of RBD is stable, though a bit slow; it needs more IOPS
* Pain point: Ceph 0.72 with Debian's bleeding-edge 3.14-rc7 kernel hits btrfs corruption even when a very small Ceph cluster has only 3 guest VMs running the phoronix-test-suite disk test on it
* Wish: "hierarchical near" backfilling, based on CRUSH location, e.g. 4 replicas with 2 in each rack; instead of always backfilling from the primary, backfill from an OSD in the same rack (see the CRUSH rule sketch after this list)

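A sketch of what the "4 replicas, 2 in each rack" placement could look like as a CRUSH rule; the rule name and ruleset id are made up, and the wished-for behaviour of choosing the backfill source by CRUSH location is not something a rule can express, it would need OSD-side changes.

<pre>
# Hypothetical CRUSH rule: pick 2 racks, then 2 hosts (and one OSD on each) in every rack,
# giving 4 replicas with 2 copies per rack.
rule replicated_two_per_rack {
        ruleset 1
        type replicated
        min_size 4
        max_size 4
        step take default
        step choose firstn 2 type rack
        step chooseleaf firstn 2 type host
        step emit
}
</pre>

Such a rule would be added to a decompiled CRUSH map with crushtool and injected back with ceph osd setcrushmap.
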
h3. Log