h1. 10 Commands Every Ceph Administrator Should Know

If you've just started working with Ceph, you already know there's a lot going on under the hood. To help you in your journey to becoming a Ceph master, here's a list of 10 commands every Ceph cluster administrator should know. Print it out, stick it to your wall and let it feed your Ceph mojo!

*1. Check or watch cluster health: ceph status || ceph -w*

If you want to quickly verify that your cluster is operating normally, use _ceph status_ to get a bird's-eye view of cluster status (hint: typically, you want your cluster to be _active + clean_). You can also watch cluster activity in real time with _ceph -w_; you'll typically use this when you add or remove OSDs and want to see the placement groups adjust.
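
For instance:

<pre>
ceph status    # one-shot summary of cluster health
ceph -w        # stay attached and stream cluster events in real time
</pre>
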
*2. Check cluster usage stats: ceph df*

To check a cluster’s data usage and data distribution among pools, use _ceph df_. This provides information on available and used storage space, plus a list of pools and how much storage each pool consumes. Use this often to check that your cluster is not running out of space.
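
For example:

<pre>
ceph df          # cluster-wide and per-pool usage summary
ceph df detail   # extra per-pool detail, on releases that support it
</pre>
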
*3. Check placement group stats: ceph pg dump*

When you need statistics for the placement groups in your cluster, use _ceph pg dump_. You can also get the data in JSON in case you want to use it for automated report generation.
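
For example:

<pre>
ceph pg dump                  # plain-text placement group statistics
ceph pg dump --format=json    # the same data as JSON, handy for scripting
</pre>
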
*4. View the CRUSH map: ceph osd tree*

Need to troubleshoot a cluster by quickly identifying the physical data center, room, row and rack of a failed OSD? Use _ceph osd tree_, which produces an ASCII-art CRUSH tree showing each host, its OSDs, whether they are up, and their weights.
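
For example:

<pre>
ceph osd tree    # prints the CRUSH hierarchy with each OSD's status and weight
</pre>
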
*5. Create or remove OSDs: ceph osd create || ceph osd rm*

Use _ceph osd create_ to add a new OSD to the cluster. If no UUID is given, it will be set automatically when the OSD starts up. When you need to remove an OSD from the cluster map, use _ceph osd rm_ with the OSD ID.
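
A minimal sketch; the ID 12 below is just an example, and the OSD must already be down before _ceph osd rm_ will remove it:

<pre>
ceph osd create    # allocates and prints the next free OSD ID
ceph osd rm 12     # removes osd.12 from the cluster map
</pre>
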
*6. Create or delete a storage pool: ceph osd pool create || ceph osd pool delete*

Create a new storage pool with a name and number of placement groups with _ceph osd pool create_. Remove it (and wave bye-bye to all the data in it) with _ceph osd pool delete_.
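
A sketch with assumed values: "mypool" is an example name and 128 is an example placement-group count. Note that deletion makes you repeat the pool name and pass a confirmation flag:

<pre>
ceph osd pool create mypool 128                                     # pool "mypool" with 128 PGs
ceph osd pool delete mypool mypool --yes-i-really-really-mean-it    # confirm by repeating the name
</pre>
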
*7. Repair an OSD: ceph osd repair*

Ceph is a self-repairing cluster. Tell Ceph to attempt repair of an OSD by calling _ceph osd repair_ with the OSD identifier.
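
For example, assuming the failed OSD has ID 3:

<pre>
ceph osd repair 3    # ask osd.3 to repair the placement groups it stores
</pre>
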
*8. Benchmark an OSD: ceph tell osd.N bench*

Added an awesome new storage device to your cluster? Use _ceph tell_ to see how well it performs by running a simple throughput benchmark against a single OSD (or against all OSDs at once with osd.*). By default, the test writes 1 GB in total in 4 MB increments.
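
For example, using osd.0 as a stand-in for your new device:

<pre>
ceph tell osd.0 bench    # write 1 GB in 4 MB chunks and report throughput
ceph tell osd.* bench    # benchmark every OSD in the cluster
</pre>
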
*9. Adjust an OSD’s crush weight: ceph osd crush reweight*

Ideally, you want all your OSDs to be the same in terms of throughput and capacity...but this isn't always possible. When your OSDs differ in their key attributes, use _ceph osd crush reweight_ to modify their weights in the CRUSH map so that the cluster is properly balanced and OSDs of different types receive an appropriately adjusted number of I/O requests and data.
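
A sketch with assumed values; osd.7 and the weight 2.0 are examples (by convention, an OSD's CRUSH weight roughly tracks its capacity in TB):

<pre>
ceph osd crush reweight osd.7 2.0    # set osd.7's weight in the CRUSH map to 2.0
</pre>
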
*10. List cluster keys: ceph auth list*

Ceph uses keyrings to store one or more Ceph authentication keys and capability specifications. The _ceph auth list_ command provides an easy way to keep track of keys and capabilities.
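
For example:

<pre>
ceph auth list                # every entity, key, and capability in the cluster
ceph auth get client.admin    # inspect a single entity; client.admin is the default admin key
</pre>
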