h1. Create a Scalable and Resilient Object Gateway with Ceph and VirtualBox

{{toc}}


h3. Introducing the Ceph Object Gateway


Ceph is a highly reliable distributed storage system with self-healing and self-managing characteristics. One of its unique features is its unified storage interface, which supports object storage, block device storage and file system storage in the same Ceph cluster. Of course, it's also open source, so you can freely download and experiment with it at your leisure.

The Ceph Object Gateway provides a way to host scalable data storage "buckets", similar to those provided by Amazon Simple Storage Service (Amazon S3) and OpenStack Swift. These objects are accessible via a REST API, making them ideal for cloud-based applications, big data storage and processing, and many other use cases. And because the underlying cluster infrastructure is managed by Ceph, you get fault tolerance and scalability built in.

Setting up a Ceph object gateway can be a little complex, especially if you're unfamiliar with how scalable object storage works. That's where this tutorial comes in. Over the next few pages, I'll walk you through the process of setting up a Ceph-based object gateway and adding data to it. We'll set up the cluster using VirtualBox, so you'll get a chance to see Ceph's object storage features in action in a "real" environment where you have total control, but which doesn't cost you anything to run or scale out with new nodes.

Sounds good? Keep reading.

h3. Assumptions and Requirements


For this tutorial, I'll be using VirtualBox, which provides an easy way to set up independent virtual servers, with CentOS as the operating system for the virtual servers. VirtualBox is available for Windows, Linux, Macintosh, and Solaris hosts. I'll make the following assumptions:

* You have a working knowledge of CentOS, VirtualBox and VirtualBox networking.
* You have downloaded and installed the latest version of VirtualBox.
* You have either already configured 5 virtual CentOS servers, or you have downloaded an ISO installation image for the latest version of CentOS (CentOS 7.0 at the time of writing). These servers must be running kernel version 3.10 or later.
* You're familiar with installing software using yum, the CentOS package manager.
* You're familiar with SSH-based authentication.
* You're familiar with object storage in the cloud.

In case you're not familiar with the above topics, look in the "Read More" section at the end of this tutorial, which has links to relevant guides.

To set up a Ceph storage cluster with VirtualBox, here are the steps you'll follow:

# Create cluster nodes
# Install the Ceph deployment toolkit
# Configure authentication between cluster nodes
# Configure and activate a cluster monitor
# Prepare and activate OSDs
# Verify cluster health
# Test the cluster
# Install the Ceph object gateway
# Configure the Ceph object gateway
# Start working with buckets and objects

The next sections will walk you through these steps in detail.

h3. Step 1: Create Cluster Nodes


If you already have 5 virtual CentOS servers configured and talking to each other, you can skip this step. If not, you must first create the virtual servers that will make up your Ceph cluster. To do this:

Launch VirtualBox and use the Machine -> New menu to create a new virtual server.

!image1.jpg!

Keeping in mind that you will need 5 virtual servers running simultaneously, calculate the available RAM on the host system and set the server memory accordingly.

!image2.jpg!

Add a virtual hard drive of at least 10 GB.

!image3.jpg!

Ensure that you have an IDE controller with a virtual CD/DVD drive (to enable CentOS installation) and at least two network adapters: one NAT adapter (to enable download of required software) and one bridged or internal network adapter (for internal communication between the cluster nodes).

Once the server basics are defined, install CentOS on the server using the ISO installation image. Ensure that the installed kernel version is 3.10 or later.
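
You can confirm the kernel version from a terminal on the newly installed server; it should report 3.10 or later:

shell> uname -r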

Once the installation process is complete, log in to the server and configure the second network interface with a static IP address by editing the appropriate template file in the /etc/sysconfig/network-scripts/ directory. Here's a sample of what the interface configuration might look like:

HWADDR=08:00:27:AE:14:41
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=enp0s8
UUID=5fc74119-1ab2-4c0c-9aa1-284fd484e6c6
ONBOOT=yes
IPADDR=192.168.1.25
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=192.168.1.1
DNS2=8.8.8.8
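
With ONBOOT=yes, the interface will come up automatically on future boots. To bring it up immediately without rebooting, you can activate it by hand (assuming the interface is named enp0s8, as in the sample above):

shell> sudo ifup enp0s8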

Should any of the above steps be unfamiliar to you, refer to the VirtualBox manual, especially the VirtualBox networking guide, and to the networking section of the CentOS deployment guide.

Repeat this process until you have 5 virtual servers. Of these, identify one as the cluster administration node and assign it the hostname admin-node. The remaining servers may be identified with hostnames such as node1, node2, and so on. Here's an example of what the final cluster might look like (modify the IP addresses to match your local network settings):

|_. Server host name |_. IP address |_. Purpose |
| admin-node | 192.168.1.25 | Administration node for cluster |
| node1 | 192.168.1.26 | Monitor |
| node2 | 192.168.1.27 | OSD daemon |
| node3 | 192.168.1.28 | OSD daemon |
| node4 | 192.168.1.29 | Object gateway host / PHP client |

Before proceeding to the next step, ensure that all the servers are accessible by pinging them using their host names. If you don't have a local DNS server, add the host names and IP addresses to each server's /etc/hosts file to ease network access.
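
For example, with the sample addresses above, each server's /etc/hosts file would contain entries like these:

192.168.1.25    admin-node
192.168.1.26    node1
192.168.1.27    node2
192.168.1.28    node3
192.168.1.29    node4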

h3. Step 2: Install the Ceph Deployment Toolkit


The next step is to install the Ceph deployment toolkit on the administration node. This toolkit will help install Ceph on the nodes in the cluster, as well as prepare and activate the cluster.

Log in to the administration node as the root user.

Add the Ceph repository to yum by creating a new file at /etc/yum.repos.d/ceph.repo with the following content:

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-firefly/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=b...ys/release.asc

Update the repository.

shell> yum update

Install the Ceph deployment toolkit.

shell> yum install ceph-deploy

!image4.jpg!
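
You can confirm that the toolkit was installed by checking its version number (the exact version reported will depend on what's currently in the repository):

shell> ceph-deploy --version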

h3. Step 3: Configure Authentication between Cluster Nodes


Now you need to create a ceph user on each server in the cluster, including the administration node. This user account will be used to perform cluster-related operations on each node. Perform the following steps on each of the 5 virtual servers:

Log in as the root user.

Create a ceph user account.

shell> useradd ceph
shell> passwd ceph

Give the ceph user account root privileges with sudo.

shell> echo "ceph ALL = (root) NOPASSWD:ALL" | tee /etc/sudoers.d/ceph
shell> chmod 0440 /etc/sudoers.d/ceph

Disable 'requiretty' for the ceph user.

shell> sudo visudo

In the resulting file, locate the line containing

Defaults requiretty

and change it to read

Defaults:ceph !requiretty
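
To verify that the account is set up correctly, log in as the ceph user and confirm that sudo works without prompting for a password:

shell> sudo whoami
root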

Now, set up passphraseless SSH between the nodes:

Log in to the administration node as the ceph user.

Generate an SSH key for the administration node.

shell> ssh-keygen

!image52.jpg!

Copy the generated public key to the ceph user account of all the nodes in the cluster.

shell> ssh-copy-id ceph@node1
shell> ssh-copy-id ceph@node2
shell> ssh-copy-id ceph@node3
shell> ssh-copy-id ceph@node4
shell> ssh-copy-id ceph@admin-node

!image62.jpg!

Test that the ceph user on the administration node can log in to any other node as ceph using SSH and without providing a password.

shell> ssh ceph@node1

!image72.jpg!

Modify the administration node's SSH configuration file so that it can easily log in to each node as the ceph user. Create the /home/ceph/.ssh/config file with the following lines:

Host node1
  Hostname node1
  User ceph
Host node2
  Hostname node2
  User ceph
Host node3
  Hostname node3
  User ceph
Host node4
  Hostname node4
  User ceph
Host admin-node
  Hostname admin-node
  User ceph

Change the permissions of the /home/ceph/.ssh/config file.

shell> chmod 0400 ~/.ssh/config

Test that the ceph user on the administration node can log in to any other node using SSH and without providing a password or username.

shell> ssh node1

!image82.jpg!

Finally, create a directory on the administration node to store cluster information, such as configuration files and keyrings.

shell> mkdir my-cluster
shell> cd my-cluster

You're now ready to begin preparing and activating the cluster!

h3. Step 4: Configure and Activate a Cluster Monitor


A Ceph storage cluster consists of two types of daemons:

* Monitors, which maintain copies of the cluster map
* Object Storage Daemons (OSDs), which store data as objects on storage nodes

Apart from this, other actors in a Ceph storage cluster include metadata servers and clients such as Ceph block devices, Ceph object gateways or Ceph filesystems. You can read more in the "Ceph architecture documentation":http://ceph.com/docs/master/architecture/.

All the commands in this and subsequent sections are to be run when logged in as the ceph user on the administration node, from the my-cluster/ directory. Ensure that you are directly logged in as ceph and are not using root with su - ceph.

A minimal system will have at least one monitor and two OSD daemons for data replication.

Begin by setting up a Ceph monitor on node1 with the Ceph deployment toolkit.

shell> ceph-deploy new node1

This will define the name of the initial monitor node and create a default Ceph configuration file and monitor keyring in the current directory.

!image92.jpg!

Change the number of replicas in the Ceph configuration file at /home/ceph/my-cluster/ceph.conf from 3 to 2 so that Ceph can achieve a stable state with just two OSDs. Add the following lines in the [global] section:

osd pool default size = 2
osd pool default min size = 2

In the same file, set the OSD journal size. A good general setting is 10 GB; however, since this is a simulation, you can use a smaller value such as 4 GB. The value is specified in megabytes, so add the following line in the [global] section:

osd journal size = 4000

In the same file, set the default number of placement groups for a pool. Since we'll have fewer than 5 OSDs, 128 placement groups per pool should suffice. Add the following line in the [global] section:

osd pool default pg num = 128
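
For reference, after these edits the [global] section of ceph.conf should contain your additions alongside the entries that ceph-deploy generated when you ran ceph-deploy new (the generated values below are placeholders; yours will differ):

[global]
fsid = <generated cluster id>
mon initial members = node1
mon host = 192.168.1.26
osd pool default size = 2
osd pool default min size = 2
osd journal size = 4000
osd pool default pg num = 128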

Install Ceph on each node in the cluster, including the administration node.

shell> ceph-deploy install admin-node node1 node2 node3 node4

The Ceph deployment toolkit will now go to work installing Ceph on each node. Here's an example of what you will see during the installation process.

!image102.jpg!

Create the Ceph monitor on node1 and gather the initial keys.

shell> ceph-deploy mon create-initial

!image112.jpg!

h3. Step 5: Prepare and Activate OSDs


The next step is to prepare and activate the Ceph OSDs. We'll need a minimum of 2 OSDs, and these should be set up on node2 and node3, as it's not recommended to mix monitors and OSD daemons on the same host. To begin, set up an OSD on node2 as follows:

Log into node2 as the ceph user.

shell> ssh node2

Create a directory for the OSD daemon.

shell> sudo mkdir /var/local/osd

Log out of node2. Then, from the administration node, prepare and activate the OSD.

shell> ceph-deploy osd prepare node2:/var/local/osd

!image122.jpg!

shell> ceph-deploy osd activate node2:/var/local/osd

!image132.jpg!

Repeat the above steps for node3.
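
In other words, create the OSD directory on node3 and then, from the administration node, prepare and activate it in the same way:

shell> ssh node3
shell> sudo mkdir /var/local/osd
shell> exit
shell> ceph-deploy osd prepare node3:/var/local/osd
shell> ceph-deploy osd activate node3:/var/local/osd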

At this point, the OSD daemons have been created and the storage cluster is ready.


h3. Step 6: Verify Cluster Health


Copy the configuration file and admin keyring from the administration node to all the nodes in the cluster.

shell> ceph-deploy admin admin-node node1 node2 node3 node4

!image142.jpg!

Log in to each node as the ceph user and change the permissions of the admin keyring.

shell> ssh node1
shell> sudo chmod +r /etc/ceph/ceph.client.admin.keyring

You should now be able to check cluster health from any node in the cluster with the ceph status command. Ideally, you want to see the status active+clean, as that indicates the cluster is operating normally.

shell> ceph status

!image152.jpg!
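
For a quick summary, you can also run the ceph health command, which should report HEALTH_OK once all placement groups are active and clean:

shell> ceph health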

h3. Step 7: Test the Cluster


You can now perform a simple test to see the distributed Ceph storage cluster in action, by writing a file on one node and retrieving it on another:

Log in to node1 as the ceph user.

shell> ssh node1

Create a new file with some dummy data.

shell> echo "Hello world" > /tmp/hello.txt

Data is stored in Ceph within storage pools, which are logical groups in which to organize your data. By default, a Ceph storage cluster has 3 pools - data, metadata and rbd - and it's also possible to create your own custom pools. In this case, copy the file to the data pool with the rados put command and assign it a name.

shell> rados put hello-object /tmp/hello.txt --pool data

To verify that the Ceph storage cluster stored the object:

Log in to node2 as the ceph user.

Check that the object exists in the cluster's data storage pool with the rados ls command.

shell> rados ls --pool data

Copy the object out of the storage cluster to a local file with the rados get command and verify its contents.

shell> rados get hello-object /tmp/hello.txt --pool data
shell> cat /tmp/hello.txt

!image162.jpg!
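
When you're done testing, you can remove the test object from the pool:

shell> rados rm hello-object --pool data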

h3. Step 8: Install the Ceph Object Gateway


Now that the cluster is operating, it's time to do something with it. First, you must install and configure an Apache Web server with FastCGI on node4, as described below.

Log into node4 as the ceph user.

shell> ssh node4

Install Apache and FastCGI from the Ceph repositories. To do this, you need to first install the yum priorities plugin, then add the repositories to your yum repository list.

shell> sudo yum install yum-plugin-priorities

Edit the /etc/yum/pluginconf.d/priorities.conf file and ensure it looks like this:

[main]
enabled = 1

Create a file at /etc/yum.repos.d/ceph-apache.repo and fill it with the following content:

[apache2-ceph-noarch]
name=Apache noarch packages for Ceph
baseurl=http://gitbuilder.ceph.com/apache2-r...sic/ref/master
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=b.../autobuild.asc

[apache2-ceph-source]
name=Apache source packages for Ceph
baseurl=http://gitbuilder.ceph.com/apache2-r...sic/ref/master
enabled=0
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=b.../autobuild.asc

Create a file at /etc/yum.repos.d/ceph-fastcgi.repo and fill it with the following content:

[fastcgi-ceph-basearch]
name=FastCGI basearch packages for Ceph
baseurl=http://gitbuilder.ceph.com/mod_fastc...sic/ref/master
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=b.../autobuild.asc

[fastcgi-ceph-noarch]
name=FastCGI noarch packages for Ceph
baseurl=http://gitbuilder.ceph.com/mod_fastc...sic/ref/master
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=b.../autobuild.asc

[fastcgi-ceph-source]
name=FastCGI source packages for Ceph
baseurl=http://gitbuilder.ceph.com/mod_fastc...sic/ref/master
enabled=0
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=b.../autobuild.asc

Update the repository and install Apache and FastCGI.

shell> sudo yum update
shell> sudo yum install httpd mod_fastcgi

Edit the /etc/httpd/conf/httpd.conf file and modify the ServerName directive to reflect the server's host name. Uncomment the line if needed.

ServerName node4

Review the files in the /etc/httpd/conf.modules.d/ directory to ensure that Apache's URL rewriting and FastCGI modules are enabled. You should find the following entries in the files:

LoadModule rewrite_module modules/mod_rewrite.so
LoadModule fastcgi_module modules/mod_fastcgi.so

In case these entries don't exist, add them to the end of the /etc/httpd/conf/httpd.conf file.

Restart Apache.

shell> sudo service httpd restart

Amazon S3 lets you refer to buckets using subdomains, such as http://mybucket.s3.amazonaws.com. You can also accomplish this with Ceph, but you must first install a local DNS server like dnsmasq and add support for wildcard subdomains. Follow these steps:

Log into node4 as the ceph user.

shell> ssh node4

Install dnsmasq.

shell> sudo yum install dnsmasq

Edit the dnsmasq configuration file at /etc/dnsmasq.conf and add the following line to the end of the file:

address=/.node4/192.168.1.29

Save the file and restart dnsmasq.

shell> sudo service dnsmasq restart

If necessary, update the /etc/resolv.conf file on the client host so that it knows about the new DNS server.

nameserver 192.168.1.29

You should now be able to successfully ping any subdomain of *.node4, such as mybucket.node4 or example.node4, as shown in the image below.

!image17.png!

TIP: If you're not able to configure wildcard subdomains, you can also simply decide on a list of subdomains you wish to use and then add them as static entries to the client system's /etc/hosts file, as shown below. Ensure that the entries resolve to the node4 virtual host.
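
For example, the client's /etc/hosts entries for two hypothetical buckets named mybucket and myotherbucket might look like this:

192.168.1.29    node4
192.168.1.29    mybucket.node4
192.168.1.29    myotherbucket.node4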

The final step is to install radosgw on node4:

shell> ssh node4
shell> sudo yum install ceph-radosgw

At this point, you have a Web server running with FastCGI support, the Ceph object gateway software installed, and subdomains that resolve to the object gateway host.


h3. Step 9: Configure the Ceph Object Gateway


The next step is to configure the Ceph Object Gateway daemon. Follow these steps:

Log into the administration node as the ceph user.

shell> ssh admin-node

Create a keyring for the gateway.

shell> sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
shell> sudo chmod +r /etc/ceph/ceph.client.radosgw.keyring

Generate a user name and key to use when accessing the gateway. For this example, the user name is client.radosgw.gateway.

shell> sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.gateway --gen-key

Add read and write capabilities to the new key:

shell> sudo ceph-authtool -n client.radosgw.gateway --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring

Add the new key to the storage cluster and distribute it to the object gateway node.

shell> sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.gateway -i /etc/ceph/ceph.client.radosgw.keyring
shell> sudo scp /etc/ceph/ceph.client.radosgw.keyring ceph@node4:/home/ceph
shell> ssh node4
shell> sudo mv ceph.client.radosgw.keyring /etc/ceph/ceph.client.radosgw.keyring
shell> exit

This process should also have created a number of storage pools for the gateway. You can verify this by running the following command and checking that the output includes various .rgw pools.

shell> rados lspools

!image18.png!

Change to your cluster configuration directory.

shell> cd ~/my-cluster

Edit the Ceph configuration file at ~/my-cluster/ceph.conf and add a new [client.radosgw.gateway] section to it, as below:

[client.radosgw.gateway]
host = node4
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
log file = /var/log/radosgw/client.radosgw.gateway.log
rgw dns name = node4
rgw print continue = false

Push the new Ceph configuration file to all the other nodes in the cluster.

shell> ceph-deploy config push admin-node node1 node2 node3 node4

Log into node4 as the ceph user.

shell> ssh node4

Add a Ceph object gateway script by creating a file at /var/www/html/s3gw.fcgi with the following content:

#!/bin/sh
exec /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway

Change the permissions of the script to make it executable.

shell> sudo chmod +x /var/www/html/s3gw.fcgi

Create a data directory for the radosgw daemon.

shell> sudo mkdir -p /var/lib/ceph/radosgw/ceph-radosgw.gateway

Add a gateway configuration file by creating a file at /etc/httpd/conf.d/rgw.conf and filling it with the following content:

FastCgiExternalServer /var/www/html/s3gw.fcgi -socket /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
<VirtualHost *:80>
    ServerName node4
    ServerAlias *.node4
    ServerAdmin admin@localhost
    DocumentRoot /var/www/html
    RewriteEngine On
    RewriteRule ^/(.*) /s3gw.fcgi?%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
    <IfModule mod_fastcgi.c>
        <Directory /var/www/html>
            Options +ExecCGI
            AllowOverride All
            SetHandler fastcgi-script
            Order allow,deny
            Allow from all
            AuthBasicAuthoritative Off
        </Directory>
    </IfModule>
    AllowEncodedSlashes On
    ErrorLog /var/log/httpd/error.log
    CustomLog /var/log/httpd/access.log combined
    ServerSignature Off
</VirtualHost>
<VirtualHost *:443>
    ServerName node4
    ServerAlias *.node4
    ServerAdmin admin@localhost
    DocumentRoot /var/www/html
    RewriteEngine On
    RewriteRule ^/(.*) /s3gw.fcgi?%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
    <IfModule mod_fastcgi.c>
        <Directory /var/www/html>
            Options +ExecCGI
            AllowOverride All
            SetHandler fastcgi-script
            Order allow,deny
            Allow from all
            AuthBasicAuthoritative Off
        </Directory>
    </IfModule>
    AllowEncodedSlashes On
    ErrorLog /var/log/httpd/error.log
    CustomLog /var/log/httpd/access.log combined
    ServerSignature Off
    SSLEngine on
    SSLCertificateFile /etc/apache2/ssl/apache.crt
    SSLCertificateKeyFile /etc/apache2/ssl/apache.key
    SetEnv SERVER_PORT_SECURE 443
</VirtualHost>

Edit the /etc/httpd/conf.d/fastcgi.conf file and ensure that the line referencing the FastCgiWrapper looks like this:

FastCgiWrapper off

Restart the Apache server, followed by the radosgw daemon.

shell> sudo service httpd restart
shell> sudo /etc/init.d/ceph-radosgw restart

You can quickly test that the object gateway is running by sending an HTTP GET request to the Web server, as shown below:

!image19.png!
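
For example, an unauthenticated GET request to the gateway host should return an S3-style XML response (an empty bucket listing for the anonymous user):

shell> curl http://node4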

At this point, your Ceph object gateway is running and you can begin using it.


h3. Step 10: Start Working with Buckets and Objects


Before you can begin using the Ceph object gateway, you must create a user account.

Log in to the administration node as the ceph user.

shell> ssh admin-node

Create a new user account using the radosgw-admin command. In this example, the user is named 'john'.

shell> radosgw-admin user create --uid=john --display-name="Example User"

Here's an example of what you should see. Note the access key and secret key in the output, as you will need these to access the object gateway from another client.

!image20.png!

You can also verify that the user was created with the following command:

shell> radosgw-admin user info --uid=john

While you can interact with the object gateway directly over HTTP, by sending authenticated GET, PUT and DELETE requests to the gateway endpoints, an easier way is with Amazon's AWS SDK. This SDK includes classes and constructs to help you work with buckets and objects in Amazon S3. Since the Ceph object gateway is S3-compatible, you can use the same SDK to interact with it as well.

The AWS SDK is available for multiple programming languages. In the examples that follow, I'll use the AWS SDK for PHP, but you will find code examples for other languages as well on the AWS developer website.

Log in to node4 (which will now also double as the client node) as the root user and install PHP and related tools.

shell> sudo yum install php curl php-curl

Create a working directory for your PHP files. Download Composer, the PHP dependency manager, into this directory.

shell> cd /tmp
shell> mkdir ceph
shell> cd ceph
shell> curl -sS https://getcomposer.org/installer | php

Create a composer.json file in the working directory and fill it with the following content:

{
    "require": {
        "aws/aws-sdk-php": "2.*"
    }
}

Download the AWS SDK for PHP and related dependencies using Composer:

shell> cd /tmp/ceph
shell> php composer.phar install

You can now begin interacting with your object gateway using PHP. For example, here's a simple PHP script to create a new bucket:

<?php
// create-bucket.php
// autoload files
require 'vendor/autoload.php';
use Aws\S3\S3Client;
// instantiate S3 client
$s3 = S3Client::factory(array(
        'key' => 'YOUR_ACCESS_KEY',
        'secret' => 'YOUR_SECRET_KEY',
        'endpoint' => 'http://node4'
));
// create bucket
try {
  $s3->createBucket(array('Bucket' => 'mybucket'));
  echo "Bucket created \n";
} catch (Aws\S3\Exception\S3Exception $e) {
  echo "Request failed: $e";
}

This script begins by initializing the Composer auto-loader and an instance of the S3Client object. The object is provided with the access key and secret for the user created earlier, and a custom endpoint that points to the object gateway Web server.

The S3Client object provides a number of methods to create and manage buckets and objects. One of these is the createBucket() method, which accepts a bucket name and generates the necessary PUT request to create a new bucket in the object gateway.

You can run this script at the console as follows:

shell> php create-bucket.php

Here's an example of what the output might look like:

!image21.png!

You can also create a bucket and then add a file to it as an object, using the client object's upload() method. Here's an example:

<?php
// create-bucket-object.php
// autoload files
require 'vendor/autoload.php';
use Aws\S3\S3Client;
// instantiate S3 client
$s3 = S3Client::factory(array(
        'key' => 'YOUR_ACCESS_KEY',
        'secret' => 'YOUR_SECRET_KEY',
        'endpoint' => 'http://node4'
));
// create bucket and upload file to it
try {
  $s3->createBucket(array('Bucket' => 'myotherbucket'));
  $s3->upload('myotherbucket', 'test.tgz', file_get_contents('/tmp/test.tgz'), 'public-read');
  echo 'Bucket and object created';
} catch (Aws\S3\Exception\S3Exception $e) {
  echo "Request failed: $e";
}

Of course, you can also list all the buckets and objects available to the authenticated user with the listBuckets() and listObjects() methods:

<?php
// list-bucket-contents.php
// autoload files
require 'vendor/autoload.php';
use Aws\S3\S3Client;
// instantiate S3 client
$s3 = S3Client::factory(array(
        'key' => 'YOUR_ACCESS_KEY',
        'secret' => 'YOUR_SECRET_KEY',
        'endpoint' => 'http://node4'
));
// list buckets and the objects in each one
try {
  $bucketsColl = $s3->listBuckets();
  foreach ($bucketsColl['Buckets'] as $bucket) {
    echo $bucket['Name'] . "\n";
    $objColl = $s3->listObjects(array('Bucket' => $bucket['Name']));
    if ($objColl['Contents']) {
      foreach ($objColl['Contents'] as $obj) {
        echo '- ' . $obj['Key'] . "\n";
      }
    }
  }
} catch (Aws\S3\Exception\S3Exception $e) {
  echo "Request failed: $e";
}

Here's an example of what the output might look like:

!image22.png!

Of course, you can do a lot more with the AWS SDK for PHP. Refer to the reference documentation for a complete list of methods and example code.
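
To round out the basic operations, here's a minimal sketch of downloading an object back out of a bucket with the client's getObject() method, following the same pattern as the scripts above (the bucket and key names are just examples):

<?php
// get-bucket-object.php
// autoload files
require 'vendor/autoload.php';
use Aws\S3\S3Client;
// instantiate S3 client
$s3 = S3Client::factory(array(
        'key' => 'YOUR_ACCESS_KEY',
        'secret' => 'YOUR_SECRET_KEY',
        'endpoint' => 'http://node4'
));
// download an object and save it to a local file
try {
  $s3->getObject(array(
    'Bucket' => 'myotherbucket',
    'Key' => 'test.tgz',
    'SaveAs' => '/tmp/test-downloaded.tgz'
  ));
  echo "Object downloaded \n";
} catch (Aws\S3\Exception\S3Exception $e) {
  echo "Request failed: $e";
}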

h3. Conclusion


As this tutorial has illustrated, Ceph makes it easy to set up a standards-compliant object gateway for your applications or users, with all the benefits of a resilient, highly scalable underlying storage cluster.

The simple object gateway you created here with VirtualBox is just the tip of the iceberg: you can transition your object gateway to the cloud and run it in federated mode across regions and zones for even greater flexibility, and because the Ceph object gateway is also Swift-compliant, you can maximize compatibility for OpenStack users without any changes to your existing infrastructure. And of course, you can also use the underlying object storage cluster for fault-tolerant Ceph block devices or the POSIX-compliant CephFS filesystem.

The bottom line: Ceph's unique architecture gives you improved performance and flexibility without any loss in reliability and security. And it's open source, so you can experiment with it, improve it and use it without worrying about vendor lock-in. You can't get any better than that!


h3. Read More


* "Introduction to Ceph":http://ceph.com/docs/master/start/intro/
* "Ceph Architecture":http://ceph.com/docs/master/architecture/
* "Getting Started With Ceph":http://www.inktank.com/resource/getting-started-with-ceph-miroslav-klivansky/
* "Introduction to Ceph & OpenStack":http://www.inktank.com/resource/introduction-to-ceph-openstack-miroslav-klivansky/
* "Managing A Distributed Storage System At Scale":http://www.inktank.com/resource/managing-a-distributed-storage-system-at-scale-sage-weil/
* "Scaling Storage With Ceph":http://www.inktank.com/resource/scaling-storage-with-ceph-ross-turk/
* "Ceph API Documentation":http://ceph.com/docs/master/api/