h1. Create a Scalable and Resilient Object Gateway with Ceph and VirtualBox
{{toc}}
h3. Introducing the Ceph Object Gateway
Ceph is a highly reliable distributed storage system, with self-healing and self-managing characteristics. One of its unique strengths is its unified storage interface, which supports object storage, block device storage and file system storage all in the same Ceph cluster. Of course, it's also open source, so you can freely download and experiment with it at your leisure.
The Ceph Object Gateway provides a way to host scalable data storage "buckets", similar to those provided by Amazon Simple Storage Service (Amazon S3) and OpenStack Swift. These objects are accessible via a REST API, making them ideal for cloud-based applications, big data storage and processing, and many other use cases. And because the underlying cluster infrastructure is managed by Ceph, fault-tolerance and scalability are guaranteed.
Setting up a Ceph object gateway can be a little complex, especially if you're unfamiliar with how scalable object storage works. That's where this tutorial comes in. Over the next few pages, I'll walk you through the process of setting up a Ceph-based object gateway and adding data to it. We'll set up the cluster using VirtualBox, so you'll get a chance to see Ceph's object storage features in action in a "real" environment where you have total control, but which doesn't cost you anything to run or scale out with new nodes.
Sounds good? Keep reading.
h3. Assumptions and Requirements
For this tutorial, I'll be using VirtualBox, which provides an easy way to set up independent virtual servers, with CentOS as the operating system for those servers. VirtualBox is available for Windows, Linux, Macintosh, and Solaris hosts. I'll make the following assumptions:
You have a working knowledge of CentOS, VirtualBox and VirtualBox networking.
You have downloaded and installed the latest version of VirtualBox.
You have either already configured 5 virtual CentOS servers, or you have downloaded an ISO installation image for the latest version of CentOS (CentOS 7.0 at the time of writing). These servers must be running kernel version 3.10 or later.
You're familiar with installing software using yum, the CentOS package manager.
You’re familiar with SSH-based authentication.
You're familiar with object storage in the cloud.
In case you’re not familiar with the above topics, look in the “Read More” section at the end of this tutorial, which has links to relevant guides.
To set up a Ceph storage cluster with VirtualBox, here are the steps you'll follow:
Create cluster nodes
Install the Ceph deployment toolkit
Configure authentication between cluster nodes
Configure and activate a cluster monitor
Prepare and activate OSDs
Verify cluster health
Test the cluster
Install the Ceph object gateway
Configure the Ceph object gateway
Start working with buckets and objects
The next sections will walk you through these steps in detail.
h3. Step 1: Create Cluster Nodes
If you already have 5 virtual CentOS servers configured and talking to each other, you can skip this step. If not, you must first create the virtual servers that will make up your Ceph cluster. To do this:
Launch VirtualBox and use the Machine -> New menu to create a new virtual server.
image1.jpg
Keeping in mind that you will need 5 virtual servers running simultaneously, calculate the available RAM on the host system and set the server memory accordingly.
image2.jpg
Add a virtual hard drive of at least 10 GB.
image3.jpg
Ensure that you have an IDE controller with a virtual CD/DVD drive (to enable CentOS installation) and at least two network adapters, one NAT (to enable download of required software) and one bridged adapter or internal network adapter (for internal communication between the cluster nodes).
Once the server basics are defined, install CentOS on the server using the ISO installation image. Ensure that the installed kernel version is 3.10 or later.
Once the installation process is complete, log in to the server and configure the second network interface with a static IP address, by editing the appropriate template file in the /etc/sysconfig/network-scripts/ directory. Here's a sample of what the interface configuration might look like:
HWADDR=08:00:27:AE:14:41
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=enp0s8
UUID=5fc74119-1ab2-4c0c-9aa1-284fd484e6c6
ONBOOT=yes
IPADDR=192.168.1.25
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=192.168.1.1
DNS2=8.8.8.8
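After saving the file, bring the interface up so that the new address takes effect. Assuming the interface name from the sample above (enp0s8), either of the following should work on CentOS 7:
shell> sudo ifup enp0s8
shell> sudo systemctl restart network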
Should any of the above steps be unfamiliar to you, refer to the VirtualBox manual, especially the VirtualBox networking guide, and to the networking section of the CentOS deployment guide.
Repeat this process until you have 5 virtual servers. Of these, identify one as the cluster administration node and assign it the hostname admin-node. The remaining servers may be identified with hostnames such as node1, node2, and so on. Here's an example of what the final cluster might look like (note that you should obviously modify the IP addresses to match your local network settings).
|_. Server host name |_. IP address |_. Purpose |
| admin-node | 192.168.1.25 | Administration node for cluster |
| node1 | 192.168.1.26 | Monitor |
| node2 | 192.168.1.27 | OSD daemon |
| node3 | 192.168.1.28 | OSD daemon |
| node4 | 192.168.1.29 | Object gateway host / PHP client |
Before proceeding to the next step, ensure that all the servers are accessible by pinging them using their host names. If you don't have a local DNS server, add the host names and IP addresses to each server's /etc/hosts file to ease network access.
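For example, assuming the addresses used in the table above, you might append the following entries to /etc/hosts on every server and then confirm connectivity with a quick ping:
192.168.1.25    admin-node
192.168.1.26    node1
192.168.1.27    node2
192.168.1.28    node3
192.168.1.29    node4
shell> ping -c 3 node1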
h3. Step 2: Install the Ceph Deployment Toolkit
The next step is to install the Ceph deployment toolkit on the administration node. This toolkit will help install Ceph on the nodes in the cluster, as well as prepare and activate the cluster.
Log in to the administration node as the root user.
Add the Ceph package repository to yum by creating a new file at /etc/yum.repos.d/ceph.repo with the following content:
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-firefly/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=b...ys/release.asc
Update the repository.
shell> yum update
Install the Ceph deployment toolkit.
shell> yum install ceph-deploy
image4.jpg
h3. Step 3: Configure Authentication between Cluster Nodes
Now, you need to create a ceph user on each server in the cluster, including the administration node. This user account will be used to perform cluster-related operations on each node. Perform the following steps on each of the 5 virtual servers:
Log in as the root user.
Create a ceph user account.
shell> useradd ceph
shell> passwd ceph
Give the ceph user account root privileges with sudo.
shell> echo "ceph ALL = (root) NOPASSWD:ALL" | tee /etc/sudoers.d/ceph
shell> chmod 0440 /etc/sudoers.d/ceph
Disable 'requiretty' for the ceph user.
shell> sudo visudo
In the resulting file, locate the line containing
Defaults requiretty
and change it to read
Defaults:ceph !requiretty
Now, set up passphraseless SSH between the nodes:
Log in to the administration node as the ceph user.
Generate an SSH key for the administration node.
shell> ssh-keygen
image5.jpg
Copy the generated public key to the ceph user account of all the nodes in the cluster.
shell> ssh-copy-id ceph@node1
shell> ssh-copy-id ceph@node2
shell> ssh-copy-id ceph@node3
shell> ssh-copy-id ceph@node4
shell> ssh-copy-id ceph@admin-node
image6.jpg
Test that the ceph user on the administration node can log in to any other node as ceph using SSH and without providing a password.
shell> ssh ceph@node1
image7.jpg
Modify the administration node's SSH configuration file so that it can easily log in to each node as the ceph user. Create the /home/ceph/.ssh/config file with the following lines:
Host node1
  Hostname node1
  User ceph
Host node2
  Hostname node2
  User ceph
Host node3
  Hostname node3
  User ceph
Host node4
  Hostname node4
  User ceph
Host admin-node
  Hostname admin-node
  User ceph
Change the permissions of the /home/ceph/.ssh/config file.
shell> chmod 0400 ~/.ssh/config
Test that the ceph user on the administration node can log in to any other node using SSH and without providing a password or username.
shell> ssh node1
image8.jpg
Finally, create a directory on the administration node to store cluster information, such as configuration files and keyrings.
shell> mkdir my-cluster
shell> cd my-cluster
You're now ready to begin preparing and activating the cluster!
h3. Step 4: Configure and Activate a Cluster Monitor
A Ceph storage cluster consists of two types of daemons:
Monitors maintain copies of the cluster map
Object Storage Daemons (OSD) store data as objects on storage nodes
Apart from this, other actors in a Ceph storage cluster include metadata servers and clients such as Ceph block devices, Ceph object gateways or Ceph filesystems. Read more about Ceph’s architecture.
All the commands in this and subsequent sections are to be run when logged in as the ceph user on the administration node, from the my-cluster/ directory. Ensure that you are directly logged in as ceph and are not using root with su - ceph.
A minimal system will have at least one monitor and two OSD daemons for data replication.
Begin by setting up a Ceph monitor on node1 with the Ceph deployment toolkit.
shell> ceph-deploy new node1
This will define the name of the initial monitor node and create a default Ceph configuration file and monitor keyring in the current directory.
image9.jpg
Change the number of replicas in the Ceph configuration file at /home/ceph/my-cluster/ceph.conf from 3 to 2 so that Ceph can achieve a stable state with just two OSDs. Add the following lines in the [global] section:
osd pool default size = 2
osd pool default min size = 2
In the same file, set the OSD journal size. A good general setting is 10 GB; however, since this is a simulation, you can use a smaller amount such as 4 GB. Add the following line in the [global] section:
osd journal size = 4000
In the same file, set the default number of placement groups for a pool. Since we'll have fewer than 5 OSDs, 128 placement groups per pool should suffice. Add the following line in the [global] section:
osd pool default pg num = 128
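Taken together, the end of the [global] section of ceph.conf should now look something like the sketch below. Leave the entries that ceph-deploy generated (fsid, the monitor entries and so on) exactly as they are; only the four osd lines are added by hand.
[global]
# ... entries generated by ceph-deploy (fsid, mon initial members, mon host, ...) ...
osd pool default size = 2
osd pool default min size = 2
osd journal size = 4000
osd pool default pg num = 128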
Install Ceph on each node in the cluster, including the administration node.
shell> ceph-deploy install admin-node node1 node2 node3 node4
The Ceph deployment toolkit will now go to work installing Ceph on each node. Here's an example of what you will see during the installation process.
image10.jpg
Create the Ceph monitor on node1 and gather the initial keys.
shell> ceph-deploy mon create-initial
image11.jpg
h3. Step 5: Prepare and Activate OSDs
The next step is to prepare and activate Ceph OSDs. We'll need a minimum of 2 OSDs, and these should be set up on node2 and node3, as it's not recommended to mix monitors and OSD daemons on the same host. To begin, set up an OSD on node2 as follows:
Log into node2 as the ceph user.
shell> ssh node2
Create a directory for the OSD daemon.
shell> sudo mkdir /var/local/osd
Log out of node2. Then, from the administrative node, prepare and activate the OSD.
shell> ceph-deploy osd prepare node2:/var/local/osd
image12.jpg
shell> ceph-deploy osd activate node2:/var/local/osd
image13.jpg
Repeat the above steps for node3.
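For reference, the node3 pass mirrors the commands above: create the directory on node3, then run the prepare and activate steps from the administration node.
shell> ssh node3
shell> sudo mkdir /var/local/osd
shell> exit
shell> ceph-deploy osd prepare node3:/var/local/osd
shell> ceph-deploy osd activate node3:/var/local/osd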
At this point, the OSD daemons have been created and the storage cluster is ready.
h3. Step 6: Verify Cluster Health
Copy the configuration file and admin keyring from the administration node to all the nodes in the cluster.
shell> ceph-deploy admin admin-node node1 node2 node3 node4
image14.jpg
Log in to each node as the ceph user and change the permissions of the admin keyring.
shell> ssh node1
shell> sudo chmod +r /etc/ceph/ceph.client.admin.keyring
You should now be able to check cluster health from any node in the cluster with the ceph status command. Ideally, you want to see the status active+clean, as that indicates the cluster is operating normally.
shell> ceph status
image15.jpg
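If the status isn't what you expect, two other standard commands are useful for a closer look; both can be run from any node that has the admin keyring:
shell> ceph health
shell> ceph osd tree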
h3. Step 7: Test the Cluster
You can now perform a simple test to see the distributed Ceph storage cluster in action, by writing a file on one node and retrieving it on another:
Log in to node1 as the ceph user.
shell> ssh node1
Create a new file with some dummy data.
shell> echo "Hello world" > /tmp/hello.txt
Data is stored in Ceph within storage pools, which are logical groups in which to organize your data. By default, a Ceph storage cluster has 3 pools - data, metadata and rbd - and it's also possible to create your own custom pools. In this case, copy the file to the data pool with the rados put command and assign it a name.
shell> rados put hello-object /tmp/hello.txt --pool data
To verify that the Ceph storage cluster stored the object:
Log in to node2 as the ceph user.
Check that the file exists in the cluster's data storage pool with the rados ls command.
shell> rados ls --pool data
Copy the file out of the storage cluster to a local directory with the rados get command and verify its contents.
shell> rados get hello-object /tmp/hello.txt --pool data
shell> cat /tmp/hello.txt
image16.jpg
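Once you've confirmed the round trip works, you can remove the test object from the pool if you like:
shell> rados rm hello-object --pool data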
h3. Step 8: Install the Ceph Object Gateway
Now that the cluster is operating, it’s time to do something with it. First, you must install and configure an Apache Web server with FastCGI on node4, as described below.
Log into node4 as the ceph user.
shell> ssh node4
Install Apache and FastCGI from the Ceph repositories. To do this, you need to first install the yum priorities plugin, then add the repositories to your yum repository list.
shell> sudo yum install yum-plugin-priorities
Edit the /etc/yum/pluginconf.d/priorities.conf file and ensure it looks like this:
[main]
enabled = 1
Create a file at /etc/yum.repos.d/ceph-apache.repo and fill it with the following content:
[apache2-ceph-noarch]
name=Apache noarch packages for Ceph
baseurl=http://gitbuilder.ceph.com/apache2-r...sic/ref/master
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=b.../autobuild.asc
[apache2-ceph-source]
name=Apache source packages for Ceph
baseurl=http://gitbuilder.ceph.com/apache2-r...sic/ref/master
enabled=0
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=b.../autobuild.asc
Create a file at /etc/yum.repos.d/ceph-fastcgi.repo and fill it with the following content:
[fastcgi-ceph-basearch]
name=FastCGI basearch packages for Ceph
baseurl=http://gitbuilder.ceph.com/mod_fastc...sic/ref/master
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=b.../autobuild.asc
[fastcgi-ceph-noarch]
name=FastCGI noarch packages for Ceph
baseurl=http://gitbuilder.ceph.com/mod_fastc...sic/ref/master
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=b.../autobuild.asc
[fastcgi-ceph-source]
name=FastCGI source packages for Ceph
baseurl=http://gitbuilder.ceph.com/mod_fastc...sic/ref/master
enabled=0
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=b.../autobuild.asc
Update the repository and install Apache and FastCGI.
shell> sudo yum update
shell> sudo yum install httpd mod_fastcgi
Edit the /etc/httpd/conf/httpd.conf file and modify the ServerName directive to reflect the server's host name. Uncomment the line if needed.
ServerName node4
Review the files in the /etc/httpd/conf.modules.d/* directory to ensure that Apache's URL rewriting and FastCGI modules are enabled. You should find the following entries in the files:
LoadModule rewrite_module modules/mod_rewrite.so
LoadModule fastcgi_module modules/mod_fastcgi.so
In case these entries don't exist, add them to the end of the /etc/httpd/conf/httpd.conf file.
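To double-check that both modules will actually be loaded, you can ask Apache to list the modules it finds in its configuration:
shell> sudo httpd -M | grep -E 'rewrite|fastcgi'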
Restart Apache.
shell> sudo service httpd restart
Amazon S3 lets you refer to buckets using subdomains, such as http://mybucket.s3.amazonaws.com. You can also accomplish this with Ceph, but you must first install a local DNS server like dnsmasq and add support for wildcard subdomains. Follow these steps:
Log into node4 as the ceph user.
shell> ssh node4
Install dnsmasq.
shell> sudo yum install dnsmasq
Edit the dnsmasq configuration file at /etc/dnsmasq.conf and add the following line to the end of the file:
address=/.node4/192.168.1.29
Save the file and restart dnsmasq.
shell> sudo service dnsmasq restart
If necessary, update the /etc/resolv.conf file on the client host so that it knows about the new DNS server.
nameserver 192.168.1.29
You should now be able to successfully ping any subdomain of *.node4, such as mybucket.node4 or example.node4, as shown in the image below.
image17.png
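For example, from the client you might run the following (the subdomain label is arbitrary; anything under .node4 should resolve):
shell> ping -c 3 mybucket.node4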
TIP: If you're not able to configure wildcard subdomains, you can also simply decide a list of subdomains you wish to use and then add them as static entries to the client system's /etc/hosts file. Ensure that the entries resolve to the node4 virtual host.
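For example, if you plan to use the bucket names from the PHP examples later in this tutorial, the client's /etc/hosts entries might look like this:
192.168.1.29    mybucket.node4
192.168.1.29    myotherbucket.node4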
The final step is to install radosgw on node4:
shell> ssh node4
shell> sudo yum install ceph-radosgw
At this point, you have a Web server running with FastCGI support, the Ceph object gateway software installed, and subdomains that resolve to the object gateway host.
h3. Step 9: Configure the Ceph Object Gateway
The next step is to configure the Ceph Object Gateway daemon. Follow these steps:
Log into the administration node as the ceph user.
shell> ssh admin-node
Create a keyring for the gateway.
shell> sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
shell> sudo chmod +r /etc/ceph/ceph.client.radosgw.keyring
Generate a user name and key to use when accessing the gateway. For this example, the user name is client.radosgw.gateway.
shell> sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n  client.radosgw.gateway --gen-key
Add read and write capabilities to the new key:
shell> sudo ceph-authtool -n  client.radosgw.gateway --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
Add the new key to the storage cluster and distribute it to the object gateway node.
shell> sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.gateway -i /etc/ceph/ceph.client.radosgw.keyring
shell> sudo scp /etc/ceph/ceph.client.radosgw.keyring  ceph@node4:/home/ceph
shell> ssh node4
shell> sudo mv ceph.client.radosgw.keyring /etc/ceph/ceph.client.radosgw.keyring
shell> exit
This process should also have created a number of storage pools for the gateway. You can verify this by running the following command and verifying that the output includes various .rgw pools.
shell> rados lspools
image18.png
Change to your cluster configuration directory.
shell> cd ~/my-cluster
Edit the Ceph configuration file at ~/my-cluster/ceph.conf and add a new [client.radosgw.gateway] section to it, as below:
[client.radosgw.gateway]
host = node4
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
log file = /var/log/radosgw/client.radosgw.gateway.log
rgw dns name = node4
rgw print continue = false
Transmit the new Ceph configuration file to all the other nodes in the cluster.
shell> ceph-deploy config push admin-node node1 node2 node3 node4
Log into node4 as the ceph user.
shell> ssh node4
Add a Ceph object gateway script, by creating a file at /var/www/html/s3gw.fcgi with the following content:
#!/bin/sh
exec /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway
Change the permissions of the script to make it executable.
shell> sudo chmod +x /var/www/html/s3gw.fcgi
Create a data directory for the radosgw daemon.
shell> sudo mkdir -p /var/lib/ceph/radosgw/ceph-radosgw.gateway
Add a gateway configuration file, by creating a file at /etc/httpd/conf.d/rgw.conf and filling it with the following content:
FastCgiExternalServer /var/www/html/s3gw.fcgi -socket /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
<VirtualHost *:80>
    ServerName node4
    ServerAlias *.node4
    ServerAdmin admin@localhost
    DocumentRoot /var/www/html
    RewriteEngine On
    RewriteRule  ^/(.*) /s3gw.fcgi?%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
    <IfModule mod_fastcgi.c>
        <Directory /var/www/html>
            Options +ExecCGI
            AllowOverride All
            SetHandler fastcgi-script
            Order allow,deny
            Allow from all
            AuthBasicAuthoritative Off
        </Directory>
    </IfModule>
    AllowEncodedSlashes On
    ErrorLog /var/log/httpd/error.log
    CustomLog /var/log/httpd/access.log combined
    ServerSignature Off
</VirtualHost>
<VirtualHost *:443>
    ServerName node4
    ServerAlias *.node4
    ServerAdmin admin@localhost
    DocumentRoot /var/www/html
    RewriteEngine On
    RewriteRule  ^/(.*) /s3gw.fcgi?%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
    <IfModule mod_fastcgi.c>
        <Directory /var/www/html>
            Options +ExecCGI
            AllowOverride All
            SetHandler fastcgi-script
            Order allow,deny
            Allow from all
            AuthBasicAuthoritative Off
        </Directory>
    </IfModule>
    AllowEncodedSlashes On
    ErrorLog /var/log/httpd/error.log
    CustomLog /var/log/httpd/access.log combined
    ServerSignature Off
    SSLEngine on
    SSLCertificateFile /etc/apache2/ssl/apache.crt
    SSLCertificateKeyFile /etc/apache2/ssl/apache.key
    SetEnv SERVER_PORT_SECURE 443
</VirtualHost>
Edit the /etc/httpd/conf.d/fastcgi.conf file and ensure that the line referencing the FastCgiWrapper looks like this:
FastCgiWrapper off
Restart the Apache server, followed by the radosgw daemon.
shell> sudo service httpd restart
shell> sudo /etc/init.d/ceph-radosgw restart
You can quickly test that the object gateway is running by sending an HTTP GET request to the Web server, as shown below:
image19.png
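If you prefer the command line, a simple unauthenticated request from the client should also work; a healthy gateway typically answers with an S3-style XML document (an empty ListAllMyBucketsResult for the anonymous user):
shell> curl -i http://node4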
At this point, your Ceph object gateway is running and you can begin using it.
h3. Step 10: Start Working with Buckets and Objects
Before you can begin using the Ceph object gateway, you must create a user account.
Log in to the administration node as the ceph user.
shell> ssh admin-node
Create a new user account using the radosgw-admin command. In this example, the user is named 'john'.
shell> radosgw-admin user create --uid=john --display-name="Example User"
Here's an example of what you should see. Note the access key and secret key in the output, as you will need them to access the object gateway from another client.
image20.png
You can also verify that the user was created with the following command:
shell> radosgw-admin user info --uid=john
While you can interact with the object gateway directly over HTTP, by sending authenticated GET, PUT and DELETE requests to the gateway endpoints, an easier way is with Amazon's AWS SDK. This SDK includes classes and constructs to help you work with buckets and objects in Amazon S3. Since the Ceph object gateway is S3-compatible, you can use the same SDK to interact with it as well.
The AWS SDK is available for multiple programming languages. In the examples that follow, I'll use the AWS SDK for PHP, but you will find code examples for other languages as well on the AWS developer website.
Log in to node4 (which will now also double as the client node) as the root user and install PHP and related tools.
shell> sudo yum install php curl php-curl
Create a working directory for your PHP files. Download Composer, the PHP dependency manager, into this directory.
shell> cd /tmp
shell> mkdir ceph
shell> cd ceph
shell> curl -sS https://getcomposer.org/installer | php
Create a composer.json file in the working directory and fill it with the following content:
{
    "require": {
        "aws/aws-sdk-php": "2.*"
    }
}
Download the AWS SDK for PHP and related dependencies using Composer:
shell> cd /tmp/ceph
shell> php composer.phar install
You can now begin interacting with your object gateway using PHP. For example, here's a simple PHP script to create a new bucket:
<?php
// create-bucket.php
// autoload files
require 'vendor/autoload.php';
use Aws\S3\S3Client;
// instantiate S3 client
$s3 = S3Client::factory(array(
        'key' => 'YOUR_ACCESS_KEY',
        'secret' => 'YOUR_SECRET_KEY',
        'endpoint' => 'http://node4'
));
// create bucket
try {
  $s3->createBucket(array('Bucket' => 'mybucket'));
  echo "Bucket created \n";
} catch (Aws\S3\Exception\S3Exception $e) {
  echo "Request failed: $e";
}
This script begins by initializing the Composer auto-loader and an instance of the S3Client object. The object is provided with the access key and secret for the user created earlier, and a custom endpoint points to the object gateway Web server.
The S3Client object provides a number of methods to create and manage buckets and objects. One of these is the createBucket() method, which accepts a bucket name and generates the necessary PUT request to create a new bucket in the object gateway.
You can run this script at the console as follows:
shell> php create-bucket.php
Here's an example of what the output might look like:
image21.png
You can also create a bucket and then add a file to it as an object, using the client object's upload() method. Here's an example:
<?php
// create-bucket-object.php
// autoload files
require 'vendor/autoload.php';
use Aws\S3\S3Client;
// instantiate S3 client
$s3 = S3Client::factory(array(
        'key' => 'YOUR_ACCESS_KEY',
        'secret' => 'YOUR_SECRET_KEY',
        'endpoint' => 'http://node4'
));
// create bucket and upload file to it
try {
  $s3->createBucket(array('Bucket' => 'myotherbucket'));
  $s3->upload('myotherbucket', 'test.tgz', file_get_contents('/tmp/test.tgz'), 'public-read');
  echo 'Bucket and object created';
} catch (Aws\S3\Exception\S3Exception $e) {
  echo "Request failed: $e";
}
Of course, you can also list all the buckets and objects available to the authenticated user with the listBuckets() and listObjects() methods:
<?php
// list-bucket-contents.php
// autoload files
require 'vendor/autoload.php';
use Aws\S3\S3Client;
// instantiate S3 client
$s3 = S3Client::factory(array(
        'key' => 'YOUR_ACCESS_KEY',
        'secret' => 'YOUR_SECRET_KEY',
        'endpoint' => 'http://node4'
));
// list buckets and the objects in each one
try {
  $bucketsColl = $s3->listBuckets();
  foreach ($bucketsColl['Buckets'] as $bucket) {
    echo $bucket['Name'] . "\n";
    $objColl = $s3->listObjects(array('Bucket' => $bucket['Name']));
    if ($objColl['Contents']) {
      foreach ($objColl['Contents'] as $obj) {
        echo '- ' . $obj['Key'] . "\n";
      }
    }
  }
} catch (Aws\S3\Exception\S3Exception $e) {
  echo "Request failed: $e";
}
Here's an example of what the output might look like:
image22.png
Of course, you can do a lot more with the AWS SDK for PHP. Refer to the reference documentation for a complete list of methods and example code.
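For instance, here is a minimal sketch, modelled on the scripts above, that downloads the object uploaded earlier back to a local file; the bucket and key names are simply the ones used in the upload example, so adjust them to whatever you created:
<?php
// get-object.php
// autoload files
require 'vendor/autoload.php';
use Aws\S3\S3Client;
// instantiate S3 client
$s3 = S3Client::factory(array(
        'key' => 'YOUR_ACCESS_KEY',
        'secret' => 'YOUR_SECRET_KEY',
        'endpoint' => 'http://node4'
));
// download an object and save it to a local file
try {
  $s3->getObject(array(
    'Bucket' => 'myotherbucket',
    'Key'    => 'test.tgz',
    'SaveAs' => '/tmp/test-copy.tgz'
  ));
  echo "Object downloaded \n";
} catch (Aws\S3\Exception\S3Exception $e) {
  echo "Request failed: $e";
}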
h3. Conclusion
As this tutorial has illustrated, Ceph makes it easy to set up a standards-compliant object gateway for your applications or users, with all the benefits of a resilient, infinitely scalable underlying storage cluster.
The simple object gateway you created here with VirtualBox is just the tip of the iceberg: you can transition your object gateway to the cloud and run it in federated mode across regions and zones for even greater flexibility, and because the Ceph object gateway is also Swift-compliant, you can maximize compatibility for OpenStack users without any changes to your existing infrastructure. And of course, you can also use the underlying object storage cluster for fault-tolerant Ceph block devices or the POSIX-compliant CephFS filesystem.
The bottom line: Ceph's unique architecture gives you improved performance and flexibility without any loss in reliability and security. And it's open source, so you can experiment with it, improve it and use it without worrying about vendor lock-in. You can't get any better than that!
h3. Read More
"Introduction to Ceph":http://ceph.com/docs/master/start/intro/
"Ceph Architecture":http://ceph.com/docs/master/architecture/
"Getting Started With Ceph":http://www.inktank.com/resource/getting-started-with-ceph-miroslav-klivansky/
"Introduction to Ceph & OpenStack":http://www.inktank.com/resource/introduction-to-ceph-openstack-miroslav-klivansky/    
"Managing A Distributed Storage System At Scale":http://www.inktank.com/resource/managing-a-distributed-storage-system-at-scale-sage-weil/
"Scaling Storage With Ceph":http://www.inktank.com/resource/scaling-storage-with-ceph-ross-turk/
"Ceph API Documentation":http://ceph.com/docs/master/api/