Create a Scalable and Resilient Object Gateway with Ceph and VirtualBox

Introducing the Ceph Object Gateway

Ceph is a highly reliable distributed storage system with self-healing and self-managing characteristics. One of its unique features is its unified storage interface, which supports object storage, block device storage and file system storage, all in the same Ceph cluster. Of course, it's also open source, so you can freely download and experiment with it at your leisure.
The Ceph Object Gateway provides a way to host scalable data storage "buckets", similar to those provided by Amazon Simple Storage Service and OpenStack Swift. These objects are accessible via a REST API, making them ideal for cloud-based applications, big data storage and processing, and many other use cases. And because the underlying cluster infrastructure is managed by Ceph, fault-tolerance and scalability are guaranteed.
Setting up a Ceph object gateway can be a little complex, especially if you're unfamiliar with how scalable object storage works. That's where this tutorial comes in. Over the next few pages, I'll walk you through the process of setting up a Ceph-based object gateway and adding data to it. We'll set up the cluster using VirtualBox, so you'll get a chance to see Ceph's object storage features in action in a "real" environment where you have total control, but which doesn't cost you anything to run or scale out with new nodes.
Sound good? Keep reading.

Assumptions and Requirements

For this tutorial, I'll be using VirtualBox, which provides an easy way to set up independent virtual servers, with CentOS as the operating system for the virtual servers. VirtualBox is available for Windows, Linux, Macintosh, and Solaris hosts. I'll make the following assumptions:
  • You have a working knowledge of CentOS, VirtualBox and VirtualBox networking.
  • You have downloaded and installed the latest version of VirtualBox.
  • You have either already configured 5 virtual CentOS servers, or you have downloaded an ISO installation image for the latest version of CentOS (CentOS 7.0 at the time of writing). These servers must be running kernel version 3.10 or later.
  • You're familiar with installing software using yum, the CentOS package manager.
  • You’re familiar with SSH-based authentication.
  • You're familiar with object storage in the cloud.
If you're not familiar with any of these topics, see the "Read More" section at the end of this tutorial, which has links to relevant guides.
To set up a Ceph storage cluster with VirtualBox, here are the steps you'll follow:
  1. Create cluster nodes
  2. Install the Ceph deployment toolkit
  3. Configure authentication between cluster nodes
  4. Configure and activate a cluster monitor
  5. Prepare and activate OSDs
  6. Verify cluster health
  7. Test the cluster
  8. Install the Ceph object gateway
  9. Configure the Ceph object gateway
  10. Start working with buckets and objects

The next sections will walk you through these steps in detail.

Step 1: Create Cluster Nodes

If you already have 5 virtual CentOS servers configured and talking to each other, you can skip this step. If not, you must first create the virtual servers that will make up your Ceph cluster. To do this:

1. Launch VirtualBox and use the Machine -> New menu to create a new virtual server.

2. Keeping in mind that you will need 5 virtual servers running simultaneously, calculate the available RAM on the host system and set the server memory accordingly.

3. Add a virtual hard drive of at least 10 GB.

4. Ensure that you have an IDE controller with a virtual CD/DVD drive (to enable CentOS installation) and at least two network adapters: one NAT adapter (to enable download of required software) and one bridged or internal network adapter (for internal communication between the cluster nodes).

5. Once the server basics are defined, install CentOS on the server using the ISO installation image. Ensure that the kernel version is 3.10 or later.

6. Once the installation process is complete, log in to the server and configure the second network interface with a static IP address by editing the appropriate template file in the /etc/sysconfig/network-scripts/ directory. Here's a sample of what the interface configuration might look like:

HWADDR=08:00:27:AE:14:41
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=enp0s8
UUID=5fc74119-1ab2-4c0c-9aa1-284fd484e6c6
ONBOOT=yes
IPADDR=192.168.1.25
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=192.168.1.1
DNS2=8.8.8.8

Should any of the above steps be unfamiliar to you, refer to the VirtualBox manual, especially the VirtualBox networking guide, and to the networking section of the CentOS deployment guide.
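If you prefer to script the creation of the virtual machines rather than click through the GUI, the same setup can be done with VirtualBox's VBoxManage command-line tool. The following is only a rough sketch for a single node; the VM name, memory size, disk size, ISO path and host bridge interface (eth0) are assumptions that you should adjust to your own host:

shell> VBoxManage createvm --name node1 --ostype RedHat_64 --register
shell> VBoxManage modifyvm node1 --memory 1024 --nic1 nat --nic2 bridged --bridgeadapter2 eth0
shell> VBoxManage createhd --filename node1.vdi --size 10240
shell> VBoxManage storagectl node1 --name SATA --add sata
shell> VBoxManage storageattach node1 --storagectl SATA --port 0 --device 0 --type hdd --medium node1.vdi
shell> VBoxManage storagectl node1 --name IDE --add ide
shell> VBoxManage storageattach node1 --storagectl IDE --port 0 --device 0 --type dvddrive --medium /path/to/CentOS-7-x86_64.iso

The --size value is given in megabytes, so 10240 corresponds to the 10 GB virtual hard drive recommended above.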

Repeat this process until you have 5 virtual servers. Of these, identify one as the cluster administration node and assign it the hostname admin-node. The remaining servers may be identified with hostnames such as node1, node2, and so on. Here's an example of what the final cluster might look like (note that you should obviously modify the IP addresses to match your local network settings).

Server host name    IP address      Purpose
admin-node          192.168.1.25    Administration node for cluster
node1               192.168.1.26    Monitor
node2               192.168.1.27    OSD daemon
node3               192.168.1.28    OSD daemon
node4               192.168.1.29    Object gateway host / PHP client

Before proceeding to the next step, ensure that all the servers are accessible by pinging them using their host names. If you don't have a local DNS server, add the host names and IP addresses to each server's /etc/hosts file to ease network access.
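For example, assuming the IP addresses from the table above, each server's /etc/hosts file would contain entries like these:

192.168.1.25 admin-node
192.168.1.26 node1
192.168.1.27 node2
192.168.1.28 node3
192.168.1.29 node4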

Step 2: Install the Ceph Deployment Toolkit

The next step is to install the Ceph deployment toolkit on the administration node. This toolkit will help install Ceph on the nodes in the cluster, as well as prepare and activate the cluster.
1. Log in to the administration node as the root user.
2. Add the Ceph package repository to yum by creating a new file at /etc/yum.repos.d/ceph.repo with the following content:

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-firefly/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=b...ys/release.asc

3. Update the repository.
shell> yum update
4. Install the Ceph deployment toolkit.
shell> yum install ceph-deploy

Step 3: Configure Authentication between Cluster Nodes

Now, you need to create a ceph user on each server in the cluster, including the administration node. This user account will be used to perform cluster-related operations on each node. Perform the following steps on each of the 5 virtual servers:
1. Log in as the root user.
2. Create a ceph user account.
shell> useradd ceph
shell> passwd ceph

3. Give the ceph user account root privileges with sudo.
shell> echo "ceph ALL = (root) NOPASSWD:ALL" | tee /etc/sudoers.d/ceph
shell> chmod 0440 /etc/sudoers.d/ceph

4. Disable 'requiretty' for the ceph user.
shell> sudo visudo
5. In the resulting file, locate the line containing
Defaults requiretty
and change it to read
Defaults:ceph !requiretty

Now, set up passphraseless SSH between the nodes:
1. Log in to the administration node as the ceph user.
2. Generate an SSH key for the administration node.
shell> ssh-keygen

3. Copy the generated public key to the ceph user account of all the nodes in the cluster.
shell> ssh-copy-id ceph@node1
shell> ssh-copy-id ceph@node2
shell> ssh-copy-id ceph@node3
shell> ssh-copy-id ceph@node4
shell> ssh-copy-id ceph@admin-node

4. Test that the ceph user on the administration node can log in to any other node as ceph using SSH and without providing a password.
shell> ssh ceph@node1

5. Modify the administration node's SSH configuration file so that it can easily log in to each node as the ceph user. Create the /home/ceph/.ssh/config file with the following lines:

Host node1
Hostname node1
User ceph
Host node2
Hostname node2
User ceph
Host node3
Hostname node3
User ceph
Host node4
Hostname node4
User ceph
Host admin-node
Hostname admin-node
User ceph

6. Change the permissions of the /home/ceph/.ssh/config file.
shell> chmod 0400 ~/.ssh/config
7. Test that the ceph user on the administration node can log in to any other node using SSH and without providing a password or username.
shell> ssh node1

Finally, create a directory on the administration node to store cluster information, such as configuration files and keyrings.
shell> mkdir my-cluster
shell> cd my-cluster

You're now ready to begin preparing and activating the cluster!

Step 4: Configure and Activate a Cluster Monitor

A Ceph storage cluster consists of two types of daemons:
  • Monitors maintain copies of the cluster map
  • Object Storage Daemons (OSD) store data as objects on storage nodes

Apart from this, other actors in a Ceph storage cluster include metadata servers and clients such as Ceph block devices, Ceph object gateways or Ceph filesystems. Read more about Ceph’s architecture.

All the commands in this and subsequent sections are to be run when logged in as the ceph user on the administration node, from the my-cluster/ directory. Ensure that you are directly logged in as ceph and are not using root with su - ceph.

A minimal system will have at least one monitor and two OSD daemons for data replication.
1. Begin by setting up a Ceph monitor on node1 with the Ceph deployment toolkit.
shell> ceph-deploy new node1
This will define the name of the initial monitor node and create a default Ceph configuration file and monitor keyring in the current directory.

2. Change the number of replicas in the Ceph configuration file at /home/ceph/my-cluster/ceph.conf from 3 to 2 so that Ceph can achieve a stable state with just two OSDs. Add the following lines in the [global] section:
osd pool default size = 2
osd pool default min size = 2

3. In the same file, set the OSD journal size. A good general setting is 10 GB; however, since this is a simulation, you can use a smaller amount such as 4 GB. Add the following line in the [global] section:
osd journal size = 4000
4. In the same file, set the default number of placement groups for a pool. Since we'll have fewer than 5 OSDs, 128 placement groups per pool should suffice. Add the following line in the [global] section (a consolidated example of the edited file appears at the end of this step):
osd pool default pg num = 128
5. Install Ceph on each node in the cluster, including the administration node.
shell> ceph-deploy install admin-node node1 node2 node3 node4
The Ceph deployment toolkit will now go to work installing Ceph on each node. Here's an example of what you will see during the installation process.

Create the Ceph monitor on node1 and gather the initial keys.
shell> ceph-deploy mon create-initial
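For reference, here is roughly what the [global] section of ~/my-cluster/ceph.conf should look like after the edits in steps 2 through 4. The fsid and monitor entries below are placeholders; ceph-deploy generates these (along with a few other entries, such as authentication settings) and yours will differ:

[global]
fsid = <generated by ceph-deploy>
mon initial members = node1
mon host = 192.168.1.26
osd pool default size = 2
osd pool default min size = 2
osd journal size = 4000
osd pool default pg num = 128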

Step 5: Prepare and Activate OSDs

The next step is to prepare and activate the Ceph OSDs. We'll need a minimum of 2 OSDs, and these should be set up on node2 and node3, as it's not recommended to mix monitors and OSD daemons on the same host. To begin, set up an OSD on node2 as follows:
Log into node2 as the ceph user.
shell> ssh node2
Create a directory for the OSD daemon.
shell> sudo mkdir /var/local/osd
Log out of node2. Then, from the administration node, prepare and activate the OSD.
shell> ceph-deploy osd prepare node2:/var/local/osd

shell> ceph-deploy osd activate node2:/var/local/osd

Repeat the above steps for node3.
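For clarity, the equivalent commands for node3 are:

shell> ssh node3
shell> sudo mkdir /var/local/osd
shell> exit
shell> ceph-deploy osd prepare node3:/var/local/osd
shell> ceph-deploy osd activate node3:/var/local/osd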
At this point, the OSD daemons have been created and the storage cluster is ready.

Step 6: Verify Cluster Health

Copy the configuration file and admin keyring from the administration node to all the nodes in the cluster.
shell> ceph-deploy admin admin-node node1 node2 node3 node4

Log in to each node as the ceph user and change the permissions of the admin keyring.
shell> ssh node1
shell> sudo chmod +r /etc/ceph/ceph.client.admin.keyring
You should now be able to check cluster health from any node in the cluster with the ceph status command. Ideally, you want to see the status active+clean, as that indicates the cluster is operating normally.
shell> ceph status
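If the cluster is still reporting a different state, give the OSDs a few moments to peer and check again. You can also confirm that both OSDs are up by inspecting the OSD tree:

shell> ceph osd tree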

Step 7: Test the Cluster

You can now perform a simple test to see the distributed Ceph storage cluster in action, by writing a file on one node and retrieving it on another:
Log in to node1 as the ceph user.
shell> ssh node1
Create a new file with some dummy data.
shell> echo "Hello world" > /tmp/hello.txt
Data is stored in Ceph within storage pools, which are logical groups in which to organize your data. By default, a Ceph storage cluster has 3 pools - data, metadata and rbd - and it's also possible to create your own custom pools. In this case, copy the file to the data pool with the rados put command and assign it a name.
shell> rados put hello-object /tmp/hello.txt --pool data
To verify that the Ceph storage cluster stored the object:
Log in to node2 as the ceph user.
Check that the file exists in the cluster's data storage pool with the rados ls command.
shell> rados ls --pool data
Copy the file out of the storage cluster to a local directory with the rados get command and verify its contents.
shell> rados get hello-object /tmp/hello.txt --pool data
shell> cat /tmp/hello.txt
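Optionally, once you've verified the round trip, you can remove the test object from the pool:

shell> rados rm hello-object --pool data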

Step 8: Install the Ceph Object Gateway

Now that the cluster is operating, it’s time to do something with it. First, you must install and configure an Apache Web server with FastCGI on node4, as described below.
Log into node4 as the ceph user.
shell> ssh node4
Install Apache and FastCGI from the Ceph repositories. To do this, you need to first install the yum priorities plugin, then add the repositories to your yum repository list.
shell> sudo yum install yum-plugin-priorities
Edit the /etc/yum/pluginconf.d/priorities.conf file and ensure it looks like this:
[main]
enabled = 1
Create a file at /etc/yum.repos.d/ceph-apache.repo and fill it with the following content:
[apache2-ceph-noarch]
name=Apache noarch packages for Ceph
baseurl=http://gitbuilder.ceph.com/apache2-r...sic/ref/master
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=b.../autobuild.asc
[apache2-ceph-source]
name=Apache source packages for Ceph
baseurl=http://gitbuilder.ceph.com/apache2-r...sic/ref/master
enabled=0
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=b.../autobuild.asc
Create a file at /etc/yum.repos.d/ceph-fastcgi.repo and fill it with the following content:
[fastcgi-ceph-basearch]
name=FastCGI basearch packages for Ceph
baseurl=http://gitbuilder.ceph.com/mod_fastc...sic/ref/master
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=b.../autobuild.asc
[fastcgi-ceph-noarch]
name=FastCGI noarch packages for Ceph
baseurl=http://gitbuilder.ceph.com/mod_fastc...sic/ref/master
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=b.../autobuild.asc
[fastcgi-ceph-source]
name=FastCGI source packages for Ceph
baseurl=http://gitbuilder.ceph.com/mod_fastc...sic/ref/master
enabled=0
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=b.../autobuild.asc
Update the repository and install Apache and FastCGI.
shell> sudo yum update
shell> sudo yum install httpd mod_fastcgi
Edit the /etc/httpd/conf/httpd.conf file and modify the ServerName directive to reflect the server's host name. Uncomment the line if needed.
ServerName node4
Review the files in the /etc/httpd/conf.modules.d/ directory to ensure that Apache's URL rewriting and FastCGI modules are enabled. You should find the following entries in the files:
LoadModule rewrite_module modules/mod_rewrite.so
LoadModule fastcgi_module modules/mod_fastcgi.so
In case these entries don't exist, add them to the end of the /etc/httpd/conf/httpd.conf file.
Restart Apache.
shell> sudo service httpd restart
Amazon S3 lets you refer to buckets using subdomains, such as http://mybucket.s3.amazonaws.com. You can also accomplish this with Ceph, but you must first install a local DNS server like dnsmasq and add support for wildcard subdomains. Follow these steps:
Log into node4 as the ceph user.
shell> ssh node4
Install dnsmasq.
shell> sudo yum install dnsmasq
Edit the dnsmasq configuration file at /etc/dnsmasq.conf and add the following line to the end of the file:
address=/.node4/192.168.1.29
Save the file and restart dnsmasq.
shell> sudo service dnsmasq restart
If necessary, update the /etc/resolv.conf file on the client host so that it knows about the new DNS server.
nameserver 192.168.1.29
You should now be able to successfully ping any subdomain of *.node4, such as mybucket.node4 or example.node4, as shown in the image below.

TIP: If you're not able to configure wildcard subdomains, you can also simply decide on a list of subdomains you wish to use and then add them as static entries to the client system's /etc/hosts file, as in the example below. Ensure that the entries resolve to the node4 virtual host.
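For example, if you only planned to use the buckets mybucket and myotherbucket (the names used later in this tutorial), the static entries on the client might look like this:

192.168.1.29 mybucket.node4
192.168.1.29 myotherbucket.node4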
The final step is to install radosgw on node4:
shell> ssh node4
shell> sudo yum install ceph-radosgw
At this point, you have a Web server running with the Ceph object gateway and FastCGI support, and subdomains that resolve to the object gateway host.

Step 9: Configure the Ceph Object Gateway

The next step is to configure the Ceph Object Gateway daemon. Follow these steps:
Log into the administration node as the ceph user.
shell> ssh admin-node
Create a keyring for the gateway.
shell> sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
shell> sudo chmod +r /etc/ceph/ceph.client.radosgw.keyring
Generate a user name and key to use when accessing the gateway. For this example, the user name is client.radosgw.gateway.
shell> sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.gateway --gen-key
Add read and write capabilities to the new key:
shell> sudo ceph-authtool -n client.radosgw.gateway --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
Add the new key to the storage cluster and distribute it to the object gateway node.
shell> sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.gateway -i /etc/ceph/ceph.client.radosgw.keyring
shell> sudo scp /etc/ceph/ceph.client.radosgw.keyring ceph@node4:/home/ceph
shell> ssh node4
shell> sudo mv ceph.client.radosgw.keyring /etc/ceph/ceph.client.radosgw.keyring
shell> exit
This process should also have created a number of storage pools for the gateway. You can verify this by running the following command and verifying that the output includes various .rgw pools.
shell> rados lspools

Change to your cluster configuration directory.
shell> cd ~/my-cluster
Edit the Ceph configuration file at ~/my-cluster/ceph.conf and add a new [client.radosgw.gateway] section to it, as below:
[client.radosgw.gateway]
host = node4
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
log file = /var/log/radosgw/client.radosgw.gateway.log
rgw dns name = node4
rgw print continue = false
Transmit the new Ceph configuration file to all the other nodes in the cluster.
shell> ceph-deploy config push admin-node node1 node2 node3 node4
Log into node4 as the ceph user.
shell> ssh node4
Add a Ceph object gateway script, by creating a file at /var/www/html/s3gw.fcgi with the following content:
#!/bin/sh
exec /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway
Change the permissions of the script to make it executable.
shell> sudo chmod +x /var/www/html/s3gw.fcgi
Create a data directory for the radosgw daemon.
shell> sudo mkdir -p /var/lib/ceph/radosgw/ceph-radosgw.gateway
Add a gateway configuration file, by creating a file at /etc/httpd/conf.d/rgw.conf and filling it with the following content:
FastCgiExternalServer /var/www/html/s3gw.fcgi -socket /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
<VirtualHost *:80>
ServerName node4
ServerAlias *.node4
ServerAdmin admin@localhost
DocumentRoot /var/www/html
RewriteEngine On
RewriteRule ^/(.*) /s3gw.fcgi?%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
<IfModule mod_fastcgi.c>
<Directory /var/www/html>
Options +ExecCGI
AllowOverride All
SetHandler fastcgi-script
Order allow,deny
Allow from all
AuthBasicAuthoritative Off
</Directory>
</IfModule>
AllowEncodedSlashes On
ErrorLog /var/log/httpd/error.log
CustomLog /var/log/httpd/access.log combined
ServerSignature Off
</VirtualHost>
<VirtualHost *:443>
ServerName node4
ServerAlias *.node4
ServerAdmin admin@localhost
DocumentRoot /var/www/html
RewriteEngine On
RewriteRule ^/(.*) /s3gw.fcgi?%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
<IfModule mod_fastcgi.c>
<Directory /var/www/html>
Options +ExecCGI
AllowOverride All
SetHandler fastcgi-script
Order allow,deny
Allow from all
AuthBasicAuthoritative Off
</Directory>
</IfModule>
AllowEncodedSlashes On
ErrorLog /var/log/httpd/error.log
CustomLog /var/log/httpd/access.log combined
ServerSignature Off
SSLEngine on
SSLCertificateFile /etc/apache2/ssl/apache.crt
SSLCertificateKeyFile /etc/apache2/ssl/apache.key
SetEnv SERVER_PORT_SECURE 443
</VirtualHost>
Edit the /etc/httpd/conf.d/fastcgi.conf file and ensure that the line referencing the FastCgiWrapper looks like this:
FastCgiWrapper off
Restart the Apache server, followed by the radosgw daemon.
shell> sudo service httpd restart
shell> sudo /etc/init.d/ceph-radosgw restart
You can quickly test that the object gateway is running by sending an HTTP GET request to the Web server, as shown below:
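For example, you can send an anonymous request with curl from any host that can resolve node4:

shell> curl http://node4

If the gateway is running correctly, the response should be a short ListAllMyBucketsResult XML document (an empty bucket list for the anonymous user) rather than Apache's default welcome page.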

At this point, your Ceph object gateway is running and you can begin using it.

Step 10: Start Working with Buckets and Objects

Before you can begin using the Ceph object gateway, you must create a user account.
Log in to node4 as the ceph user.
shell> ssh node4
Create a new user account using the radosgw-admin command. In this example, the user is named 'john'.
shell> radosgw-admin user create --uid=john --display-name="Example User"
Here's an example of what you should see. Note the access key and secret key in the output, as you will need this to access the object gateway from another client.

You can also verify that the user was created with the following command:
shell> radosgw-admin user info --uid=john
While you can interact with the object gateway directly over HTTP, by sending authenticated GET, PUT and DELETE requests to the gateway endpoints, an easier way is with Amazon's AWS SDK. This SDK includes classes and constructs to help you work with buckets and objects in Amazon S3. Since the Ceph object gateway is S3-compatible, you can use the same SDK to interact with it as well.
The AWS SDK is available for multiple programming languages. In the examples that follow, I'll use the AWS SDK for PHP, but you will find code examples for other languages as well on the AWS developer website.
Log in to node4 (which will now also double as the client node) as the ceph user and install PHP and related tools.
shell> sudo yum install php curl php-curl
Create a working directory for your PHP files. Download Composer, the PHP dependency manager, into this directory.
shell> cd /tmp
shell> mkdir ceph
shell> cd ceph
shell> curl -sS https://getcomposer.org/installer | php
Create a composer.json file in the working directory and fill it with the following content:
{
    "require": {
        "aws/aws-sdk-php": "2.*"
    }
}
Download the AWS SDK for PHP and related dependencies using Composer:
shell> cd /tmp/ceph
shell> php composer.phar install
You can now begin interacting with your object gateway using PHP. For example, here's a simple PHP script to create a new bucket:
<?php
// create-bucket.php
// autoload files
require 'vendor/autoload.php';
use Aws\S3\S3Client;
// instantiate S3 client
$s3 = S3Client::factory(array(
    'key' => 'YOUR_ACCESS_KEY',
    'secret' => 'YOUR_SECRET_KEY',
    'endpoint' => 'http://node4'
));
// create bucket
try {
    $s3->createBucket(array('Bucket' => 'mybucket'));
    echo "Bucket created\n";
} catch (Aws\S3\Exception\S3Exception $e) {
    echo "Request failed: $e";
}
This script begins by initializing the Composer auto-loader and an instance of the S3Client object. The object is provided with the access key and secret for the user created earlier, and a custom endpoint points to the object gateway Web server.
The S3Client object provides a number of methods to create and manage buckets and objects. One of these is the createBucket() method, which accepts a bucket name and generates the necessary PUT request to create a new bucket in the object gateway.
You can run this script at the console as follows:
shell> php create-bucket.php
Here's an example of what the output might look like:

You can also create a bucket and then add a file to it as an object, using the client object's upload() method. Here's an example:
<?php
// create-bucket-object.php
// autoload files
require 'vendor/autoload.php';
use Aws\S3\S3Client;
// instantiate S3 client
$s3 = S3Client::factory(array(
    'key' => 'YOUR_ACCESS_KEY',
    'secret' => 'YOUR_SECRET_KEY',
    'endpoint' => 'http://node4'
));
// create bucket and upload file to it
try {
    $s3->createBucket(array('Bucket' => 'myotherbucket'));
    $s3->upload('myotherbucket', 'test.tgz', file_get_contents('/tmp/test.tgz'), 'public-read');
    echo 'Bucket and object created';
} catch (Aws\S3\Exception\S3Exception $e) {
    echo "Request failed: $e";
}
Of course, you can also list all the buckets and objects available to the authenticated user with the listBuckets() and listObjects() methods:
<?php
// list-bucket-contents.php
// autoload files
require 'vendor/autoload.php';
use Aws\S3\S3Client;
// instantiate S3 client
$s3 = S3Client::factory(array(
    'key' => 'YOUR_ACCESS_KEY',
    'secret' => 'YOUR_SECRET_KEY',
    'endpoint' => 'http://node4'
));
// list all buckets and the objects they contain
try {
    $bucketsColl = $s3->listBuckets();
    foreach ($bucketsColl['Buckets'] as $bucket) {
        echo $bucket['Name'] . "\n";
        $objColl = $s3->listObjects(array('Bucket' => $bucket['Name']));
        if ($objColl['Contents']) {
            foreach ($objColl['Contents'] as $obj) {
                echo '- ' . $obj['Key'] . "\n";
            }
        }
    }
} catch (Aws\S3\Exception\S3Exception $e) {
    echo "Request failed: $e";
}
Here's an example of what the output might look like:

Of course, you can do a lot more with the AWS SDK for PHP. Refer to the reference documentation for a complete list of methods and example code.
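For instance, here is a minimal sketch of cleaning up the test buckets and object created above, using the deleteObject() and deleteBucket() methods from the same version 2 SDK (adjust the bucket and key names to whatever you actually created):

<?php
// delete-test-data.php
// autoload files
require 'vendor/autoload.php';
use Aws\S3\S3Client;
// instantiate S3 client
$s3 = S3Client::factory(array(
    'key' => 'YOUR_ACCESS_KEY',
    'secret' => 'YOUR_SECRET_KEY',
    'endpoint' => 'http://node4'
));
try {
    // delete the uploaded object first, then the now-empty buckets
    $s3->deleteObject(array('Bucket' => 'myotherbucket', 'Key' => 'test.tgz'));
    $s3->deleteBucket(array('Bucket' => 'myotherbucket'));
    $s3->deleteBucket(array('Bucket' => 'mybucket'));
    echo "Cleanup complete\n";
} catch (Aws\S3\Exception\S3Exception $e) {
    echo "Request failed: $e";
}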

Conclusion

As this tutorial has illustrated, Ceph makes it easy to set up a standards-compliant object gateway for your applications or users, with all the benefits of a resilient, infinitely scalable underlying storage cluster.
The simple object gateway you created here with VirtualBox is just the tip of the iceberg: you can transition your object gateway to the cloud and run it in federated mode across regions and zones for even greater flexibility, and because the Ceph object gateway is also Swift-compliant, you can maximize compatibility for OpenStack users without any changes to your existing infrastructure. And of course, you can also use the underlying object storage cluster for fault-tolerant Ceph block devices or the POSIX-compliant CephFS filesystem.
The bottom line: Ceph's unique architecture gives you improved performance and flexibility without any loss in reliability and security. And it's open source, so you can experiment with it, improve it and use it without worrying about vendor lock-in. You can't get any better than that!

Read More

Introduction to Ceph
Ceph Architecture
Getting Started With Ceph
Introduction to Ceph & OpenStack
Managing A Distributed Storage System At Scale
Scaling Storage With Ceph
Ceph API Documentation
