Running Hadoop on Amazon EC2

Amazon EC2 (Elastic Compute Cloud) is a computing service: you allocate a set of hosts, run your application on them, and de-allocate the hosts when you are done. Billing is hourly per host. EC2 thus lets you deploy Hadoop on a cluster without having to own and operate that cluster; instead, you rent it by the hour.

If you run Hadoop on EC2 you might consider using AmazonS3 for accessing job data (data transfer between S3 and EC2 instances is free). Initial input can be read from S3 when a cluster is launched, and the final output can be written back to S3 before the cluster is decommissioned. Intermediate, temporary data, needed only between MapReduce passes, is more efficiently stored in Hadoop's DFS. See AmazonS3 for more details.
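For example, input stored in S3 can be copied into HDFS once the cluster is up, and the results copied back out before shutdown. A hedged sketch using distcp and the s3n filesystem (the bucket name and paths here are placeholders, and your AWS credentials must already be configured in Hadoop's site configuration):

```shell
# Sketch: stage S3 job data through HDFS (bucket and paths are placeholders).
# Copy the initial input from S3 into HDFS.
bin/hadoop distcp s3n://mybucket/input /user/root/input

# ... run your MapReduce job, writing to /user/root/output ...

# Copy the final output back to S3 before terminating the cluster.
bin/hadoop distcp /user/root/output s3n://mybucket/output
```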

This document assumes that you have already followed the steps in Amazon's Getting Started Guide. In particular, you should have run through the sections "Setting up an Account", "Setting up the Tools" and the "Generating a Keypair" section of "Running an Instance".

Note that the older, manual step-by-step guide to getting Hadoop running on EC2 can be found here.

Version 0.17 of Hadoop includes a few changes that provide support for multiple simultaneous clusters, quicker startup times for large clusters, and includes a pre-configured Ganglia installation. These differences are noted below.

Note: Cloudera also provides their distribution for Hadoop as an EC2 AMI with single command deployment and support for Hive/Pig out of the box.

In this document, command lines that start with '#' are executed on an Amazon instance, while command lines starting with a '%' are executed on your workstation.


Clusters of Hadoop instances are created in a security group. Instances within the group have unfettered access to one another. Machines outside the group (such as your workstation) can only access instances on port 22 (for SSH), port 50030 (for the JobTracker's web interface, permitting one to view job status), and port 50060 (for the TaskTracker's web interface, for more detailed debugging).
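If you ever need to set up such a group by hand rather than via the scripts, the access rules above can be granted with the EC2 API tools; a sketch (the group name and account ID are placeholders for your own values):

```shell
# Sketch: open the ports the scripts rely on (group name is a placeholder).
ec2-authorize my-hadoop-group -p 22      # SSH
ec2-authorize my-hadoop-group -p 50030   # JobTracker web interface
ec2-authorize my-hadoop-group -p 50060   # TaskTracker web interface
# Give instances in the group unfettered access to one another by
# authorizing the group to itself (account ID is a placeholder).
ec2-authorize my-hadoop-group -o my-hadoop-group -u <your-aws-account-id>
```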

Setting up

Running a job on a cluster

Running a job on a cluster from a remote machine

In some cases it's desirable to submit a job to a Hadoop cluster running in EC2 from a machine that's outside EC2 (for example, a personal workstation). Similarly, it's convenient to be able to browse or cat files in HDFS from a remote machine. One advantage of this setup is that it obviates the need to create custom AMIs that bundle stock Hadoop AMIs and user libraries/code. All the non-Hadoop code can be kept on the remote machine and made available to Hadoop at job submission time (in the form of jar files and other files that are copied into Hadoop's distributed cache). The only downside is the cost of copying these data sets into EC2 and the latency involved in doing so.

The recipe for doing this is well documented in this Cloudera blog post and involves configuring Hadoop to use an SSH tunnel through the master Hadoop node. Note that this recipe only works when using EC2 scripts from versions of Hadoop that incorporate the fix for HADOOP-5839. (Alternatively, users can apply the patches from that JIRA to older versions of Hadoop that do not have the fix.)
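The client-side configuration change amounts to telling Hadoop to route its RPC connections through a local SOCKS proxy, using Hadoop's standard SOCKS socket-factory properties. A minimal sketch, assuming the tunnel is listening locally on port 6666 and that the snippet is merged into your client's Hadoop configuration:

```shell
# Sketch: client-side Hadoop configuration routing RPC traffic through a
# local SOCKS proxy (e.g. one opened with 'ssh -D 6666 <master>').
# The property names are Hadoop's standard SOCKS socket-factory settings.
cat > socks-site.xml <<'EOF'
<configuration>
  <property>
    <name>hadoop.rpc.socket.factory.class.default</name>
    <value>org.apache.hadoop.net.SocksSocketFactory</value>
  </property>
  <property>
    <name>hadoop.socks.server</name>
    <value>localhost:6666</value>
  </property>
</configuration>
EOF
# Quick sanity check: count the properties we just wrote.
grep -c '<property>' socks-site.xml
```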


Running Hadoop on EC2 involves a fair amount of configuration, so it can take a few attempts to get the system working for your particular setup.

If you are having problems with the Hadoop EC2 launch-cluster command, you can run the following commands in turn; they have the same effect, but may help you to see where the problem is occurring:

% bin/hadoop-ec2 launch-master <cluster-name>
% bin/hadoop-ec2 launch-slaves <cluster-name> <num slaves>

Note that you can call the launch-slaves command as many times as necessary to grow your cluster. Shrinking a cluster is trickier and should be done by hand (after rebalancing file replicas, etc.).

To browse all your nodes via a web browser, starting at the JobTracker's status page on port 50030, run the following command in a new shell window:

% bin/hadoop-ec2 proxy <cluster-name>

This command will start a SOCKS tunnel through your master node, and print out all the URLs you can reach from your web browser. For this to work, you must configure your browser to send requests over SOCKS to the local proxy on port 6666. The Firefox extension FoxyProxy is great for this.
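Under the hood, the proxy command is essentially an SSH tunnel. If you ever need to open one by hand, something like the following should work (the keypair path and master hostname are placeholders for your own values):

```shell
# Sketch: open a SOCKS proxy on local port 6666 through the master node.
# Keypair file and master address are placeholders; substitute your own.
ssh -i ~/.ec2/id_rsa-gsg-keypair -D 6666 \
    root@ec2-xx-xx-xx-xx.compute-1.amazonaws.com
```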

Currently, the scripts don't do much in the way of error detection or handling. If a script produces an error, you may need to use the Amazon EC2 tools to interact with instances directly, for example to shut down an instance that is misconfigured.
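For example, with the EC2 API tools you can list your instances and terminate a misbehaving one by hand (the instance ID below is a placeholder):

```shell
# List running instances to find the ID of the misbehaving one.
ec2-describe-instances
# Terminate it directly (instance ID is a placeholder).
ec2-terminate-instances i-10a64379
```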

Another technique for debugging is to manually run the scripts line-by-line until the error occurs. If you have feedback or suggestions, or need help then please use the Hadoop mailing lists.

If not all of your nodes are showing up, you can point your browser at the Ganglia status page for your cluster at http://<MASTER_HOST>/ganglia/ after starting the proxy command.

Building your own Hadoop image

The public images should be sufficient for most needs; however, there are circumstances where you might like to build your own images, perhaps because an image with the version of Hadoop you want isn't available (an older version, the latest trunk, or a patched version), or because you want to run extra software on your instances.


Here is a high-level outline of how the scripts for creating a Hadoop AMI work. For details, please see the scripts' sources (linked to below).

  1. The main script, create-hadoop-image, launches a stock Fedora Core Amazon AMI.

  2. Once the Fedora instance has launched, create-hadoop-image copies the environment variables file and the scripts to run on the instance (create-hadoop-image-remote and hadoop-init), then logs into the Fedora instance and runs create-hadoop-image-remote.

  3. The create-hadoop-image-remote script then installs Java, the tools required to run Hadoop, and Hadoop itself. It then configures Hadoop:

    • In EC2, the local data volume is mounted at /mnt, so logs are written there.

    • hadoop-init is installed as an init script to be run on instance start up. This takes advantage of an EC2 feature called parameterized launches. For Hadoop, this allows the master hostname and the cluster size to be retrieved when the Hadoop instance starts - this information is used to finish the Hadoop configuration.

  4. Finally, create-hadoop-image-remote bundles the machine as an AMI, and uploads it to S3. (Particular care has to be taken to ensure that no secrets, such as private keys, are bundled in the AMI. See here for more details.) The AMI is stored in a bucket named by the variable $S3_BUCKET and with the name hadoop-$HADOOP_VERSION.

  5. Control then returns to create-hadoop-image which registers the image with EC2.

Building a stock Hadoop image

If you need to repeat this procedure to re-create an AMI, you will need to run ec2-deregister to deregister the existing AMI. You might also want to use the ec2-delete-bundle command to remove the AMI's files from S3 if you no longer need them.
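A hedged sketch of that cleanup (the AMI ID, bucket, manifest prefix, and credentials are all placeholders for your own values):

```shell
# Deregister the old AMI so the ID can no longer be launched
# (AMI ID is a placeholder).
ec2-deregister ami-5bae4b32
# Optionally delete the bundle files from S3 if no longer needed
# (bucket, prefix, and credentials are placeholders).
ec2-delete-bundle -b mybucket -p hadoop-0.17.0 \
    -a <access-key-id> -s <secret-access-key>
```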

Building a customized Hadoop image

If you want to build an image with a version of Hadoop that is not available from the Apache distribution site (e.g. trunk, or a patched version) then you will need to alter the create-hadoop-image-remote script to retrieve and install your required version of Hadoop. Similarly, if you wish to install other software on your image then the same script is the place to do it.

Making an image public

Since there are already public Hadoop AMIs available you shouldn't need to do this. (At least consider discussing it on the developer mailing list first, please.) Furthermore, you should only do this if you are sure you have produced a secure AMI.

% ec2-modify-image-attribute AMI -l -a all

where AMI is the ID of the AMI you want to publish.
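To confirm the change took effect, you can query the image's launch permissions (again, the AMI ID is a placeholder):

```shell
# Verify the image is now public: the output should include a
# launch permission for the group 'all'.
ec2-describe-image-attribute <AMI> -l
```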

See Introduction to Sharing AMIs for more details.


AmazonEC2 (last edited 2011-07-29 12:57:58 by SteveLoughran)