
This page has been updated for Whirr 0.8.1, in particular the multi-node instructions starting at "Launch a cluster". Please also follow the current Quick Start Guide linked from whirr.apache.org.

Getting Started with Whirr

See also http://incubator.apache.org/whirr/quick-start-guide.html

Whirr CLI

Pre-requisites

You need to install Java 6 on your machine. Also, you need to have an account with a cloud provider, such as Amazon EC2.

Install Whirr

Download or build Whirr. Call the directory which contains the Whirr JAR files WHIRR_HOME (you might like to define this environment variable).

You can test that Whirr is working by running:
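A minimal check, assuming a 0.8.x release run from WHIRR_HOME:

    % bin/whirr version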

(Note: older instructions suggested running the Whirr CLI JAR directly with java -jar, but that JAR no longer includes a Main-Class entry in its manifest. The preferred way to start Whirr is the script in bin/, as shown above.)

It is handy to create an alias for whirr, and for one including cloud credentials:
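A sketch, assuming WHIRR_HOME points at your install and your AWS keys are in the usual environment variables:

    % alias whirr='$WHIRR_HOME/bin/whirr'
    % alias whirr-ec2='whirr --identity=$AWS_ACCESS_KEY_ID --credential=$AWS_SECRET_ACCESS_KEY'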

Launch a cluster

The following will launch a Hadoop cluster with a single machine for the namenode and jobtracker, and a further machine for a datanode and tasktracker.
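A minimal sketch: put something like the following in a file called hadoop.properties (the cluster name and key paths are examples; the instance-templates line is what defines the two machines), then launch with it:

    whirr.cluster-name=myhadoopcluster
    whirr.instance-templates=1 hadoop-jobtracker+hadoop-namenode,1 hadoop-datanode+hadoop-tasktracker
    whirr.provider=aws-ec2
    whirr.identity=${env:AWS_ACCESS_KEY_ID}
    whirr.credential=${env:AWS_SECRET_ACCESS_KEY}
    whirr.private-key-file=${sys:user.home}/.ssh/id_rsa_whirr
    whirr.public-key-file=${sys:user.home}/.ssh/id_rsa_whirr.pub

    % bin/whirr launch-cluster --config hadoop.properties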

Once the cluster has launched, you can browse it by connecting to http://master-host:50030.
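The web UIs usually need to be reached through the SOCKS proxy script that Whirr writes for the cluster; assuming the cluster name above, something like:

    % . ~/.whirr/myhadoopcluster/hadoop-proxy.sh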

The following will launch a Hadoop cluster with multiple nodes on AWS EC2. You may want to take a look at or use the attached hbase.properties file:
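The attached hbase.properties is not reproduced here, but a multi-node HBase-on-Hadoop configuration generally looks something like this (the cluster name and node count are examples):

    whirr.cluster-name=myhbasecluster
    whirr.instance-templates=1 zookeeper+hadoop-namenode+hadoop-jobtracker+hbase-master,5 hadoop-datanode+hadoop-tasktracker+hbase-regionserver
    whirr.provider=aws-ec2
    whirr.identity=${env:AWS_ACCESS_KEY_ID}
    whirr.credential=${env:AWS_SECRET_ACCESS_KEY}

Launch it with:

    % bin/whirr launch-cluster --config hbase.properties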

Login to the remote master node

Once launching succeeds, Whirr prints SSH connection information for all of the nodes.

Log in to the master node (the last one listed) to run Hadoop jobs against HBase data. The user name is your local login, e.g. jongwook:
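A sketch (the hostname is a placeholder for the master's public address from the SSH info, and id_rsa_whirr is whichever private key you configured):

    % ssh -i ~/.ssh/id_rsa_whirr jongwook@ec2-xx-xx-xx-xx.compute-1.amazonaws.com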

Set up PATH and CLASSPATH so that you can run the hadoop and hbase commands. Check which Hadoop and HBase versions are installed under /usr/local first.
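A sketch (the version numbers are examples; check the actual directory names with ls /usr/local):

    % export HADOOP_HOME=/usr/local/hadoop-1.0.3    # example version; check your install
    % export HBASE_HOME=/usr/local/hbase-0.94.1     # example version; check your install
    % export PATH=$PATH:$HADOOP_HOME/bin:$HBASE_HOME/bin
    % export CLASSPATH=`$HBASE_HOME/bin/hbase classpath`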

First, run the Hadoop pi example on the remote node to make sure Hadoop works:
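A sketch (the examples jar name varies with the Hadoop version installed):

    % hadoop jar $HADOOP_HOME/hadoop-examples-*.jar pi 10 100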

Second, run an HBase demo to make sure HBase works:
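A minimal smoke test from the HBase shell (the table and column family names are arbitrary):

    % hbase shell
    hbase> create 'test', 'cf'
    hbase> put 'test', 'row1', 'cf:a', 'value1'
    hbase> scan 'test'
    hbase> exit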

Configuration

Whirr is configured using a properties file, and optionally using command line arguments when using the CLI. Command line arguments take precedence over properties specified in a properties file.
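For example, assuming the hadoop.properties file above, a flag on the command line overrides the corresponding whirr.* property:

    % bin/whirr launch-cluster --config hadoop.properties --cluster-name=anothercluster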

See Configuration Guide for more on configuration.

Destroy a cluster

When you've finished using a cluster, you can terminate the instances and clean up resources with:
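For example, for the cluster defined by hadoop.properties above:

    % bin/whirr destroy-cluster --config hadoop.properties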

The following will destroy a Hadoop cluster with multiple nodes on AWS EC2:
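Assuming the hbase.properties file above:

    % bin/whirr destroy-cluster --config hbase.properties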

Whirr API

Whirr provides a Java API for stopping and starting clusters. Please see the unit test source code for how to achieve this.
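A hedged sketch (not the project's own test code) of launching and destroying a cluster with the 0.8-era API, reusing the hadoop.properties file from earlier:

    import org.apache.commons.configuration.PropertiesConfiguration;
    import org.apache.whirr.Cluster;
    import org.apache.whirr.ClusterController;
    import org.apache.whirr.ClusterSpec;

    public class WhirrApiExample {
      public static void main(String[] args) throws Exception {
        // Build a spec from the same properties file used on the command line.
        ClusterSpec spec = new ClusterSpec(new PropertiesConfiguration("hadoop.properties"));
        ClusterController controller = new ClusterController();

        // Start the cluster and show which instances came up.
        Cluster cluster = controller.launchCluster(spec);
        System.out.println(cluster.getInstances());

        // Terminate the instances and clean up resources.
        controller.destroyCluster(spec);
      }
    }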

There's also some example code at http://github.com/hammer/whirr-demo.
