These instructions are for installing and running Hadoop on an OS X single-node cluster (a Mac Pro, in my case). This tutorial follows the same format, and largely the same steps, as the incredibly thorough and well-written tutorial by Michael Noll on Ubuntu cluster setup. It is essentially his procedure, with changes made for OS X users, plus a few other things I was able to piece together from the Hadoop Quickstart and the forums/archives.
Step 1: Creating a designated hadoop user on your system
This isn't entirely necessary, but it's a good idea for security reasons. To add a user, go to System Preferences > Accounts.
Click the "+" button near the bottom of the account list. You may need to unlock this ability by hitting the lock icon at the bottom corner and entering the admin username and password.
When the New Account window appears, enter a name, a short name, and a password; I used hadoop for both names.
Once you are done, hit "Create Account". Now, log in as the hadoop user. You are ready to set up everything!
Step 2: Install/Configure Preliminary Software
Before installing Hadoop, there are a couple things that you need make sure you have on your system.
1. Java, the latest version of the JDK
2. SSH
Because OS X is awesome, you actually don't have to install these things. However, you will have to enable and update what you have. Let's start with Java:
Open up the Terminal application. If it's not already on your Dock, you can find it in Applications > Utilities.
Next check to see the version of Java that's currently available on the system:
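From the terminal (the exact version reported will vary by system):

```shell
# Report the default Java runtime and compiler versions
java -version
javac -version
```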
You may want to update this to Sun's Java SE 6, which is available as an update for OS X 10.5 (Update 1). It's currently only available for 64-bit machines, though. You can download it here.
After you download and install the update, you need to configure Java on your system so the default points to the new version. Go to Applications > Utilities > Java > Java Preferences.
Under "Java Version", hit the radio button next to "Java SE 6". Down by "Java Application Runtime Settings", change the order so Java SE 6 (64-bit) is first, followed by Java SE 5 (64-bit), and so on. Hit "Save" and close the window.
Now, when you go to the terminal and type "java -version", it should report Java 1.6.0, and "javac -version" should likewise report 1.6.0.
SSH: Setting up Remote Desktop and Enabling Self-Login
SSH also comes installed on your Mac. However, you need to enable login access to your own machine (so Hadoop doesn't ask you for a password at inconvenient times). To do this, go to System Preferences > Sharing.
Under the list of services, check "Remote Login". For extra security, you can hit the radio button for "Only these users" and select the hadoop user.
Now, we're going to configure things so we can log into localhost without being asked for a password. Type the following into the terminal:
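Assuming the default key filenames, generate a passphrase-less RSA key, authorize it for your own account, and then test the login:

```shell
# Generate an RSA key pair with an empty passphrase
ssh-keygen -t rsa -P ""
# Add the public key to the list of keys allowed to log in to this machine
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# This should now connect without asking for a password
ssh localhost
```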
You should be able to log in without a problem.
You are now ready to install Hadoop. Let's go to step 3!
Step 3: Downloading and Installing Hadoop
So this actually involves a few smaller steps:
1. Downloading and Unpacking Hadoop
2. Configuring Hadoop
After we finish these, you should be ready to go! So let's get started:
Downloading and Unpacking Hadoop
Download Hadoop. Make sure you download the latest version (as of this post, 0.17.2 and 0.18.0 are the latest releases). We refer to the unpacked directory generically as hadoop-* in this tutorial.
Unpack the hadoop-*.tar.gz in the directory of your choice. I placed mine in /Users/hadoop. You may also want to set ownership permissions for the directory:
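Assuming the 0.18.0 tarball and the /Users/hadoop location mentioned above (substitute your version and directory), the commands look like:

```shell
cd /Users/hadoop
# Unpack the release tarball
tar xzf hadoop-0.18.0.tar.gz
# Give the hadoop user ownership of the unpacked tree
sudo chown -R hadoop hadoop-0.18.0
```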
There are two files that we want to modify when we configure Hadoop. The first is conf/hadoop-env.sh. Open it in nano or your favorite text editor and do the following:
- uncomment the export JAVA_HOME line and set it to /Library/Java/Home
- uncomment the export HADOOP_HEAPSIZE line and keep it at 2000
You may want to change other settings as well, but I chose to leave the rest of hadoop-env.sh the same. Here is an idea of what part of mine looks like:
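For reference, the two edited lines in my conf/hadoop-env.sh look like this:

```shell
# The java implementation to use.
export JAVA_HOME=/Library/Java/Home

# The maximum amount of heap to use, in MB.
export HADOOP_HEAPSIZE=2000
```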
The next file that we need to set up is conf/hadoop-site.xml. The most important settings here are hadoop.tmp.dir (which should point to a directory of your choice) and the mapred.tasktracker.maximum property, which you need to add to the file; it sets the maximum number of tasks that can be run simultaneously by a task tracker. You should also set dfs.replication's value to 1.
Below is a sample hadoop-site.xml file:
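A minimal sketch, using the property names discussed above (the tmp directory, port numbers, and task maximum are illustrative values; adjust them for your machine). Note that a single-node setup also needs fs.default.name and mapred.job.tracker pointed at localhost so the daemons can find each other:

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/Users/hadoop/hadoop-datastore</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
  <property>
    <name>mapred.tasktracker.maximum</name>
    <value>8</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```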
Now to our last step!
Step 4: Formatting and Running Hadoop
Our last step involves formatting the namenode and testing our system.
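From inside the hadoop-*/ directory, the format command is:

```shell
# Initialize the HDFS storage directory (only do this once;
# reformatting erases everything stored in the DFS)
bin/hadoop namenode -format
```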
This will give you a series of namenode startup messages, ending with a note that the storage directory has been successfully formatted.
Once this is done, we are ready to test our program.
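Before putting anything on the DFS, start the Hadoop daemons (namenode, datanode, jobtracker, and tasktracker) from the hadoop-*/ directory:

```shell
bin/start-all.sh
```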
As input for our test, we are going to copy the conf folder up to our DFS.
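The copy can be done with the dfs shell; "input" is just the name I chose for the uploaded directory:

```shell
# Copy the local conf directory into the DFS under the name "input"
bin/hadoop dfs -copyFromLocal conf input
```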
You can check to see if this actually worked by doing an ls on the dfs as follows:
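List the top level of the DFS, then the uploaded directory itself:

```shell
# "input" should appear in the first listing,
# and the conf files in the second
bin/hadoop dfs -ls
bin/hadoop dfs -ls input
```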
Now, we need to compile the code. cd into the hadoop-*/ directory and do:
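The release ships with an Ant build file, so (assuming ant is installed on your system) the examples can be built with:

```shell
ant examples
```

The release tarball also includes a prebuilt hadoop-*-examples.jar, so this step may be optional if you don't plan to modify the examples.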
This will compile the example programs found in hadoop-*/src/examples
Now, we will run the example distributed grep program, using the conf files we uploaded as input.
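Following the Hadoop Quickstart, the grep example takes an input directory, an output directory, and a regular expression:

```shell
# Find occurrences of strings matching dfs[a-z.]+ in the uploaded conf files
bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'
```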
If this works, you'll see the job's progress scroll by on your screen, with the map and reduce percentages climbing to 100%.
The last step is to check if you have output!
You can do this by doing:
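List the output directory on the DFS:

```shell
bin/hadoop dfs -ls output
```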
The most important part is that the number next to the <r 1> should not be 0.
To check the actual contents of the output do:
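Print the job's output files straight from the DFS:

```shell
bin/hadoop dfs -cat output/*
```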
Alternatively, you can copy it to local disk and check/modify it:
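Pull the output directory down to the local filesystem and inspect it there:

```shell
# Copy the DFS "output" directory into the current local directory
bin/hadoop dfs -copyToLocal output output
cat output/*
```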
Stopping the Hadoop DFS
When you're done running jobs on the dfs, run the stop-all.sh command.
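From the hadoop-*/ directory:

```shell
# Shut down all of the Hadoop daemons
bin/stop-all.sh
```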
That's all! Happy Map Reducing!