
...

  1. Make sure you have the latest JDK installed on your system as well. You can either get it from the official Oracle website (http://www.oracle.com/technetwork/java/javase/downloads/jdk-6u29-download-513648.html) or follow the advice given by your Linux distribution (e.g. some Debian-based Linux distributions ship the JDK as part of their extended set of packages). If your JDK is installed in a non-standard location, make sure to add the line below to the /etc/default/bigtop-utils file:
    No Format
    export JAVA_HOME=XXXX
    
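If you are not sure where your distribution put the JDK, one way to derive a JAVA_HOME value is to resolve the `java` binary found on your PATH. A minimal sketch, assuming `java` is a symlink into the JDK as on most Linux distributions (`java_home_of` is a hypothetical helper, not part of Bigtop):

```shell
# java_home_of strips the trailing /bin/java from a resolved path to
# the java binary, yielding the JDK installation directory.
java_home_of() {
  dirname "$(dirname "$1")"
}

# On a live system you would resolve symlinks first, e.g.:
#   echo "export JAVA_HOME=$(java_home_of "$(readlink -f "$(which java)")")"
java_home_of /usr/lib/jvm/java-6-openjdk/bin/java   # hypothetical JDK path
```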
  2. Format the NameNode:
    No Format
    sudo /etc/init.d/hadoop-hdfs-namenode init
    
  3. Start the necessary Hadoop services, e.g. for a pseudo-distributed Hadoop installation you can simply do:
    No Format
    for i in hadoop-hdfs-namenode hadoop-hdfs-datanode ; do sudo service $i start ; done
    
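To verify that the daemons actually came up, you can grep the output of `jps` (shipped with the JDK) for the daemon class names. A minimal sketch (`check_daemons` is a hypothetical helper, not part of Bigtop):

```shell
# check_daemons reads a jps-style process listing on stdin and reports
# whether each daemon name given as an argument appears in it.
check_daemons() {
  listing=$(cat)
  for d in "$@"; do
    case "$listing" in
      *"$d"*) echo "$d: running" ;;
      *)      echo "$d: NOT running" ;;
    esac
  done
}

# On a live node: jps | check_daemons NameNode DataNode
```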
  4. Make sure to create the required directory structure in HDFS before starting the YARN daemons:
    No Format
    sudo -u hdfs hadoop fs -mkdir -p /user/$USER
    sudo -u hdfs hadoop fs -chown $USER:$USER /user/$USER
    sudo -u hdfs hadoop fs -chmod 770 /user/$USER
    
    sudo -u hdfs hadoop fs -mkdir /tmp
    sudo -u hdfs hadoop fs -chmod -R 1777 /tmp
    
    sudo -u hdfs hadoop fs -mkdir -p /var/log/hadoop-yarn
    sudo -u hdfs hadoop fs -chown yarn:mapred /var/log/hadoop-yarn
    
    sudo -u hdfs hadoop fs -mkdir -p /user/history
    sudo -u hdfs hadoop fs -chown mapred:mapred /user/history
    sudo -u hdfs hadoop fs -chmod 770 /user/history
    
    sudo -u hdfs hadoop fs -mkdir -p /tmp/hadoop-yarn/staging
    sudo -u hdfs hadoop fs -chmod -R 1777 /tmp/hadoop-yarn/staging
    
    sudo -u hdfs hadoop fs -mkdir -p /tmp/hadoop-yarn/staging/history/done_intermediate
    sudo -u hdfs hadoop fs -chmod -R 1777 /tmp/hadoop-yarn/staging/history/done_intermediate
    sudo -u hdfs hadoop fs -chown -R mapred:mapred /tmp/hadoop-yarn/staging
    
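The step above repeats the same mkdir/chown/chmod pattern for each path, so it can be sketched as a data-driven loop. This is only a sketch: `run`, `apply_layout` and `DRY_RUN` are hypothetical helpers, the layout rows cover a subset of the paths above, and the uniformly recursive chown/chmod flags are a simplification of the exact commands listed.

```shell
# Apply an HDFS directory layout given as (path, owner, mode) rows;
# "-" skips a field. DRY_RUN=1 (the default here) prints the commands
# instead of executing them, since sudo/hadoop are only present on a
# provisioned node.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = 1 ]; then
    echo "sudo -u hdfs hadoop fs $*"
  else
    sudo -u hdfs hadoop fs "$@"
  fi
}

apply_layout() {
  while read -r path owner mode; do
    run -mkdir -p "$path"
    [ "$owner" = - ] || run -chown -R "$owner" "$path"
    [ "$mode" = - ]  || run -chmod -R "$mode" "$path"
  done <<'EOF'
/tmp - 1777
/var/log/hadoop-yarn yarn:mapred -
/user/history mapred:mapred 770
/tmp/hadoop-yarn/staging mapred:mapred 1777
EOF
}

apply_layout
```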
  5. Now start the YARN daemons:
    No Format
    sudo service hadoop-yarn-resourcemanager start
    sudo service hadoop-yarn-nodemanager start
    
  6. Enjoy your cluster
    No Format
    hadoop fs -lsr /
    hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples*.jar pi 10 1000
    
  7. If you are using Amazon AWS, it is important that the IP address encoded in /etc/hostname matches the Private IP Address shown in the AWS Management Console. If the addresses do not match, MapReduce jobs will not complete.

    No Format
    ubuntu@ip-10-224-113-68:~$ cat /etc/hostname
    ip-10-224-113-68
    
  8. If the IP address in /etc/hostname does not match, open the hostname file in a text editor, change it to match, and reboot.
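The check in the last two steps can be scripted. A minimal sketch (`hostname_matches_ip` is a hypothetical helper; 169.254.169.254 is the standard EC2 instance metadata endpoint):

```shell
# AWS encodes the private IP in hostnames like ip-10-224-113-68, i.e.
# "ip-" followed by the IP with dots replaced by dashes.
hostname_matches_ip() {
  test "$1" = "ip-$(echo "$2" | tr '.' '-')"
}

# On a live EC2 instance:
#   ip=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
#   hostname_matches_ip "$(cat /etc/hostname)" "$ip" \
#     || echo "mismatch: edit /etc/hostname and reboot"
```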

{html}
<h1>Running Hadoop Components </h1>
<h3>
<a href="https://cwiki.apache.org/confluence/display/BIGTOP/Running+various+Bigtop+components" target="_blank">Linky -> Step-by-step instructions on running Hadoop Components!</a>
</h3>
{html}

One of the advantages of Bigtop is the ease of installation of the different Hadoop components, without having to hunt for a specific Hadoop component distribution and match it with a specific Hadoop version.
Please visit the link above to run some easy examples from the Bigtop distribution!
Provided at the link above are examples to run Hadoop 1.0.1 and nine other components from the Hadoop ecosystem (Hive, HBase, ZooKeeper, Pig, Sqoop, Oozie, Mahout, Whirr and Flume).
See the

{html}<a href="https://github.com/apache/bigtop/blob/master/bigtop.mk" target="_blank">Bigtop Make File</a>{html}

...

Where to go from here

It is highly recommended that you read the documentation provided by the Hadoop project itself.

...