
See also How to install Hadoop distribution from Bigtop 0.6.0 for more details on where to obtain repo files, etc.

This was done on CentOS 6.5.

0) Install the basic build tools and libraries in case you are starting from a completely bare machine. Most (or at least some) of these are probably already installed.

yum install -y git git-svn subversion cmake ant autoconf automake libtool sharutils asciidoc xmlto openssh-clients openssh-server curl gcc gcc-c++ make fuse protobuf-compiler lzo-devel zlib-devel fuse-devel openssl-devel python-devel libxml2-devel libxslt-devel cyrus-sasl-devel sqlite-devel mysql-devel openldap-devel rpm-build createrepo redhat-rpm-config wget

1) Install Puppet via yum (you must use a 2.7.x release):

  sudo rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
  yum install puppet-2.7.19
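
  To confirm that a 2.7.x release was installed:

  puppet --version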

2) Clone Bigtop into /opt/, as shown below.
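
For example (the GitHub mirror URL below is an assumption; use whichever Bigtop git repository you normally pull from):

  git clone https://github.com/apache/bigtop.git /opt/bigtop
  cd /opt/bigtop
  # optionally check out the branch or tag matching the 0.7.0 repo configured in site.csv below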

3) cd into /opt/bigtop/bigtop-deploy/puppet and create a file named site.csv with contents like this:

        hadoop_head_node,localhost.localdomain
        hadoop_storage_dirs,/data/1,/data/2,/data/3,/data/4
        components,hadoop,yarn
        jdk_package_name,java-1.6.0-openjdk-devel.x86_64
        bigtop_yumrepo_uri,http://bigtop.s3.amazonaws.com/releases/0.7.0/redhat/6/x86_64

4) Make the data directories (you can use fewer data directories, as long as the list matches the hadoop_storage_dirs parameter above).
mkdir -p /data/1
mkdir -p /data/2
mkdir -p /data/3
mkdir -p /data/4

5) From the /opt/bigtop/bigtop-deploy/puppet directory, run this:

[root@localhost puppet]# puppet -d --modulepath=/opt/bigtop/bigtop-deploy/puppet/modules --confdir=/opt/bigtop/bigtop-deploy/puppet/ /opt/bigtop/bigtop-deploy/puppet/manifests/site.pp

Note: If you plan to use MapReduce, you must also install the hadoop-mapreduce package.
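
For example (the package name assumes the Bigtop 0.7.0 yum repository configured in site.csv above):

# yum install -y hadoop-mapreduce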

6) In yarn-site.xml, change the value of yarn.nodemanager.aux-services from "mapreduce_shuffle" to "mapreduce.shuffle" (see the snippet below), then restart the YARN daemons:
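
A minimal sketch of the resulting yarn-site.xml entry (the /etc/hadoop/conf path assumes the default Bigtop alternatives layout; the dotted value is what the Hadoop 2.0.x-alpha packages shipped with this release expect):

        <!-- /etc/hadoop/conf/yarn-site.xml -->
        <property>
          <name>yarn.nodemanager.aux-services</name>
          <value>mapreduce.shuffle</value>
        </property>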

/etc/init.d/hadoop-yarn-resourcemanager restart

/etc/init.d/hadoop-yarn-nodemanager restart

Bringing the cluster up and down:

To bring the cluster up for the first time (disclaimer: independent execution of the Puppet recipes on the cluster's nodes will automatically create the HDFS structures and bring up the services if all dependencies are satisfied, e.g. configs are created, packages are installed, etc. If Puppet reports errors, you might need to do the manual startup):

1) As root, run

# /etc/init.d/hadoop-hdfs-namenode init (omit unless you want to start with nothing in your HDFS)
# /etc/init.d/hadoop-hdfs-namenode start
# /etc/init.d/hadoop-hdfs-datanode start
# /usr/lib/hadoop/libexec/init-hdfs.sh (not needed after the first run)
# /etc/init.d/hadoop-yarn-resourcemanager start
# /etc/init.d/hadoop-yarn-proxyserver start
# /etc/init.d/hadoop-yarn-nodemanager start

on the master node. 

2) On each of the slave nodes, run

# /etc/init.d/hadoop-hdfs-datanode start
# /etc/init.d/hadoop-yarn-nodemanager start 
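
A minimal sanity check once everything is started, run on the master node (the example-jar path and the use of the hdfs user are assumptions based on the standard Bigtop package layout):

# sudo -u hdfs hadoop fs -ls /
# yarn node -list
# sudo -u hdfs hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 2 10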

To bring the cluster down cleanly:

1) On each of the slave nodes, run

# /etc/init.d/hadoop-yarn-nodemanager stop
# /etc/init.d/hadoop-hdfs-datanode stop

2) On the master, run

# /etc/init.d/hadoop-yarn-nodemanager stop
# /etc/init.d/hadoop-yarn-proxyserver stop
# /etc/init.d/hadoop-yarn-resourcemanager stop
# /etc/init.d/hadoop-hdfs-datanode stop
# /etc/init.d/hadoop-hdfs-namenode stop 