
a. Lay down the build into the appropriate places. Let's start with the Ranger web admin first.

Code Block
cd /usr/local
sudo tar zxvf ~/dev/incubator-ranger/target/ranger-0.5.0-admin.tar.gz
sudo ln -s ranger-0.5.0-admin ranger-admin
cd /usr/local/ranger-admin

b. Verify the root password that you picked while installing MySQL. I chose root, so the relevant section in my install.properties file looks as follows:

Code Block
db_root_user=root
db_root_password=root
db_host=localhost
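Before running setup, it can save time to confirm those credentials actually work. The following is an optional sanity check, not part of the original steps; the values mirror the example above (root/root on localhost), so substitute your own.

```shell
# Optional: confirm the credentials from install.properties can reach MySQL.
DB_ROOT_USER=root
DB_ROOT_PASSWORD=root
DB_HOST=localhost
if command -v mysql >/dev/null 2>&1; then
  mysql -u "$DB_ROOT_USER" -p"$DB_ROOT_PASSWORD" -h "$DB_HOST" -e "SELECT VERSION();" \
    || echo "could not connect -- check the values in install.properties"
else
  echo "mysql client not installed on this host; skipping check"
fi
```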


c. The install process will create a couple of users in the database for storing administration and audit information; pick passwords for those too. With my choices, here's how the relevant sections in the install.properties file look now.

Code Block
# DB UserId used for the XASecure schema
#
db_name=ranger
db_user=rangeradmin
db_password=rangeradmin

# DB UserId for storing audit log information
#
audit_db_name=ranger
audit_db_user=rangerlogger
audit_db_password=rangerlogger

...

Configuring Ranger Admin Authentication Modes:

  • ACTIVE DIRECTORY (AD)

      To enable Active Directory authentication on Ranger admin, you need to configure the following properties in install.properties:

...

               c. Configure a load balancer to balance load among the Ranger admin instances and note down the load balancer URL.

        •  A software (e.g. Apache httpd) or hardware load balancer could be used.
        •  Details are outside the scope of this document.

               d. Update the policy manager external URL in all the clients of Ranger admin (Ranger usersync and Ranger plugins) to point to the load balancer URL.
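As a sketch of the software option only: a minimal Apache httpd fragment that balances two Ranger admin instances. The host names and output path are assumptions, and a real deployment also needs mod_proxy and mod_proxy_balancer loaded; this just writes the fragment so you can inspect it.

```shell
# Sketch of an httpd load-balancer config fronting two Ranger Admin hosts
# (host names are placeholders); written to /tmp for review.
LB_CONF=/tmp/ranger-admin-lb.conf
cat > "$LB_CONF" <<'EOF'
<Proxy balancer://rangeradmin>
    BalancerMember http://ranger-admin-1.example.internal:6080
    BalancerMember http://ranger-admin-2.example.internal:6080
</Proxy>
ProxyPass        / balancer://rangeradmin/
ProxyPassReverse / balancer://rangeradmin/
EOF
echo "wrote $LB_CONF"
```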

...

a. We'll start by extracting our build at the appropriate place.

Code Block
cd /usr/local
sudo tar zxvf ~/dev/incubator-ranger/target/ranger-0.5.0-usersync.tar.gz
sudo ln -s ranger-0.5.0-usersync ranger-usersync
sudo mkdir -p /var/log/ranger-usersync
sudo chown ranger /var/log/ranger-usersync
sudo chgrp ranger /var/log/ranger-usersync
cd ranger-usersync

b. Now let's edit the install.properties file. Here are the relevant lines that you should edit:

c. Now install the usersync by running the setup command 

Code Block
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-amd64
./setup.sh

After installing ranger-usersync, use the same script to start/stop the usersync service:

Code Block
./ranger-usersync-services.sh start

Configuring Ranger User-Sync process to use LDAP/AD server:

...

Installing Apache Hadoop:

    • Now let's download and install Hadoop, following the excellent instructions available on the Hadoop site itself. Follow the steps given for pseudo-distributed mode.

    • These instructions were written for version 2.7.0, so grab that tar (hadoop-2.7.0.tar.gz) and its checksum file (hadoop-2.7.0.tar.gz.mds).

  • The instructions on that page require Java to be installed. If Java is not there, install a JDK first.

    Code Block
    sudo yum install java-1.7.0-openjdk-devel
  • Make note of the location where you installed Hadoop. Here I assume that you have installed it in

    Code Block
    /usr/local/hadoop
  • Create a user under which we can install and ultimately run the various Hadoop processes, and log in as that user.

    Code Block
    sudo useradd --home-dir /var/hadoop --create-home --shell /bin/bash --user-group hadoop

     

...

    •  If the command above fails (for example because the hadoop group already exists), try the next command:

Code Block
sudo useradd --home-dir /var/hadoop --create-home --shell /bin/bash -g hadoop hadoop

...

Code Block
sudo tar zxvf ~/dev/hadoop-2.7.0.tar.gz -C /usr/local
cd /usr/local
sudo ln -s hadoop-2.7.0 hadoop
sudo chown -R hadoop hadoop hadoop-2.7.0
sudo chgrp -R hadoop hadoop hadoop-2.7.0


  • To add the hdfs user:

    Code Block
    useradd hdfs

  • To check whether the hadoop user login works, try:

    Code Block
    sudo su - hadoop


Enabling Ranger HDFS Plugin:

a. We’ll start by extracting our build at the appropriate place (/usr/local).

Code Block
cd /usr/local
sudo tar zxvf ~/dev/incubator-ranger/target/ranger-0.5.0-hdfs-plugin.tar.gz
sudo ln -s ranger-0.5.0-hdfs-plugin ranger-hdfs-plugin
cd ranger-hdfs-plugin
b. Now let's edit the install.properties file. Here are the relevant lines that you should edit:

PROPERTY                      VALUE
POLICY_MGR_URL                http://localhost:6080
REPOSITORY_NAME               hadoopdev
XAAUDIT.DB.IS_ENABLED         true
XAAUDIT.DB.FLAVOUR            MYSQL
XAAUDIT.DB.HOSTNAME           localhost
XAAUDIT.DB.DATABASE_NAME      ranger_audit
XAAUDIT.DB.USER_NAME          rangerlogger
XAAUDIT.DB.PASSWORD           rangerlogger

 

c. Now enable the hdfs-plugin by running the enable-hdfs-plugin.sh command (Remember to set JAVA_HOME)

Note
If the Hadoop conf and lib folders are not found at the locations the script expects, the Ranger HDFS plugin installation fails. To resolve this, create a symlink named conf in the Hadoop home pointing to Hadoop's conf directory:

...

Code Block

...

cd /usr/local/hadoop
ln -­s /usr/local/hadoop/etc/hadoop/ /usr/local/hadoop/

...

conf 

 

        • Export HADOOP_HOME in bashrc:
                    echo "export HADOOP_HOME=/usr/local/hadoop" >> /etc/bashrc
        • cd /usr/local/ranger-hdfs-plugin
        • ./enable-hdfs-plugin.sh
        • One more change that we need to make is to copy all the jar files from ${hadoop_home}/lib:
                    cp /usr/local/hadoop/lib/*.jar /usr/local/hadoop/share/hadoop/hdfs/lib/

 
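An optional follow-up check, not in the original steps: confirm that the Ranger plugin jars actually landed in the HDFS lib directory after the copy above. The path assumes the /usr/local/hadoop layout used in this guide.

```shell
# Look for ranger jars in the HDFS lib dir; prints a hint either way.
HDFS_LIB=/usr/local/hadoop/share/hadoop/hdfs/lib
if ls "$HDFS_LIB"/ranger-*.jar >/dev/null 2>&1; then
  echo "ranger jars present in $HDFS_LIB"
else
  echo "no ranger jars under $HDFS_LIB (expected on the NameNode host)"
fi
```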

  • Provide the required permissions on the logs directory:

Code Block
chown root:hadoop /usr/local/hadoop/logs
chmod g+w /usr/local/hadoop/logs

  • Provide the required permissions to users in the OS file system and HDFS file system according to your environment and requirements.

d. Once these changes are done, restart Hadoop.

 

Stop the NameNode, SecondaryNameNode and DataNode daemons:

Code Block
su -l hdfs -c "/usr/local/hadoop/sbin/hadoop-daemon.sh stop namenode"
su -l hdfs -c "/usr/local/hadoop/sbin/hadoop-daemon.sh stop secondarynamenode"
su -l hdfs -c "/usr/local/hadoop/sbin/hadoop-daemon.sh stop datanode"

 

 

 

Start the NameNode, SecondaryNameNode and DataNode daemons:

Code Block
su -l hdfs -c "/usr/local/hadoop/sbin/hadoop-daemon.sh start namenode"
su -l hdfs -c "/usr/local/hadoop/sbin/hadoop-daemon.sh start secondarynamenode"
su -l hdfs -c "/usr/local/hadoop/sbin/hadoop-daemon.sh start datanode"

 

 

e. This should start the association of ranger-hdfs-plugin with Hadoop.

 

You can verify by logging into the Ranger Admin web interface -> Audit -> Agents.
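Besides the web UI, a quick shell check can confirm Ranger Admin is up at all. The URL and the default admin/admin credentials are assumptions from this guide's single-node setup; change them if you altered the defaults.

```shell
# Probe the Ranger Admin web endpoint; prints the HTTP status code or a hint.
RANGER_URL=http://localhost:6080
if command -v curl >/dev/null 2>&1; then
  curl -s -o /dev/null -w "%{http_code}\n" -u admin:admin "$RANGER_URL" \
    || echo "Ranger Admin not reachable at $RANGER_URL"
else
  echo "curl not installed; skipping check"
fi
```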

 

Installing Apache Hive (1.2.0):

Let's download and install Apache Hive, following the excellent instructions available on the Apache Hive site itself.


Code Block
sudo tar xzvf ~/dev/apache-hive-1.2.0-bin.tar.gz -C /usr/local
cd /usr/local
sudo ln -s apache-hive-1.2.0-bin hive
useradd hive
cd hive

Export HIVE_HOME in bashrc:

Code Block
echo "export HIVE_HOME=/usr/local/hive" >> /etc/bashrc
 
Note
HiveServer2 doesn't start unless HADOOP_VERSION is exported in bashrc.
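The note above means a line like the following must be in bashrc before starting HiveServer2. 2.7.0 matches the Hadoop build used in this guide; adjust it to your version. The sketch writes to a temp file so you can inspect the line; on the real host append it to /etc/bashrc (or the hive user's ~/.bashrc).

```shell
# Build the export line HiveServer2 needs and stage it in a temp file.
HADOOP_VERSION=2.7.0
echo "export HADOOP_VERSION=${HADOOP_VERSION}" > /tmp/bashrc.hive.fragment
cat /tmp/bashrc.hive.fragment
```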

 

 

Enabling Ranger Hive Plugin:

  • We’ll start by extracting our build at the appropriate place. 

 

 

Code Block
cd /usr/local
sudo tar zxvf ~/dev/incubator-ranger/target/ranger-0.5.0-hive-plugin.tar.gz
sudo ln -s ranger-0.5.0-hive-plugin ranger-hive-plugin
cd ranger-hive-plugin

 

  • Now let's edit the install.properties file. Here are the relevant lines that you should edit:

    PROPERTY                      VALUE
    POLICY_MGR_URL                http://localhost:6080
    REPOSITORY_NAME               hivedev
    XAAUDIT.DB.IS_ENABLED         true
    XAAUDIT.DB.FLAVOUR            MYSQL
    XAAUDIT.DB.HOSTNAME           localhost
    XAAUDIT.DB.DATABASE_NAME      ranger_audit
    XAAUDIT.DB.USER_NAME          rangerlogger
    XAAUDIT.DB.PASSWORD           rangerlogger

c. Now enable the hive-plugin by running the enable-hive-plugin.sh command (Remember to set JAVA_HOME)

 

 
Code Block
cd /usr/local/ranger-hive-plugin
./enable-hive-plugin.sh

 

 

d. Once these changes are done, restart Hive. This should start the association of ranger-hive-plugin with Hive.

 

You can verify by logging into the Ranger Admin web interface -> Audit -> Agents.

 

e. Provide the required permissions to users in the OS file system and HDFS file system according to your environment and requirements.

 

Note
If the /var/log/hive directory does not exist, create one and assign it to the hive user:

mkdir /var/log/hive
chown -R hive:hive /var/log/hive

 

 

Change the properties file permissions for the hive user:

chown -R hive:hadoop /usr/local/apache-hive-1.2.0-bin/conf/hiveserver2-site.xml
chown -R hive:hadoop /usr/local/apache-hive-1.2.0-bin/conf/hive-log4j.properties
chown -R hive:hadoop /usr/local/apache-hive-1.2.0-bin/conf/hive-site.xml

 

 

To start the Hive metastore:

Code Block
su -l hive -c "env HADOOP_HOME=/usr/local/hadoop JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64 nohup hive --service metastore > /var/log/hive/hive.out 2> /var/log/hive/hive.log &"

 

To start HiveServer2:

Code Block
su -l hive -c "env HADOOP_HOME=/usr/local/hadoop JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64 nohup /usr/local/hive/bin/hiveserver2 -hiveconf hive.metastore.uris=\" \" > /var/log/hive/hiveServer2.out 2>/var/log/hive/hiveServer2.log &"


To Stop: 

Code Block
ps aux | awk '{print $1,$2}' | grep hive | awk '{print $2}' | xargs kill >/dev/null 2>&1


 

To log in to the Hive shell:

Code Block
/usr/local/hive/bin/beeline -u "jdbc:hive2://localhost:10000" -n rituser -p rituser



 

If the Hive metastore and HiveServer2 do not start, update the key-values given below according to your environment in the following files.

 

hiveserver2-site.xml

<configuration>
<property>
<name>hive.security.authorization.enabled</name>
<value>true</value>
</property>
<property>
<name>hive.security.authorization.manager</name>
<value>org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizerFactory</value>
</property>
<property>
<name>hive.security.authenticator.manager</name>
<value>org.apache.hadoop.hive.ql.security.SessionStateUserAuthenticator</value>
</property>
<property>
<name>hive.conf.restricted.list</name>
<value>hive.security.authorization.enabled,hive.security.authorization.manager,hive.security.authenticator.manager</value>
</property>
</configuration>


hive-site.xml

<property>
<name>hive.exec.scratchdir</name>
<value>/tmp/hive</value>
</property>
<property>
<name>hive.exec.local.scratchdir</name>
<value>/tmp/hive</value>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>/tmp/hive_resources</value>
</property>
<property>
<name>hive.scratch.dir.permission</name>
<value>733</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hive</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
</property>
<property>
<name>hive.hwi.listen.host</name>
<value>localhost</value>
</property>

 

Installing Apache HBase (1.1.0.1)

Let's download and install Apache HBase, following the excellent instructions available on the Apache HBase site itself.

Code Block
sudo tar xzvf ~/dev/hbase-1.1.0.1-bin.tar.gz -C /usr/local
cd /usr/local
sudo ln -s hbase-1.1.0.1 hbase
useradd hbase
cd hbase

Export HBASE_HOME in bashrc:

Code Block
echo "export HBASE_HOME=/usr/local/hbase" >> /etc/bashrc

 

 

For HBase 0.98.5 and later, you are required to set the JAVA_HOME environment variable before starting HBase
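One way to satisfy that requirement is to set JAVA_HOME in HBase's env script. The JDK path below matches the one used earlier in this guide (an assumption; use your own), and the sketch stages the line in a temp file rather than editing the real /usr/local/hbase/conf/hbase-env.sh.

```shell
# Stage the JAVA_HOME line that belongs in hbase-env.sh.
JAVA_HOME_LINE='export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64'
echo "$JAVA_HOME_LINE" > /tmp/hbase-env.fragment
cat /tmp/hbase-env.fragment
```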

 

Enabling Ranger HBase Plugin:

We'll start by extracting our build at the appropriate place.

Code Block
cd /usr/local
sudo tar zxvf ~/dev/incubator-ranger/target/ranger-0.5.0-hbase-plugin.tar.gz
sudo ln -s ranger-0.5.0-hbase-plugin ranger-hbase-plugin
cd ranger-hbase-plugin

 

 

 

 

 

Now let's edit the install.properties file. Here are the relevant lines that you should edit:

PROPERTY                      VALUE
POLICY_MGR_URL                http://localhost:6080
REPOSITORY_NAME               hbasedev
XAAUDIT.DB.IS_ENABLED         true
XAAUDIT.DB.FLAVOUR            MYSQL
XAAUDIT.DB.HOSTNAME           localhost
XAAUDIT.DB.DATABASE_NAME      ranger_audit
XAAUDIT.DB.USER_NAME          rangerlogger
XAAUDIT.DB.PASSWORD           rangerlogger

c. Now enable the hbase-plugin by running the enable-hbase-plugin.sh command (Remember to set JAVA_HOME)

Code Block
cd /usr/local/ranger-hbase-plugin
./enable-hbase-plugin.sh

 

d. Once these changes are done, restart HBase. This should start the association of ranger-hbase-plugin with HBase.

 

You can verify by logging into the Ranger Admin web interface -> Audit -> Agents.

 

e. To stop the master and regionserver, try:

Code Block
/usr/local/hbase/bin/hbase-daemon.sh stop master
/usr/local/hbase/bin/hbase-daemon.sh stop regionserver
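The guide only shows the stop commands; starting uses the same daemon script with start. The sketch below just echoes the commands (so it is safe to run anywhere); run them as-is on the HBase host.

```shell
# Print the matching start commands for the stop commands above.
HBASE_DAEMON=/usr/local/hbase/bin/hbase-daemon.sh
for role in master regionserver; do
  echo "$HBASE_DAEMON start $role"
done
```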

 

 

f. Provide the required permissions to users in the OS file system and HDFS file system according to your environment and requirements.

 

Installing Apache Knox Gateway:

Let's download and install Apache Knox from the Apache mirrors.

Code Block
sudo tar zxvf ~/dev/knox-0.6.0.tar.gz -C /usr/local
cd /usr/local
sudo ln -s knox-0.6.0 knox
cd knox

 

    2. Follow the instructions available on the Apache Knox site itself to install the Apache Knox Gateway.

Knox master secret: knox

 

Enabling Ranger Knox Plugin:

We'll start by extracting our build at the appropriate place.

Code Block
cd /usr/local
tar zxvf ~/dev/incubator-ranger/target/ranger-0.5.0-knox-plugin.tar.gz
sudo ln -s ranger-0.5.0-knox-plugin ranger-knox-plugin
cd ranger-knox-plugin

Now let's edit the install.properties file. Here are the relevant lines that you should edit:

PROPERTY                      VALUE
POLICY_MGR_URL                http://localhost:6080
REPOSITORY_NAME               knoxdev
KNOX_HOME                     /usr/local/knox
XAAUDIT.DB.IS_ENABLED         true
XAAUDIT.DB.HOSTNAME           localhost
XAAUDIT.DB.DATABASE_NAME      ranger
XAAUDIT.DB.USER_NAME          rangerlogger
XAAUDIT.DB.PASSWORD           rangerlogger

 

 

 

Now enable the knox-plugin by running the enable-knox-plugin.sh command (Remember to set JAVA_HOME)

Code Block
cd /usr/local/ranger-knox-plugin
./enable-knox-plugin.sh

 

Once these changes are done, restart Knox (Gateway / LDAP).

If you get a permission denied error during Knox start, provide the required privileges to the knox user, for example:

chown -R knox:knox /usr/local/knox/data
chown -R knox:knox /usr/local/knox/logs
chown -R knox:knox /usr/local/knox/pids
chown -R knox:hadoop /usr/local/knox/pids/*

 

 

 

You can verify by logging into the Ranger Admin web interface -> Audit -> Agents.

 

Trusting Self-Signed Knox Certificate:

       When Knox is listening on its SSL port with a self-signed certificate, you have to import the Knox SSL certificate into the truststore used by XA PolicyManager. Here are the steps for doing so.

    1. Log in to the machine running Knox.

    2. Export the Knox certificate:

cd $GATEWAY_HOME/data/security/keystores

This is typically /usr/local/knox/data/security/keystores on a Linux machine.

keytool -exportcert -alias gateway-identity -keystore gateway.jks -file knox.crt

 

 

 

    3. Copy the knox.crt file onto the machine running Ranger Admin/PolicyManager, into a working directory, for example /usr/local/ranger-admin.

    4. Replicate cacerts:

cd /usr/local/ranger-admin

cp $JAVA_HOME/jre/lib/security/cacerts cacertswithknox

 

    5. Import the Knox certificate into the replicated new keystore:

keytool -import -trustcacerts -file <knox.crt created above> -alias knox -keystore cacertswithknox

password: changeit

 

    6. Edit /usr/local/ranger-admin/ews/ranger-admin-services.sh and add the parameter -Djavax.net.ssl.trustStore=<path to the cacertswithknox> to the java call in the script.
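As a sketch, the flag ends up looking like the line below; the keystore path follows the cacertswithknox copy created above (adjust if you placed it elsewhere).

```shell
# Build the JVM trustStore flag for ranger-admin-services.sh.
TRUSTSTORE=/usr/local/ranger-admin/cacertswithknox
JAVA_TRUST_OPT="-Djavax.net.ssl.trustStore=${TRUSTSTORE}"
echo "$JAVA_TRUST_OPT"
```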

 

    7. Restart Ranger Admin/PolicyManager. 

Installing Apache Storm (0.10.0):

Let's download and install Apache Storm from the Apache mirrors.

Code Block
sudo tar zxvf ~/dev/apache-storm-0.10.0-beta1.tar.gz -C /usr/local
cd /usr/local
sudo ln -s apache-storm-0.10.0-beta1 storm
cd storm

 

 

   2. Follow the instructions available on the Apache Storm site itself to install Apache Storm.

Enabling Ranger Storm Plugin:

We'll start by extracting our build at the appropriate place.

Code Block
cd /usr/local
tar zxvf ~/dev/incubator-ranger/target/ranger-0.5.0-storm-plugin.tar.gz
sudo ln -s ranger-0.5.0-storm-plugin ranger-storm-plugin

 

 

    2. Now let's edit the install.properties file. Here are the relevant lines that you should edit:

PROPERTY                      VALUE
POLICY_MGR_URL                http://localhost:6080
REPOSITORY_NAME               stormdev
XAAUDIT.DB.IS_ENABLED         true
XAAUDIT.DB.HOSTNAME           localhost
XAAUDIT.DB.DATABASE_NAME      ranger
XAAUDIT.DB.USER_NAME          rangerlogger
XAAUDIT.DB.PASSWORD           rangerlogger

 

 

   3. Now enable the storm-plugin by running the enable-storm-plugin.sh command (Remember to set JAVA_HOME)

Code Block
cd /usr/local/ranger-storm-plugin
./enable-storm-plugin.sh

 

 

Once these changes are done, restart Storm.

 

 

You can verify by logging into the Ranger Admin web interface -> Audit -> Agents.

 

Installing Apache YARN:

You can run a MapReduce job on YARN in pseudo-distributed mode by setting a few parameters and additionally running the ResourceManager and NodeManager daemons.

The following instructions assume that the Hadoop installation steps mentioned in Installing Apache Hadoop have already been executed.

 

Enabling Ranger YARN Plugin:

We'll start by extracting our build at the appropriate place (/usr/local).

Code Block
cd /usr/local
sudo tar zxvf ~/dev/incubator-ranger/target/ranger-0.5.0-yarn-plugin.tar.gz
sudo ln -s ranger-0.5.0-yarn-plugin ranger-yarn-plugin
cd ranger-yarn-plugin

 

    2. Now let's edit the install.properties file. Here are the relevant lines that you should edit:

PROPERTY                      VALUE
POLICY_MGR_URL                http://localhost:6080
REPOSITORY_NAME               yarndev
XAAUDIT.DB.IS_ENABLED         true
XAAUDIT.DB.FLAVOUR            MYSQL
XAAUDIT.DB.HOSTNAME           localhost
XAAUDIT.DB.DATABASE_NAME      ranger_audit
XAAUDIT.DB.USER_NAME          rangerlogger
XAAUDIT.DB.PASSWORD           rangerlogger

 

 

   3. Now enable the yarn-plugin by running the enable-yarn-plugin.sh command.

Code Block
cd /usr/local/ranger-yarn-plugin
./enable-yarn-plugin.sh

 

   4. One more change that we need to make is to copy all the jar files from the ranger-yarn-plugin lib directory:

Code Block
cp /usr/local/ranger-yarn-plugin/lib/*.jar /usr/local/hadoop/share/hadoop/yarn/lib/

 

   5. If you get a permission denied error during YARN start, provide the required privileges to the yarn user in the local and HDFS file systems, for example:

Code Block
mkdir /var/log/yarn
chown -R yarn:yarn /var/log/yarn

 

 6. Once these changes are done, start the ResourceManager and NodeManager daemons.

Start the ResourceManager on the ResourceManager hosts:

Code Block
su yarn -c "/usr/local/hadoop/sbin/yarn-daemon.sh start resourcemanager"
ps -ef | grep -i resourcemanager

Start the NodeManager on the NodeManager hosts:

Code Block
su yarn -c "/usr/local/hadoop/sbin/yarn-daemon.sh start nodemanager"
ps -ef | grep -i nodemanager

Stop the ResourceManager on the ResourceManager hosts:

Code Block
su yarn -c "/usr/local/hadoop/sbin/yarn-daemon.sh stop resourcemanager"
ps -ef | grep -i resourcemanager

Stop the NodeManager on the NodeManager hosts:

Code Block
su yarn -c "/usr/local/hadoop/sbin/yarn-daemon.sh stop nodemanager"
ps -ef | grep -i nodemanager

 

  7. This should start the association of ranger-yarn-plugin with Hadoop.

 

You can verify by logging into the Ranger Admin web interface -> Audit -> Agents.

 

Installing Ranger KMS (0.5.0)

       Prerequisites: (needs to be done on every host on which Ranger KMS is to be installed)

1. Download the "Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files" zip using the link below, depending on the Java version used:

http://www.oracle.com/technetwork/java/javase/downloads/jce-7-download-432124.html

http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html

     2. Unzip the downloaded zip file into Java's security folder (depending on the Java version used):

Code Block
unzip UnlimitedJCEPolicyJDK7.zip -d $JDK_HOME/jre/lib/security
unzip jce_policy-8.zip -d $JDK_HOME/jre/lib/security

3. STEPS FOR RANGER KMS:

We'll start by extracting our build at the appropriate place (/usr/local).

Code Block
cd /usr/local
sudo tar zxvf ~/dev/incubator-ranger/target/ranger-0.5.0-kms.tar.gz
sudo ln -s ranger-0.5.0-kms ranger-kms
cd ranger-kms

Please note that the Ranger KMS plugin is integrated with Ranger KMS and will be installed automatically when KMS is installed.

Now let's edit the install.properties file. Here are the relevant lines that you should edit:

 

Change the install.properties file:

DB_FLAVOR
SQL_CONNECTOR_JAR
db_root_user
db_root_password
db_host
db_name
db_user
db_password

PROPERTY                      VALUE
POLICY_MGR_URL                http://localhost:6080
REPOSITORY_NAME               kmsdev
KMS_MASTER_KEY_PASSWD         enter master key password
XAAUDIT.DB.IS_ENABLED         true
XAAUDIT.DB.FLAVOUR            MYSQL
XAAUDIT.DB.HOSTNAME           localhost
XAAUDIT.DB.DATABASE_NAME      ranger_audit
XAAUDIT.DB.USER_NAME          rangerlogger
XAAUDIT.DB.PASSWORD           rangerlogger

 

Edit hdfs-site.xml (the provider must be set, else hadoop commands will not be supported):

Replace localhost with <internal host name>.

Code Block
cd /usr/local/hadoop/conf/
vim hdfs-site.xml

For the property "dfs.encryption.key.provider.uri", enter the value "kms://http@<internal host name>:9292/kms".

Save and quit.

 

 

 

Edit core-site.xml (the provider must be set, else hadoop commands will not be supported):

Replace localhost with <internal host name>.

Code Block
cd /usr/local/hadoop/conf/
vim core-site.xml

For the property "hadoop.security.key.provider.path", enter the value "kms://http@<internal host name>:9292/kms".

Save and quit.

Once these changes are done, restart Hadoop.
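Both properties take the same provider URI, so it can help to build it once and paste it into both files. KMS_HOST below is a placeholder for your internal host name.

```shell
# Build the key-provider URI used by dfs.encryption.key.provider.uri and
# hadoop.security.key.provider.path; substitute your real host.
KMS_HOST=kms.example.internal
KMS_PROVIDER_URI="kms://http@${KMS_HOST}:9292/kms"
echo "$KMS_PROVIDER_URI"
```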

 

Stop and start the NameNode daemon:

Code Block
su -l hdfs -c "/usr/local/hadoop/sbin/hadoop-daemon.sh stop namenode"
su -l hdfs -c "/usr/local/hadoop/sbin/hadoop-daemon.sh start namenode"

Run setup:

Code Block
./setup.sh

Start the KMS server:

Code Block
ranger-kms start

 

 

You can verify that the plugin is communicating with Ranger Admin in the Audit -> Plugins tab.

 

 

If the kmsdev service is not created in Ranger Admin, the kms-plugin will not be able to connect to Ranger Admin.

To create the KMS service:

 

 

PROPERTY                      VALUE
REPOSITORY_NAME               name specified in install.properties (e.g. kmsdev)
KMS URL                       kms://http@<internal host name>:9292/kms
Username                      <username> (e.g. keyadmin)
Password                      <password>

 

 

Check Test Connection.

   ENABLING AUDIT LOGGING TO HDFS:

 

To enable audit to HDFS for a plugin, do the below:

Set XAAUDIT.HDFS.ENABLE = true for the respective component plugin in the install.properties file, which may be found in the /usr/local/ranger-<component>-plugin/ directory.

Configure the NameNode host in XAAUDIT.HDFS.HDFS_DIR.

Create a policy in the HDFS service from Ranger Admin for the individual component users (hive/hbase/knox/storm/yarn/kafka/kms) giving READ+WRITE permission on the particular audit folder. For example, to enable the Hive component to log audits to HDFS, we need to create a policy for the hive user with READ+WRITE permissions on the respective audit directory.

Audit to HDFS caches logs in a local directory, which can be specified in XAAUDIT.HDFS.LOCAL_BUFFER_DIRECTORY (this can be like '/var/log/<component>/**'); this is the path where audit is stored temporarily. Likewise, for archived logs we need to update the XAAUDIT.HDFS.LOCAL_ARCHIVE_DIRECTORY value (this can be like '/var/log/<component>/**') before enabling the plugin for the component.

 

 

 

Note that HDFS audit logging is for archive purposes. For seeing audit reports in the Ranger Admin UI, the recommended option is Solr.

 

    ENABLING AUDIT LOGGING TO SOLR:

 

Set the following properties in the install.properties of the Ranger service so that audit to Solr works in Ranger:

PROPERTY                      VALUE
audit_store                   solr
audit_solr_urls               http://solr_host:6083/solr/ranger_audits
audit_solr_user               ranger_solr
audit_solr_password           NONE


Restart Ranger.

 

 

 

To enable audit to Solr for a plugin, do the below:

Set the following properties in the plugin's install.properties to start logging audit to Solr (e.g. for HBase):

PROPERTY                      VALUE
XAAUDIT.SOLR.IS_ENABLED       true
XAAUDIT.SOLR.ENABLE           true
XAAUDIT.SOLR.URL              http://solr_host:6083/solr/ranger_audits
XAAUDIT.SOLR.USER             ranger_solr
XAAUDIT.SOLR.PASSWORD         NONE
XAAUDIT.SOLR.FILE_SPOOL_DIR   /var/log/hadoop/hdfs/audit/solr/spool

 

 

Enable the Ranger plugin for HBase.

Restart the HBase component.
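Once the component is restarted, you can check that audit events are reaching Solr by querying the collection named in the table above. solr_host is a placeholder; substitute your Solr host.

```shell
# Query the ranger_audits collection for one document (placeholder host).
SOLR_URL="http://solr_host:6083/solr/ranger_audits/select?q=*:*&rows=1"
if command -v curl >/dev/null 2>&1; then
  curl -s "$SOLR_URL" || echo "Solr not reachable at $SOLR_URL"
else
  echo "curl not installed; skipping check"
fi
```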