Describes the steps required to build Apache Trafodion.
Supported Platforms
Red Hat or CentOS 6.x (6.4 or later) is supported as a development and production platform.
Prerequisites
Install the following packages before building Apache Trafodion.
sudo yum install epel-release
sudo yum install alsa-lib-devel ant ant-nodeps boost-devel cmake \
    device-mapper-multipath dhcp flex gcc-c++ gd git glibc-devel \
    glibc-devel.i686 graphviz-perl gzip java-1.7.0-openjdk-devel \
    libX11-devel libXau-devel libaio-devel \
    libcurl-devel libibcm.i686 libibumad-devel libibumad-devel.i686 \
    libiodbc libiodbc-devel librdmacm-devel librdmacm-devel.i686 \
    libxml2-devel log4cxx log4cxx-devel lua-devel lzo-minilzo \
    net-snmp-devel net-snmp-perl openldap-clients openldap-devel \
    openldap-devel.i686 openmotif openssl-devel openssl-devel.i686 \
    openssl-static perl-Config-IniFiles perl-Config-Tiny \
    perl-DBD-SQLite perl-Expect perl-IO-Tty perl-Math-Calc-Units \
    perl-Params-Validate perl-Parse-RecDescent perl-TermReadKey \
    perl-Time-HiRes protobuf-compiler protobuf-devel python-qpid \
    python-qpid-qmf qpid-cpp-client \
    qpid-cpp-client-ssl qpid-cpp-server qpid-cpp-server-ssl \
    qpid-qmf qpid-tools readline-devel saslwrapper sqlite-devel \
    unixODBC unixODBC-devel uuid-perl wget xerces-c-devel xinetd
Once the packages are installed, check the following.
Java Version
The Java version must be 1.7.x. Check it as follows:
$ java -version
java version "1.7.0_85"
OpenJDK Runtime Environment (rhel-2.6.1.3.el6_6-x86_64 u85-b01)
OpenJDK 64-Bit Server VM (build 24.85-b03, mixed mode)
Ensure that the JAVA_HOME environment variable exists and points to your JDK installation.
$ echo $JAVA_HOME

$ which java
/usr/bin/java
$ export JAVA_HOME=/usr/bin
$
You should export JAVA_HOME in your .bashrc or .profile file.
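To make the setting persistent, the export can be appended to your .bashrc. This is a minimal sketch; the JDK path shown is an assumption, so substitute the location reported on your own system:

```shell
# Persist JAVA_HOME across sessions. The JDK path below is an assumption;
# replace it with the path reported by your own installation.
echo 'export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64' >> "$HOME/.bashrc"
echo 'export PATH=$JAVA_HOME/bin:$PATH' >> "$HOME/.bashrc"
```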
Verify Trafodion Download
Verify that the Trafodion source has been either:
- Downloaded and unpackaged.
- Cloned from GitHub.
If not, please do so now. Refer to Contributor Workflow - Code/Docs.
Install Required Build Tools
Refer to Required Build Tools for instructions.
Verify Maven Version
The Trafodion build environment requires Maven 3.0.5 or later.
$ mvn --version
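If you want to check the minimum programmatically, version strings can be compared with sort -V. This is only a sketch; the 3.3.9 fallback is a placeholder for machines where mvn is not yet on the PATH:

```shell
# Rough check that the installed Maven meets the 3.0.5 minimum, using
# sort -V for version ordering. The 3.3.9 fallback is a placeholder
# assumption for machines without mvn on the PATH.
required="3.0.5"
current=$(mvn --version 2>/dev/null | awk '/^Apache Maven/ {print $3}')
current=${current:-3.3.9}
lowest=$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
  echo "Maven $current satisfies the minimum ($required)"
else
  echo "Maven $current is older than $required; please upgrade"
fi
```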
Verify System Limits
Check that the system limits in your environment are appropriate for Apache Trafodion. If they are not, you will need to increase them; otherwise Trafodion cannot start.
$ ulimit -a
core file size          (blocks, -c) 1000000
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 515196
max locked memory       (kbytes, -l) 49595556
max memory size         (kbytes, -m) unlimited
open files                      (-n) 32000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 267263
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
Refer to this article for information on how to change system limits.
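As a sketch, persistent limits are commonly raised through /etc/security/limits.conf. The entries below are staged in a temporary file first; the user name and the values are assumptions for illustration, not Trafodion requirements:

```shell
# Stage example limits entries. The "trafodion" user and the values are
# assumptions; adjust them for your site, then append as root with:
#   sudo sh -c 'cat /tmp/trafodion-limits.conf >> /etc/security/limits.conf'
cat > /tmp/trafodion-limits.conf <<'EOF'
trafodion  soft  nofile  32768
trafodion  hard  nofile  32768
trafodion  soft  nproc   100000
trafodion  hard  nproc   100000
EOF
```

Changes to limits.conf take effect on the next login session, so log out and back in before re-checking with ulimit -a.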
Build Trafodion
Start a new ssh session. Use the following commands to set up the Trafodion environment variables.
cd <download directory>/apache-trafodion-1.3.0-incubating
export TOOLSDIR=<tools installation directory>
source ./env.sh
If you do not set TOOLSDIR before sourcing env.sh, the tools location defaults to /opt/home/tools. You may want to edit your .bashrc or .profile file to always export TOOLSDIR.
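A minimal sketch of persisting the setting; the directory shown is an assumption, so substitute your actual tools installation directory:

```shell
# Persist TOOLSDIR so env.sh always finds the build tools.
# $HOME/trafodion-tools is an assumed example path; substitute your own.
echo 'export TOOLSDIR=$HOME/trafodion-tools' >> "$HOME/.bashrc"
```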
Build using one of the following options:
make all
(Builds Trafodion, DCS, and REST)

make package
(Builds Trafodion, DCS, REST, and the client drivers)

make package-all
(Builds Trafodion, DCS, REST, the client drivers, and tests for all components)
Verify the build by executing sqvers with the -u option. It should report that seven jar files exist.
sqvers -u
Prepare the test environment
It is recommended that you test Trafodion by setting up a local Hadoop instance and installing Trafodion on top of it. The install_local_hadoop script does this for you.
Run install_local_hadoop
This section describes how to use the Trafodion installation script install_local_hadoop, which downloads compatible versions of Hadoop, HBase, Hive, and MySQL and then starts Trafodion.
If you started a new ssh session, be sure to
cd <download directory>/apache-trafodion-1.3.0-incubating
source ./env.sh
Make sure you have set up passwordless ssh authentication: you should be able to run "ssh localhost" without being prompted for a password. To set up passwordless authentication:
ssh-keygen -t rsa -N "" -f $HOME/.ssh/id_rsa
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
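After generating and installing the key, the setup can be verified non-interactively. This check is a sketch: BatchMode makes ssh fail rather than prompt, so the result is unambiguous.

```shell
# Verify key-based login to localhost works without a password prompt.
# BatchMode=yes forces ssh to fail instead of asking for a password.
if ssh -o BatchMode=yes -o StrictHostKeyChecking=no localhost true 2>/dev/null; then
  ssh_ok="yes"
else
  ssh_ok="no"
fi
echo "passwordless ssh to localhost: $ssh_ok"
```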
Install Hadoop
cd $MY_SQROOT/sql/scripts
install_local_hadoop
./install_traf_components
Verify that the install completed by running the following command. It should report: 6 java servers and 2 mysqld processes are running.
swstatus
Note:
The 'install_local_hadoop' script downloads Hadoop, HBase, Hive, and MySQL jar files from the internet. To avoid this overhead, you can download the required files into a separate directory and set the environment variable MY_LOCAL_SW_DIST to point to that directory.
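A sketch of setting up such a cache; the directory name is an assumption, and the tarballs themselves still need to be downloaded into it once (for example with wget):

```shell
# Create a local software cache and point the scripts at it.
# $HOME/local_sw_dist is an assumed example path; substitute your own,
# and download the Hadoop/HBase/Hive/MySQL archives into it once.
mkdir -p "$HOME/local_sw_dist"
export MY_LOCAL_SW_DIST="$HOME/local_sw_dist"
```

Exporting the variable in .bashrc keeps the cache in effect for future sessions.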
The following options are available with 'install_local_hadoop'. Use the -p option if the default Hadoop ports are already in use on your machine:
install_local_hadoop                 (uses the default port numbers for all services)
install_local_hadoop -p fromDisplay  (starts Hadoop with a port number range determined from the DISPLAY environment variable)
install_local_hadoop -p rand         (starts with a random port number range between 9000 and 49000)
install_local_hadoop -p <port #>     (starts with the specified port number)
If you don't specify the -p option, the following default ports are used:
MY_DCS_MASTER_PORT=23400
MY_DCS_MASTER_INFO_PORT=24400
MY_REST_SERVER_PORT=4200
MY_REST_SERVER_SECURE_PORT=4201
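To decide whether the -p option is needed, the default ports can be probed before installing. This is a sketch that assumes the ss utility from iproute is available:

```shell
# Count how many of the default ports are already bound; if any are,
# pass -p to install_local_hadoop. Assumes the ss utility is installed.
in_use=0
for port in 23400 24400 4200 4201; do
  if ss -ltn 2>/dev/null | grep -q ":$port "; then
    echo "port $port is already in use"
    in_use=$((in_use + 1))
  fi
done
echo "$in_use of 4 default ports are in use"
```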
When this script completes, Hadoop, HBase, Hive, and MySQL (used as Hive's metadata repository) are installed and running.
To start, stop, or check the Hadoop environment with the Trafodion-supplied scripts, execute 'swstartall', 'swstopall', and 'swstatus'. If you need to remove the installation, execute 'swuninstall_local_hadoop'.
Use pre-installed Hadoop
If you want to use an already-installed version of Hadoop, build the binary tar files and then install Trafodion following the instructions in Installation.
To build the binary files:
cd <download directory>/apache-trafodion-1.3.0-incubating
make package
Your binary tar files are created in the <download directory>/apache-trafodion-1.3.0-incubating/distribution directory.
Run Trafodion
This section describes how to start Trafodion and run operations.
Setup required each time the source is downloaded
Start a new ssh session and set up environment:
cd <download directory>/apache-trafodion-1.3.0-incubating
source ./env.sh
cd $MY_SQROOT/etc
# delete ms.env, if it exists
rm -f ms.env
cd $MY_SQROOT/sql/scripts
sqgen
Start up Trafodion
cd $MY_SQROOT/sql/scripts
sqstart
sqcheck
Note: If you run into issues and need to stop and restart a specific Trafodion component, use the component-based start/stop scripts:
Component                               | Start script | Stop script
For all of Trafodion                    | sqstart      | sqstop
For DCS (Database Connectivity Service) | dcsstart     | dcsstop
For REST server                         | reststart    | reststop
For LOB server                          | lobstart     | lobstop
For RMS server                          | rmsstart     | rmsstop
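For example, restarting only DCS after a configuration change can be sketched as follows; the guard is an assumption added so the snippet degrades gracefully when env.sh has not been sourced:

```shell
# Restart only the DCS component. The PATH guard is an assumption added
# for safety; normally env.sh puts these scripts on the PATH.
restart_dcs() {
  if command -v dcsstop >/dev/null 2>&1; then
    dcsstop && dcsstart && dcscheck
  else
    echo "DCS scripts not on PATH; source env.sh first"
  fi
}
restart_dcs
```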
Checking the status of Trafodion and its components
Several health-check scripts are available that report the status of Trafodion:
sqcheck   (for all of Trafodion)
dcscheck  (for the Database Connectivity Service)
rmscheck  (for the RMS server)
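The three checks can be run in sequence as a quick overall probe. This is a sketch; the PATH guard and summary variable are assumptions added so the loop keeps going past failures:

```shell
# Run all three health checks in sequence, continuing past failures.
# The PATH guard is an assumption for robustness before env.sh is sourced.
summary=""
for chk in sqcheck dcscheck rmscheck; do
  if command -v "$chk" >/dev/null 2>&1; then
    "$chk" || echo "$chk reported a problem"
    summary="$summary $chk:found"
  else
    echo "$chk not on PATH; source env.sh first"
    summary="$summary $chk:missing"
  fi
done
```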
Create Trafodion metadata
If you started a new ssh session, be sure to
cd <download directory>/apache-trafodion-1.3.0-incubating
source ./env.sh
Trafodion is now up and running. You can start a SQL command-line interface and initialize Trafodion:
sqlci
Execute the following statements:
initialize trafodion;
exit
Validate
Test your setup using "sqlci" or "trafci" (trafci uses DCS to connect to the SQL engine):
get schemas;
create table table1 (a int);
invoke table1;
insert into table1 values (1), (2), (3), (4);
select * from table1;
exit;
You are done and ready to go!