This document describes the steps required to build the Apache Trafodion software.
Supported Platforms
Red Hat or CentOS 6.x (6.4 or later) is supported as a development and production platform.
Prerequisites
Install the following packages:
sudo yum install epel-release
sudo yum install alsa-lib-devel ant ant-nodeps boost-devel cmake \
device-mapper-multipath dhcp flex gcc-c++ gd git glibc-devel \
glibc-devel.i686 graphviz-perl gzip java-1.7.0-openjdk-devel \
libX11-devel libXau-devel libaio-devel \
libcurl-devel libibcm.i686 libibumad-devel libibumad-devel.i686 \
libiodbc libiodbc-devel librdmacm-devel librdmacm-devel.i686 \
libxml2-devel log4cxx log4cxx-devel lua-devel lzo-minilzo \
net-snmp-devel net-snmp-perl openldap-clients openldap-devel \
openldap-devel.i686 openmotif openssl-devel openssl-devel.i686 \
openssl-static perl-Config-IniFiles perl-Config-Tiny \
perl-DBD-SQLite perl-Expect perl-IO-Tty perl-Math-Calc-Units \
perl-Params-Validate perl-Parse-RecDescent perl-TermReadKey \
perl-Time-HiRes protobuf-compiler protobuf-devel python-qpid \
python-qpid-qmf qpid-cpp-client \
qpid-cpp-client-ssl qpid-cpp-server qpid-cpp-server-ssl \
qpid-qmf qpid-tools readline-devel saslwrapper sqlite-devel \
unixODBC unixODBC-devel uuid-perl wget xerces-c-devel xinetd
Check the following before continuing
- Java version must be 1.7.x. To check your version, run "java -version". Ensure the JAVA_HOME environment variable exists and is set to your JDK installation.
- Verify that the Trafodion source has been downloaded and un-tarred, or cloned from GitHub.
- Download, build, and install additional development tools via Additional Build Tools.
- Maven version 3.0.5 or greater must be installed and on your path. To check your version, run "mvn --version".
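The Java version check above can be scripted. A minimal sketch; the function name check_java_ver is illustrative and not part of Trafodion:

```shell
# check_java_ver: prints "ok" when the given Java version string is 1.7.x,
# otherwise flags it as incompatible (Trafodion 1.3 requires Java 1.7).
check_java_ver() {
  case "$1" in
    1.7.*) echo "ok" ;;
    *)     echo "incompatible: $1" ;;
  esac
}

# On a real system, feed it the version reported by java itself:
#   check_java_ver "$(java -version 2>&1 | awk -F '"' '/version/ {print $2}')"
# Also confirm JAVA_HOME points at an actual JDK:
#   [ -n "$JAVA_HOME" ] && [ -x "$JAVA_HOME/bin/javac" ] || echo "fix JAVA_HOME"
```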
Check your system limits. Some of them may need to be increased, or Trafodion will not start.
The following are the recommended values:
ulimit -a
- core file size (blocks, -c) 1000000
- data seg size (kbytes, -d) unlimited
- scheduling priority (-e) 0
- file size (blocks, -f) unlimited
- pending signals (-i) 515196
- max locked memory (kbytes, -l) 49595556
- max memory size (kbytes, -m) unlimited
- open files (-n) 32000
- pipe size (512 bytes, -p) 8
- POSIX message queues (bytes, -q) 819200
- real-time priority (-r) 0
- stack size (kbytes, -s) 10240
- cpu time (seconds, -t) unlimited
- max user processes (-u) 267263
- virtual memory (kbytes, -v) unlimited
- file locks (-x) unlimited
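If any of these limits are too low on your system, they can be raised persistently via /etc/security/limits.conf. A sketch, assuming the build user is named trafodion (substitute your own user name):

```
# /etc/security/limits.conf entries (user name "trafodion" is an example)
trafodion  soft  nofile  32000
trafodion  hard  nofile  32000
trafodion  soft  nproc   267263
trafodion  hard  nproc   267263
trafodion  soft  core    1000000
trafodion  hard  core    1000000
```

Log out and back in for the new limits to take effect, then re-check with "ulimit -a".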
Build Steps
Set up Trafodion configuration file
Additional development tools are required before building Trafodion, as described in Additional Build Tools. A convenience script exists that downloads, installs, and builds all of these tools in a common directory.
If this convenience script is not used, or if any of these additional build tools are not found in the expected location, then the Trafodion configuration file needs to be updated. The configuration file template is located at <download directory>/apache-trafodion-1.3.0-incubating/core/sqf/LocalSettingsTemplate.sh. Copy this file to your home directory, renaming it to .trafodion. Edit the .trafodion file and update it according to the instructions in the file. Be sure to change the location of TOOLSDIR to your <tools installation directory>.
cp <download directory>/apache-trafodion-1.3.0-incubating/core/sqf/LocalSettingsTemplate.sh ~/.trafodion
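After copying, the key edit in ~/.trafodion is the tools location. For example (the path /opt/trafodion-tools is an assumption; use your actual <tools installation directory>):

```
# in ~/.trafodion -- point TOOLSDIR at where the additional build tools were installed
export TOOLSDIR=/opt/trafodion-tools
```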
Build Trafodion
Start a new ssh session
cd <download directory>/apache-trafodion-1.3.0-incubating
export TOOLSDIR=<tools installation directory>
source ./env.sh
Build using one of the following options:
make all
(builds Trafodion, DCS, and REST) OR
make package
(builds Trafodion, DCS, REST, and client drivers) OR
make package-all
(builds Trafodion, DCS, REST, client drivers, and tests for all components)
Verify the build by executing the sqvers command with the -u option. It should report that seven jar files exist.
sqvers -u
Prepare the test environment
It is recommended that you test Trafodion by setting up a local version of Hadoop and installing Trafodion on top. This is enabled by running the 'install_local_hadoop' script.
Run install_local_hadoop
This section describes the steps to use the Trafodion installation script called 'install_local_hadoop' that downloads compatible versions of Hadoop, HBase, Hive, and MySql and starts Trafodion.
Make sure you have set up passwordless authentication. You should be able to run "ssh localhost" without having to enter a password.
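A sketch of enabling passwordless ssh to localhost, assuming no existing key (adjust if you already have one or use a different key type):

```shell
# Create an ssh key if none exists and authorize it for localhost logins.
SSH_DIR="${SSH_DIR:-$HOME/.ssh}"
mkdir -p "$SSH_DIR" && chmod 700 "$SSH_DIR"
[ -f "$SSH_DIR/id_rsa" ] || ssh-keygen -q -t rsa -N "" -f "$SSH_DIR/id_rsa"
cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
# Verify: the following should succeed without prompting for a password
# ssh localhost true
```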
If you started a new ssh session, be sure to
cd <download directory>/apache-trafodion-1.3.0-incubating
source ./env.sh
Install Hadoop
cd $MY_SQROOT/sql/scripts
install_local_hadoop
./install_traf_components
Verify that the install completed by running the following command. It should report that 6 java servers and 2 mysqld processes are running:
swstatus
Note:
The 'install_local_hadoop' script downloads Hadoop, HBase, Hive, and MySql jar files from the internet. To avoid this overhead, you can download the required files into a separate directory and set the environment variable MY_LOCAL_SW_DIST to point to this directory.
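For example, the staging directory could be set up like this before running the script (the directory name local_sw_dist is an example; any writable path works):

```shell
# Stage the downloaded tar files once and reuse them across installs.
mkdir -p "$HOME/local_sw_dist"
export MY_LOCAL_SW_DIST="$HOME/local_sw_dist"
# Place the Hadoop, HBase, Hive, and MySql tar files in $MY_LOCAL_SW_DIST,
# then run install_local_hadoop as usual.
```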
The following options are available with 'install_local_hadoop'. Use the -p option if the default Hadoop ports are already in use on your machine:
'install_local_hadoop' - uses the default port numbers for all services OR
'install_local_hadoop -p fromDisplay' - starts Hadoop with a port number range determined from the DISPLAY environment variable OR
'install_local_hadoop -p rand' - starts with a random port number range between 9000 and 49000 OR
'install_local_hadoop -p <port number>' - starts with the specified port number
If you don't specify the -p option, the following default ports are used:
MY_DCS_MASTER_INFO_PORT=24400
MY_DCS_SERVER_INFO_PORT=24410
MY_REST_SERVER_PORT=4200
MY_REST_SERVER_SECURE_PORT=4201
When this script completes, Hadoop, HBase, Hive, and MySql (used as Hive's metadata repository) have been installed and are started.
To start, stop, or check the Hadoop environment using the Trafodion-supplied scripts, execute 'swstartall', 'swstopall', or 'swstatus'. If you need to remove the installation, execute 'swuninstall_local_hadoop'.
Use pre-installed Hadoop
If you want to use an already installed version of Hadoop, build the binary tar files and then install Trafodion following the instructions described in Installation.
To build binary files
cd <download directory>/apache-trafodion-1.3.0-incubating
make package
Your binary tar files will be created in <download directory>/apache-trafodion-1.3.0-incubating/distribution directory.
Run Trafodion
This section describes how to start Trafodion and run operations.
Setup required each time source is downloaded
Start a new ssh session and set up environment:
cd <download directory>/apache-trafodion-1.3.0-incubating
source ./env.sh
cd $MY_SQROOT/etc
# delete ms.env, if it exists
rm ms.env
cd $MY_SQROOT/sql/scripts
sqgen
Start up Trafodion
cd $MY_SQROOT/sql/scripts
sqstart
sqcheck
Note: If there are any issues and you need to stop and restart a specific Trafodion component, you can use the component-based start/stop scripts:
Component                           | Start script | Stop script
------------------------------------|--------------|------------
All of Trafodion                    | sqstart      | sqstop
DCS (Database Connectivity Service) | dcsstart     | dcsstop
REST server                         | reststart    | reststop
LOB server                          | lobstart     | lobstop
RMS server                          | rmsstart     | rmsstop
Checking the status of Trafodion and its components
There are several health-check scripts available that provide the status of Trafodion:
sqcheck (for all of Trafodion)
dcscheck (for the Database Connectivity Service)
rmscheck (for the RMS server)
Create Trafodion metadata
If you started a new ssh session, be sure to
cd <download directory>/apache-trafodion-1.3.0-incubating
source ./env.sh
Trafodion is up and running; you can now start a SQL command-line interface and initialize Trafodion:
sqlci
Execute the following statements:
initialize trafodion;
exit
Validate
Test your setup using "sqlci" or "trafci" (trafci uses DCS to connect to the SQL engine):
get schemas;
create table table1 (a int);
invoke table1;
insert into table1 values (1), (2), (3), (4);
select * from table1;
exit;
You are done and ready to go!