The structure and execution of Trafodion test suites vary by component.




The DCS (Database Connectivity Services) code is written in Java, and is built and unit tested using Maven. The test suite organization and use follow Maven standards.




The core components are written in a combination of C++ and Java. Each time you add a new feature to SQL or modify an existing feature, you want to ensure that the current set of regression tests passes. You may also want to add coverage for your new feature, either by writing a new test or by extending an existing one. Below are some details on running the developer regression tests.


Running the full suite


Location of the tests: $MY_SQROOT/../sql/regress




Under this directory you will find several component directories, e.g., core, qat, compGeneral, and executor. Under $MY_SQROOT/../sql/regress/tools you will find the driver scripts that run each of these suites. "runallsb" is the driver script that runs all the regression suites. The script uses the sqlci command interface to run SQL commands.


To run the regression test suite, do the following:


cd $MY_SQROOT/../sql/regress
tools/runallsb


(The above assumes that you have already run the script that sets the $MY_SQROOT environment variable.)


This suite tests the SQL engine (Compiler and Executor) as well as the transaction and foundation layers. On completion, it prints out a test summary. All tests should pass, or pass with known diffs.


Another way to check the test results after the fact is:


cd $MY_SQROOT/rundir
grep FAIL */runregr-sb.log


Under each component directory you will find a file called runregr-sb.log where that component's regression results are stored. What you see on your terminal at the end of a regression run is a "cat" of all these component files.


A successful test run shows no failures.
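The grep shown above can be wrapped in a short shell sketch that prints a per-suite failure count (the function name is illustrative; it assumes the rundir layout described above):

```shell
# Summarize regression results: print one line per suite with the number
# of FAIL lines in that suite's runregr-sb.log.
summarize_regressions() {
    rundir=$1    # e.g. "$MY_SQROOT/rundir"
    for log in "$rundir"/*/runregr-sb.log; do
        [ -f "$log" ] || continue
        suite=$(basename "$(dirname "$log")")
        echo "$suite: $(grep -c FAIL "$log") FAIL line(s)"
    done
}

# Typical use after a full run:
# summarize_regressions "$MY_SQROOT/rundir"
```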


Running an individual suite/suites


cd $MY_SQROOT/../sql/regress
tools/runallsb <suite1> <suite2>


Running an individual test


  1. If you have already run the suite once, you will have all your directories set up and you can run one test as follows:


cd $MY_SQROOT/../sql/regress/<suite>
export rundir=$MY_SQROOT/rundir
export scriptsdir=$MY_SQROOT/../sql/regress
(you can add the two exports to your .bashrc for convenience)
cd $rundir/<suite>
$scriptsdir/<suite>/runregr -sb <test>


 e.g.:
   cd $rundir/executor
   $scriptsdir/executor/runregr -sb TEST130


  2. If you have not run any regression suites so far, you will not have the required subdirectories set up. You must create them manually for each suite you want to run.


mkdir $rundir
cd $rundir
mkdir <suitename>    # suitename should match the name of a directory in $scriptsdir


Then continue with the steps listed in step 1 above.
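The directory setup above can also be sketched as a small shell function (the function name is illustrative; it simply mirrors every directory found under $scriptsdir into $rundir):

```shell
# Create a rundir subdirectory for every suite directory found under
# scriptsdir, so individual suites can be run without a prior full run.
make_rundirs() {
    scriptsdir=$1    # e.g. "$MY_SQROOT/../sql/regress"
    rundir=$2        # e.g. "$MY_SQROOT/rundir"
    for suitedir in "$scriptsdir"/*/; do
        mkdir -p "$rundir/$(basename "$suitedir")"
    done
}

# Typical use:
# make_rundirs "$scriptsdir" "$rundir"
```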


Detecting Failures


If you see failures in any of your tests, try running that suite or test individually as detailed above.


Open the DIFF files and correlate them with the LOG and EXPECTED files.


  • DIFF files are in $rundir/<suite name>
  • LOG files are in $rundir/<suite name>
  • EXPECTED files are in $scriptsdir/<suite name>


To narrow down the failure, open the test file, e.g., TEST130 in $scriptsdir/executor. Recreate the problem with a smaller set of SQL commands and create a script to run from sqlci. If the issue can be recreated only by running the whole suite, you can add a line to the test just before the failing command to include a "wait" or a "sleep". For example, "sh sleep 60" will make the test pause and give you time to attach the sqlci process to the debugger. (You can find the pid of the sqlci process using sqps on the command line.)


Introducing a "wait" in the test will pause forever until you enter a character. This is another way to make the test pause so you can attach the debugger to the sqlci process.
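As a sketch, a cut-down repro script for sqlci might look like the following (the table name and statements are hypothetical, not taken from a real test); you can run it from sqlci with "obey repro.sql;":

```sql
-- repro.sql: a cut-down repro of the failing sequence (illustrative only)
create table t130 (a int not null primary key, b int);
insert into t130 values (1, 10);
insert into t130 values (2, 20);
-- pause here so you can attach a debugger to the sqlci process
-- (find its pid with sqps on the command line)
sh sleep 60;
select a, b from t130 order by a;
drop table t130;
```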


Modifying an existing regression test


If you would like to add coverage for your new change, you can modify an existing test. Run the test after your modifications. If you are satisfied with the results, you need to update the EXPECTED<test number> file to reflect your change. The standard way to do this is to copy the LOG<test number> file to the EXPECTED<test number> file.
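A minimal sketch of that copy step, assuming the $rundir and $scriptsdir exports described earlier (the function name is illustrative):

```shell
# Promote a verified LOG file to be the new EXPECTED file for a test.
# LOG files live in $rundir/<suite>; EXPECTED files in $scriptsdir/<suite>.
promote_expected() {
    suite=$1    # e.g. executor
    num=$2      # e.g. 130
    cp "$rundir/$suite/LOG$num" "$scriptsdir/$suite/EXPECTED$num"
}

# Typical use, after inspecting the new LOG output:
# promote_expected executor 130
```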




Phoenix Tests


The Phoenix tests were originally adapted from their counterparts in the Apache Phoenix project. The Trafodion architects in the early days felt that this set of basic functional tests could be used to iron out functional issues in Trafodion.


Just like the original version, our Trafodion version of the Phoenix tests is also open-sourced (bearing both the original and HP copyright headers). They can be downloaded by anyone who wants to run them. The tests are executed using Maven with a Python wrapper; both are standard test execution mechanisms in the open-source world. But you really don't need to know either of them to run the Phoenix tests. You can run them on your own workstation instance the same way Jenkins runs them. Here are the simple 1-2-3 steps:


  1. Prior to running the Phoenix tests, you need to bring up your Trafodion instance and DCS. You need to configure at least 2-4 servers for DCS. The tests need at least 2 mxosrvrs, as they make 2 connections at any given time. But we recommend configuring DCS with 4 mxosrvrs; we have seen situations where mxosrvrs do not get released in time for the next connection when there are only 2.
  2. Run phoenix tests from source tree:

    cd tests/phx
    ./phoenix_test.py --target=<host>:<port> --user=dontcare --pw=dontcare --targettype=TR --javahome=<jdk> --jdbccp=<jdir>/jdbcT4.jar
    <host>: your workstation name or IP address
    <port>: your DCS master port number
    <jdk>: the directory containing the jdk1.7.0_21_64 or later version of the JDK
    <jdir>: the directory containing your JDBC T4 jar file (export/lib if you downloaded a Trafodion binary package)
    If you only need to run a particular test, README.rst also has instructions on how to do that.

  3. Analyze the results. The test results can be found in phoenix_test/target/surefire-reports. Any failures come with file names and line numbers. The source code can be found in phoenix_test/src/test/java/com/hp/phoenix/end2end. These are JDBC tests written in Java.
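As a sketch, failing test classes can be picked out of the Surefire text reports by grepping their summary lines (the function name is illustrative; it assumes the standard Surefire "Tests run: …, Failures: …, Errors: …" summary format):

```shell
# List Surefire text reports that recorded at least one failure or error.
list_phoenix_failures() {
    reports=$1    # e.g. phoenix_test/target/surefire-reports
    grep -l -E 'Failures: [1-9]|Errors: [1-9]' "$reports"/*.txt 2>/dev/null
}

# Typical use:
# list_phoenix_failures phoenix_test/target/surefire-reports
```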


JDBC T4 Tests


The code is written in Java, and is built and unit tested using Maven. The test suite organization and use follow Maven standards. Instructions for setting up and running the tests can be found in the source tree at dcs/src/test/jdbc_test.


PyODBC Tests


The code is written for the Python 2.7 unittest framework and is run via Testr and Tox. Instructions for setting up and running the tests can be found in the source tree at dcs/src/test/pytests.

