
Introduction

Bigtop is based on iTest, which maintains a clear separation between test artifacts and test execution. Test artifacts are ordinary Maven artifacts whose classes contain @Test methods. The test execution phase is driven by Maven pom.xml files, in which you declare dependencies on the test artifacts you want to run and bind everything to the maven-failsafe-plugin's verify goal.
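
For example, a test artifact is nothing more than a class with JUnit @Test methods, built and installed as a Maven artifact so that the execution poms can depend on it. Here is a minimal sketch (the package, class, and method names are made up for illustration):

    package org.apache.bigtop.itest.example   // hypothetical package

    import org.junit.Test
    import static org.junit.Assert.assertTrue

    // A minimal test artifact class: an ordinary JUnit test, packaged as a Maven
    // artifact so that a test-execution pom.xml can depend on it and run it
    // through the maven-failsafe-plugin's verify goal.
    class TestExampleSmoke {
      @Test
      void simpleSmokeCheck() {
        // A real smoke test would shell out to hadoop/hive/etc. and assert on the results.
        assertTrue(1 + 1 == 2)
      }
    }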

These tests can also be run together with a new feature (pending BIGTOP-1388), cluster failure tests, which are explained at the end of this page.

Note: since BIGTOP-1222 there is a simple way to run smoke tests for your Hadoop cluster in Bigtop that does not require jar files or Maven. To run smoke tests that validate your cluster, see the README in the bigtop-tests/smoke-tests directory.

Running existing unit tests

  • Make sure that you have Maven installed (version 3.0.4 or later)
  • Make sure that you have the following defined in your environment:

    export JAVA_HOME=/usr/java/latest
    export HADOOP_HOME=/usr/lib/hadoop
    export HADOOP_CONF_DIR=/etc/hadoop/conf
    export HBASE_HOME=/usr/lib/hbase
    export HBASE_CONF_DIR=/etc/hbase/conf
    export ZOOKEEPER_HOME=/usr/lib/zookeeper
    export HIVE_HOME=/usr/lib/hive
    export PIG_HOME=/usr/lib/pig
    export FLUME_HOME=/usr/lib/flume
    export SQOOP_HOME=/usr/lib/sqoop
    export HCAT_HOME=/usr/lib/hcatalog
    export OOZIE_URL=http://localhost:11000/oozie
    export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
    
  • Given the ongoing issues with Apache Jenkins builds, you might need to install everything locally:

    mvn -f bigtop-test-framework/pom.xml -DskipTests install
    mvn -f bigtop-tests/test-execution/conf/pom.xml install
    mvn -f bigtop-tests/test-execution/common/pom.xml install 
    mvn -f bigtop-tests/test-artifacts/pom.xml install
    
  • Start test execution:

    cd bigtop-tests/test-execution/smokes/<subsystem>
    mvn verify
    
  • Cluster Failure Tests:

    Cluster failures are handled by three classes: ServiceKilledFailure.groovy, ServiceRestartFailure.groovy, and NetworkShutdownFailure.groovy. We will call the functionality of these classes "cluster failures." Each of them extends the abstract class AbstractFailure.groovy, which implements Runnable, and each runnable class executes specific shell commands that deliberately fail a cluster. When a cluster failure is executed, it calls populateCommandsList(), which fills the failCommands and restoreCommands data structures with values pertaining to that failure. Each value consists of a shell command template, such as "sudo pkill -9 -f %s", and the host to run the command on. From these, shell commands are generated and executed (see the sketch at the end of this bullet). Note: the host can be specified when instantiating the cluster failure or configured in /resources/vars.properties.
    • ServiceKilledFailure will execute commands that kill a specified service and then start it again.

        private static final String KILL_SERVICE_TEMPLATE = "sudo pkill -9 -f %s"
        private static final String START_SERVICE_TEMPLATE = "sudo service %s start"
    • ServiceRestartFailure will execute commands that stop and then start a service.

        private static final String STOP_SERVICE_TEMPLATE = "sudo service %s stop"
        private static final String START_SERVICE_TEMPLATE = "sudo service %s start"
    • NetworkShutdownFailure will execute a series of iptables commands that drop connections to and from specified hosts and later restore them.

        private static final String DROP_INPUT_CONNECTIONS = "sudo iptables -A INPUT -s %s -j DROP"
        private static final String DROP_OUTPUT_CONNECTIONS = "sudo iptables -A OUTPUT -d %s -j DROP"
        private static final String RESTORE_INPUT_CONNECTIONS = "sudo iptables -D INPUT -s %s -j DROP"
        private static final String RESTORE_OUTPUT_CONNECTIONS = "sudo iptables -D OUTPUT -d %s -j DROP"
    Two other classes that must be mentioned are FailureVars.groovy and FailureExecutor.groovy. FailureVars, when instantiated, loads its configuration from /resources/vars.properties in preparation for failing the cluster. The configuration dictates which cluster failures will be executed, along with a variety of timing options; see "How to run cluster failure tests" below for more information. FailureExecutor is the main driver that creates and runs cluster failure threads (the threads run in parallel with Hadoop and MapReduce jobs). The sequence of execution is as follows:

    • FailureVars will configure all variables that are necessary for cluster failures.
    • FailureExecutor will then spawn and execute cluster failure threads.
    • Each thread then runs its respective shell commands on the hosts specified by the user.
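
    To make this concrete, here is a rough sketch of how a stop/start template and a service name become the shell commands that a failure runs and then reverses. The service name below is an assumption chosen only for illustration; the real values come from the failure object and vars.properties:

        // Hypothetical illustration of how the command templates become shell commands.
        String stopTemplate  = "sudo service %s stop"
        String startTemplate = "sudo service %s start"
        String service = "hadoop-hdfs-datanode"   // assumed service name, for illustration only

        String failCommand    = String.format(stopTemplate, service)    // "sudo service hadoop-hdfs-datanode stop"
        String restoreCommand = String.format(startTemplate, service)   // "sudo service hadoop-hdfs-datanode start"
        // The failure runs failCommand on the configured host, waits, then runs restoreCommand.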

  • How to run cluster failure tests:

    Since the cluster failures are all Runnables, you just have to instantiate the objects and execute them in the tests you are running (a sketch of running a failure object directly appears at the end of this list). If you wish to run cluster failures in parallel with Hadoop and MapReduce jobs to test for job completion, you must use FailureVars and FailureExecutor. Let's say we want to run cluster failure tests while a MapReduce test such as TestDFSIO is running:
    • The first step is to create a FailureVars object before the test runs, inside TestDFSIO.groovy.

        @Before
        void configureVars() {
          // Instantiating FailureVars loads the failure configuration from /resources/vars.properties.
          def failureVars = new FailureVars();
        }
    • The next step is to insert code that spawns and starts a FailureExecutor thread inside the test body of TestDFSIO:

        @Test
        public void testDFSIO() {
          FailureExecutor failureExecutor = new FailureExecutor();
          Thread failureThread = new Thread(failureExecutor, "DFSIO");
          failureThread.start();

          // the test
          ...
          ...
        }
    • Now the user just has to execute the test; when it runs, the cluster failures will run in parallel with the MapReduce test.
    • To configure the hosts as well as various timing options, open /resources/vars.properties. There you can specify the hosts, which cluster failures to run, and when the cluster failures start. You can also specify the time between cluster failures and how long services stay down before being brought back up. Refer to the /bigtop/bigtop-test-framework/README for more information on vars.properties.
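    • If you only need a one-off failure inside a single test, you can also run a failure object directly, since every cluster failure is a Runnable. The constructor arguments below are assumptions made for illustration (a list of hosts and a service name); check the failure classes for their actual signatures.

        // Sketch only: the exact ServiceRestartFailure constructor parameters are assumed here.
        def failure = new ServiceRestartFailure(["node1.example.com"], "hadoop-hdfs-datanode")
        Thread failureThread = new Thread(failure, "service-restart-failure")
        failureThread.start()

        // ... run the workload under test here ...

        failureThread.join()   // wait for the failure/restore cycle to finish before asserting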

Things to keep in mind

  • If you want to run only a subset of tests, you can use -Dorg.apache.maven-failsafe-plugin.testInclude='**/Mask*', where Mask* matches the names of the test classes you want to run
  • It is helpful to add -Dorg.apache.bigtop.itest.log4j.level=TRACE to your mvn verify command to get more verbose test output