Currently all releases of Cloud-platform are based out of ACS master, so it is imperative to monitor and improve the health of ACS master.

For this we need infrastructure and an automated way of testing ACS master in a continuous fashion. This can be achieved by building a continuous integration system capable of running tests and creating test beds on demand.


The purpose of the CIS is to provide a means to test and monitor the health of ACS master.


Glossary


  • CI: Continuous integration.

  • CS: CloudStack

  • CIS: Continuous integration system.

  • MTF: Marvin test framework.

  • MCF: Marvin config file.

  • MS: Management server.

Use cases

The CIS is required to:

  • Create a fully working test bed, run tests and publish the results.

  • Provide a means to access test logs for debugging purposes.

  • Dynamically pull newly added test cases into the CI runs.

  • Provide a means to separate tests that can be run using the CS simulator.

  • Provide self service to create on-demand test setups and run tests.

Functional requirements

We have limited the scope of the CI to address the immediate needs. Once we are able to reach these goals, we will add means to provide self service and shift to dynamic resource allocation.

  1. CIS will create a test setup comprising a single management server supporting one hypervisor at a time. It will be a development environment. Currently we will not support clustered management server setups.
  2. On successful creation of the MS VM, CIS will add it to the database and try to reuse it on subsequent test runs. Reusing the MS VM means we will not create a new VM and install dev tools every time; we will always tear down the old CS setup and create a new one with a new build from the latest code base.
  3. CIS will work based on static configuration files added to its database. The configuration file will be in accordance with the CS Marvin test framework (MTF). It may contain additional details that are not required by MTF but are used by the CIS.
  4. As of now we do not pick the resources dynamically and generate the MCF. It has to be created manually and added to the CIS database. There is no validation of the MCF against other MCFs that may be present in the CIS database; we have to make sure that we do not add overlapping resources in MCF files.
  5. CIS will use only open source tools to function. Currently we will use Jenkins, Cobbler and Puppet to create CS test beds.
  6. It will run tests, publish the results via email, and archive the test results and logs on an NFS server.
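The archiving step in requirement 6 can be sketched as a small copy into a per-run folder on the NFS share. The directory layout and the function name here are illustrative assumptions, not the actual CIS scripts:

```python
import os
import shutil
import tempfile

def archive_results(results_dir, nfs_root, build_number, zone_name):
    """Copy one run's results and logs into a per-run folder on the NFS
    share. The <nfs_root>/<build>-<zone> layout is an assumption, not the
    actual CIS directory scheme."""
    dest = os.path.join(nfs_root, "%s-%s" % (build_number, zone_name))
    shutil.copytree(results_dir, dest)
    return dest

# Demo with temporary directories standing in for the NFS mount.
results = tempfile.mkdtemp()
open(os.path.join(results, "results.xml"), "w").close()
archived = archive_results(results, tempfile.mkdtemp(), "42", "zone1")
```

Keying the folder on build number and zone name makes a run easy to correlate with its Jenkins job later.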


Assumptions

  1. CIS is meant only to run CI to stabilise ACS master; it is not meant to offer testing services to developers for now.

  2. CIS test resources and hardware will not be used by any other system or person.

Work Flow

Driver VM Setup

  1. The CIS database is a mysql database located on the driver VM. We need to have access to the driver VM and the database to configure CIS.
  2. Create a MCF and add its path to the static_config table of the CIS database.
  3. Add the details (IPMI address, MAC, etc.) of the hosts mentioned in the above MCF file to the static_host_config table of the CIS database.
  4. Add the location and version of the system VM templates required to build the test bed described in the above mentioned MCF. The version of the system VM templates is just the name of the branch from which we are building the CS MS.
  5. After each run the CIS makes the detailed test report available via a Jenkins reporter job. These jobs will stay on the Jenkins server indefinitely unless removed. The CIS can be configured to clean up these jobs at a specified interval: set the self.deleteAfterDays value in the file and add the cleanup script as a cron job that runs regularly to clean up the report generator jobs in Jenkins.
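Steps 2 to 4 above amount to inserting a few rows into the CIS database. A minimal sketch, using sqlite3 in place of the real MySQL database on the driver VM; the column names and the template_config table are assumptions (only static_config and static_host_config are named above), and the paths and addresses are hypothetical:

```python
import sqlite3

# sqlite3 stands in for the real MySQL CIS database here, purely to
# illustrate the registration steps. Schema details are assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE static_config (id INTEGER PRIMARY KEY, mcf_path TEXT);
CREATE TABLE static_host_config (id INTEGER PRIMARY KEY,
    mcf_id INTEGER, ipmi_address TEXT, mac TEXT);
CREATE TABLE template_config (id INTEGER PRIMARY KEY,
    version TEXT, url TEXT);
""")

# Step 2: register the MCF path.
cur = conn.execute("INSERT INTO static_config (mcf_path) VALUES (?)",
                   ("/configs/xen-adv-zone1.cfg",))
mcf_id = cur.lastrowid

# Step 3: add the hosts referenced by that MCF.
conn.execute("INSERT INTO static_host_config (mcf_id, ipmi_address, mac) "
             "VALUES (?, ?, ?)",
             (mcf_id, "10.0.0.21", "aa:bb:cc:dd:ee:01"))

# Step 4: record the system VM template URL, keyed by branch name.
conn.execute("INSERT INTO template_config (version, url) VALUES (?, ?)",
             ("master", "http://templates.example/systemvm-master.vhd"))
conn.commit()
```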

Jenkins Setup

  1. Log into Jenkins, create an Automation CI view and add all the CIS related jobs to this view. The CIS related jobs can be created from the XML files located in the CIS source code.
  2. Configure the Automation CI trigger job to schedule the CI runs. This job will trigger the CI run jobs on each of the hypervisors.
  3. There is a job associated with running the CI on each type of hypervisor. Add the default values of the build parameters and save the config. The default parameters will be used for the daily automated CI runs.
  4. Configure the test executor job by specifying the number of tests that can be run in parallel.
  5. Once all the jobs are in place, enable them. The CIS should now be completely operational.
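Parameterized jobs like the ones above are ultimately started through Jenkins' standard buildWithParameters endpoint. A minimal sketch of constructing such a request; the Jenkins URL, job name and parameter names are hypothetical, and a real call would also need credentials (and possibly a CSRF crumb):

```python
from urllib.parse import urlencode

def build_trigger_url(jenkins_url, job_name, params):
    """Return the buildWithParameters URL for a parameterized Jenkins job.
    POSTing to this URL (with valid credentials) queues a build."""
    return "%s/job/%s/buildWithParameters?%s" % (
        jenkins_url.rstrip("/"), job_name, urlencode(params))

url = build_trigger_url("http://jenkins.example:8080",
                        "automation4.4-CI-xenserver",
                        {"BRANCH": "master", "ZONE": "zone1"})
```

The same endpoint is what a cron-style Jenkins schedule uses internally, which is why the trigger job can drive the per-hypervisor jobs with their saved default parameters.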





Component Description

  • Jenkins: This is an open source application that schedules and monitors jobs.

  • Driver VM: This is the core component of the CI system. This VM handles creation of the management server, refreshing the hosts, deploying the data center, keeping track of the resources and execution of tests.

  • Infra XenServer cluster: This is a set of XenServers which are part of the CI system and are used for hosting management servers and running other required services like NFS servers, secondary DNS servers, etc.

  • Server farm: This is the set of machines used as hypervisors to create the CloudStack data center.


  • There are other components of the CIS, like a proxy server and an OpenFiler VM, which are not shown in the above diagram.

Design description

  • The CIS is comprised of two main parts, namely the driver VM and the Jenkins VM. The driver VM executes the jobs and the Jenkins VM triggers them. When a job is triggered in Jenkins, it runs a script attached to the job. This script executes on the driver VM and kicks off the test bed setup. Once the test bed setup is complete, the driver VM triggers the TestExecutor job to execute the tests. Once the test execution is complete, the write complete job notifies the driver VM to create a report generator job, which completes the test run.
  • The CIS does its work using four Jenkins jobs: Automation-CI-trigger, the trigger job; automation4.4-CI-<hypervisorType>, the test bed creation job; TestExecutor, the test execution job; and the report_generator_xxx reporting job, which is generated dynamically for every test run. The xxx in the name will be substituted by the build number and the zone name against which the test was run. For every automation run these jobs are run in sequence.
  • The trigger job is a parameterized Jenkins job which internally calls the test bed creation job to initiate the test run. We can configure the automation trigger job to run the test bed creation job periodically.
  • The Driver VM is added as a slave in jenkins. All the jenkins jobs are run on this VM.
  • The TestExecutor job is a Jenkins matrix job which uses nosetest commands and executes tests based on the arguments passed to it by the driver VM. The TestExecutor job is triggered by the driver VM when the test bed creation is complete. It contains all the tests that need to be run on a given test bed.
  • The report generator job is again a Jenkins job which uses the JUnit plugin and a custom Jelly script to generate the result of a particular test run.
  • All the test results are archived onto an NFS server for later analysis. The archiving is done by a script added to the report generator job.
  • We treat a set of resources (hosts, IPs, VLANs) specified by a configuration file (MCF) as one unit of resource instead of treating each IP or VLAN as a resource. This reduces the complexity of resource allocation and resource management.
  • We use the Jenkins concurrent job throttling plugin to enable queuing in cases where jobs are scheduled but no resource is free.
  • We use cobbler to pxe boot machines and puppet to install packages and configure the VMs.
  • The CloudStack management server is installed from source for every test run instead of being installed from previously built packages. We can track each test run based on the commit hash from which the management server was built.
  • There can be multiple test runs running concurrently; in order to isolate the test execution environments we use Python virtual environments. This gives us a separate execution environment for each test run.
  • Each version of CloudStack is associated with its own version of test cases and test framework. We fetch the test cases and the Marvin test framework packages from the management server once it is built and then install them in the corresponding virtual environments.
  • The links to the system VM templates corresponding to each of the versions are maintained in the DB. For every run we fetch the system VM templates from these URLs and seed them in the corresponding secondary storages. The path to the secondary storage is read from the configuration file (MCF).
  • In order to reduce the time to get the built-in templates, we pre-seed them just like the system VM templates.
  • In the case of hypervisors like KVM we need to install CloudStack agents. We generate the agent packages as part of the management server build and push them to the required host; we then use a Puppet recipe to configure the repos and install the agent from the packages which we copied earlier.

  • In order to reduce the setup time and execution time, the tests are categorised into simulator tests and hardware specific tests. We are required to run only the simulator specific tests in the case of simulator test runs. We use the information from the configuration files and the appropriate tags to fetch the correct test cases that need to be executed.
  • For cloudstack datacenter creation we rely on MTF’s deployDataCenter script. 
  • The test cases require some data to run, like the location to fetch the templates from, access to certain storage services like iSCSI, etc. These kinds of details are all maintained in the file. The data in this file is environment specific; we have to edit this file based on the test environment.
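The resource-unit model described above (one MCF equals one indivisible set of hosts, IPs and VLANs) can be sketched with a tiny allocator; the class name, method names and config paths here are illustrative, not part of the actual CIS code:

```python
import threading

class McfPool:
    """Hand out whole MCFs as single resource units. A run either gets a
    complete test bed configuration or waits; there is no per-IP or
    per-VLAN bookkeeping, which keeps allocation simple."""
    def __init__(self, mcf_paths):
        self._free = list(mcf_paths)
        self._cond = threading.Condition()

    def acquire(self):
        with self._cond:
            # Block until a unit is free: the same queuing behaviour the
            # Jenkins concurrent job throttling plugin provides.
            while not self._free:
                self._cond.wait()
            return self._free.pop()

    def release(self, mcf):
        with self._cond:
            self._free.append(mcf)
            self._cond.notify()

pool = McfPool(["/configs/xen.cfg", "/configs/kvm.cfg"])
mcf = pool.acquire()   # take one complete test bed configuration
pool.release(mcf)      # return it when the run finishes
```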
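The per-run isolation described above can be set up with the standard library `venv` module; the directory naming is an assumption, and the real CIS would then install the Marvin package from the freshly built management server into this environment:

```python
import os
import tempfile
import venv

def make_run_env(base_dir, build_number):
    """Create an isolated Python virtual environment for one test run.
    with_pip=False keeps creation fast; packages would be installed into
    it afterwards."""
    env_dir = os.path.join(base_dir, "run-%s" % build_number)
    venv.create(env_dir, with_pip=False)
    return env_dir

# Demo: one environment per concurrent test run.
base = tempfile.mkdtemp()
env = make_run_env(base, "1234")
```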



  Jenkins, Puppet, Cobbler, Marvin, Python virtual environments.

Resources required

Each test run needs at least

  • A XenServer to host the management server, driver VM, NFS storage, etc.
  • Three IPMI enabled servers which will be used as hosts in the CloudStack setup.
  • A couple of VLANs (in the case of an advanced network) and IP ranges.


CI Design Doc

Cloudstack - Continuous Integration
