
 Marvin - our automation framework - is a Python module that leverages Python and its multitude of libraries. Tests written with the framework use the unittest module under the hood. unittest is the Python incarnation of the unit-testing framework originally developed by Kent Beck et al. and will be familiar to Java folks as JUnit. This document is a tutorial introduction for those interested in testing CloudStack with Python. It does not cover the Python language itself; instead we point the reader to more thorough tutorials on the topic (see Python Resources at the end). We assume basic Python scripting knowledge and encourage the reader to walk through the steps once their environment is set up and configured.

Environment

Developers

If you are a developer, the CloudStack development environment is sufficient to get started.

  1. Check out the incubator-cloudstack project from incubator-cloudstack.git
  2. You will need Python - version 2.6 is enough to install Marvin, but 2.7 is required to run the tests. Marvin pulls in the additional modules python-paramiko, mysql-connector-python and nose.
  3. You should install Eclipse and the PyDev plugin. PyDev provides auto-completion for Python modules from within the Eclipse environment.
  4. On the master branch the 'developer' profile compiles, packages and installs Marvin.
    mvn -P developer -pl :cloud-marvin
    
  5. The mvn deploy goal installs Marvin using pip. Alternatively, you can install it by hand with pip or easy_install:
    pip install tools/marvin/dist/Marvin-0.1.0.tar.gz
    
    easy_install tools/marvin/dist/Marvin-0.1.0.tar.gz

QA

If you are a QA engineer, you won't need the entire codebase to build Marvin.

  1. Jenkins@builds.a.o holds artifacts of the Marvin builds that you can download.
  2. The artifact (.tar.gz) is available after the build succeeds. Download it.
  3. On the client machine from which you will be writing/running tests, set up the following:
    1. Install python 2.7 (http://www.python.org/download/releases/)
    2. Install setuptools. Follow the instructions for your client machine (windows/linux/mac)
    3. (Windows only) Install pycrypto, since Windows does not bundle gcc to compile it.
  4. The Marvin artifact you downloaded can now be installed. Any required Python packages will be installed automatically:
    easy_install tools/marvin/dist/Marvin-0.1.0.tar.gz
    
  5. To test whether the installation was successful, get into a Python shell:
    root@cloud:~/cloudstack-oss/tools/marvin/dist# python
    Python 2.7.1+ (r271:86832, Apr 11 2011, 18:05:24)
    [GCC 4.5.2] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import marvin
    >>> from marvin.cloudstackAPI import *
    
    The imports should succeed without reporting errors.

First Steps

In our first steps we will build a simple API call and fire it against a CloudStack management server that is already deployed, configured and ready to accept API calls. You can pick any management server in your lab that has a few VMs running on it. Create a sample JSON config file telling the framework where your management server and database server are. Here's a sample:

JSON configuration
prasanna@cloud:~/cloudstack-oss# cat demo/demo.cfg
{
    "dbSvr": {
        "dbSvr": "automation.lab.vmops.com",
        "passwd": "cloud",
        "db": "cloud",
        "port": 3306,
        "user": "cloud"
    },
    "logger": [
        {
            "name": "TestClient",
            "file": "/var/log/testclient.log"
        },
        {
            "name": "TestCase",
            "file": "/var/log/testcase.log"
        }
    ],
    "mgtSvr": [
        {
            "mgtSvrIp": "automation.lab.vmops.com",
            "port": 8096
        }
    ]
}
  • Note: dbSvr is the host where the MySQL server is running and passwd is the password of the database user cloud.
  • Run this command on your management server to open up the integration port in iptables: iptables -I INPUT -p tcp --dport 8096 -j ACCEPT
  • Change the global setting integration.api.port in the CloudStack GUI to 8096 and restart the management server.
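A quick way to confirm that the integration port is reachable is to fire a raw, unauthenticated query at it straight from Python (a small sketch; substitute the host with your own mgtSvrIp from demo.cfg):

# Sketch: the integration port accepts API calls without signing, so a plain
# HTTP GET is enough to verify it is open. Replace the host with your mgtSvrIp.
import urllib2

url = "http://automation.lab.vmops.com:8096/client/api?command=listZones&response=json"
print urllib2.urlopen(url).read()     # should print a JSON listzonesresponse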
  1. Enter an interactive Python shell and follow along with the steps listed below. We've used the ipython shell in our example because it has a very handy auto-complete feature.
  2. We will import a few essential libraries to start with.
    • The cloudstackTestCase module contains the essential API calls we need and a reference to the API client itself. All tests will be children (subclasses) of cloudstackTestCase, since it contains the toolkit (attributes) to do our testing.
      In [1]: import marvin
      In [2]: from marvin.cloudstackTestCase import *
      
    • The deployDataCenter module imported below helps us load the JSON configuration file we wrote at the beginning, so we can tell the test framework that we have our management server configured and ready.
      In [2]: import marvin.deployDataCenter
      
  3. Let's load the configuration file using the deployDataCenter module
    In [3]: config = marvin.deployDataCenter.deployDataCenters('demo/demo.cfg')
    In [4]: config.loadCfg()
    
  4. Once the configuration is loaded successfully, all we need is an instance of the apiClient, which fires our CloudStack API calls against the configured management server. In addition to the apiClient, the test framework also provides a dbClient for running SQL queries against the database for verification. So let's go ahead and get a reference to the apiClient:
    In [5]: apiClient = config.testClient.getApiClient()
    
  5. Now we'll form a very simple API call - listConfigurations - which shows us the "global settings" of our CloudStack instance. The API command is instantiated as shown in the code snippet below (as are all other API commands).
    In [6]: listconfig = listConfigurations.listConfigurationsCmd()
    
    So the framework is intuitive in the verbs used for an API call: to deploy a VM you would instantiate the deployVirtualMachineCmd class from the deployVirtualMachine module. Simple, ain't it?
  6. Since it's a large list of global configurations, let's limit ourselves to fetching only the configurations whose names contain the keyword 'expunge'. Let's change our listconfig object to take this attribute as follows:
    In [7]: listconfig.name = 'expunge'
    
  7. And finally - we fire the call using the apiClient as shown below:
    In [8]: listconfigresponse = apiClient.listConfigurations(listconfig)
    
    Lo' and Behold - the response you've awaited:
    In [9]: print listconfigresponse
    
    [ {category : u'Advanced', name : u'expunge.delay', value : u'60', description : u'Determines how long (in seconds) to wait before actually expunging destroyed vm. The default value = the default value of expunge.interval'},
      {category : u'Advanced', name : u'expunge.interval', value : u'60', description : u'The interval (in seconds) to wait before running the expunge thread.'},
      {category : u'Advanced', name : u'expunge.workers', value : u'3', description : u'Number of workers performing expunge '}]
    

The response is presented to us the way our UI receives it, as a JSON object. It comprises a list of configurations, each one a dictionary of (key, value) pairs describing that setting.
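Each entry in the list can be read attribute-style, just like the fields shown in the output above. For instance, to walk the response and print only the names and values (a small illustration in the same ipython session; the output mirrors the settings listed above):

In [10]: for cfg in listconfigresponse:
   ....:     print cfg.name, cfg.value
   ....:
expunge.delay 60
expunge.interval 60
expunge.workers 3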

Putting it together

Listing stuff is all fine and dandy, you might say - but how do I launch VMs using Python? And do I have to use the shell each time? Clearly not: we can compress all the steps into a Python script. This example shows such a script, which will:

  • create a testcase class
  • setUp a user account - name: bugs, passwd: password
  • deploy a VM into that user account using the default small service offering and CentOS template
  • verify that the VM we deployed reached the 'Running' state
  • tearDown the user account - basically delete it

Without much ado, here's the script:

#!/usr/bin/env python

import marvin
from marvin import cloudstackTestCase
from marvin.cloudstackTestCase import *

import unittest
import hashlib
import random

class TestDeployVm(cloudstackTestCase):
    """
    This test deploys a virtual machine into a user account
    using the small service offering and builtin template
    """
    def setUp(self):
        """
        CloudStack internally saves its passwords in md5 form and that is how we
        specify it in the API. Python's hashlib library helps us to quickly hash
        strings as follows
        """
        mdf = hashlib.md5()
        mdf.update('password')
        mdf_pass = mdf.hexdigest()

        self.apiClient = self.testClient.getApiClient() #Get ourselves an API client

        self.acct = createAccount.createAccountCmd() #The createAccount command
        self.acct.accounttype = 0                    #We need a regular user. admins have accounttype=1
        self.acct.firstname = 'bugs'
        self.acct.lastname = 'bunny'                 #What's up doc?
        self.acct.password = mdf_pass                #The md5 hashed password string
        self.acct.username = 'bugs'
        self.acct.email = 'bugs@rabbithole.com'
        self.acct.account = 'bugs'
        self.acct.domainid = 1                       #The default ROOT domain
        self.acctResponse = self.apiClient.createAccount(self.acct)
        # And upon successful creation we'll log a helpful message in our logs
        # using the default debug logger of the test framework
        self.debug("successfully created account: %s, user: %s, id: \
                   %s"%(self.acctResponse.account.account, \
                        self.acctResponse.account.username, \
                        self.acctResponse.account.id))

    def test_DeployVm(self):
        """
        Let's start by defining the attributes of our VM that we will be
        deploying on CloudStack. We will be assuming a single zone is available
        and is configured and all templates are Ready

        The hardcoded values are used only for brevity.
        """
        deployVmCmd = deployVirtualMachine.deployVirtualMachineCmd()
        deployVmCmd.zoneid = 1
        deployVmCmd.account = self.acct.account
        deployVmCmd.domainid = self.acct.domainid
        deployVmCmd.templateid = 5                   #For default template- CentOS 5.6(64 bit)
        deployVmCmd.serviceofferingid = 1

        deployVmResponse = self.apiClient.deployVirtualMachine(deployVmCmd)
        self.debug("VM %s was deployed in the job %s"%(deployVmResponse.id, deployVmResponse.jobid))

        # At this point our VM is expected to be Running. Let's find out what
        # listVirtualMachines tells us about VMs in this account

        listVmCmd = listVirtualMachines.listVirtualMachinesCmd()
        listVmCmd.id = deployVmResponse.id
        listVmResponse = self.apiClient.listVirtualMachines(listVmCmd)

        self.assertNotEqual(len(listVmResponse), 0, "Check if the list API \
                            returns a non-empty response")

        vm = listVmResponse[0]

        self.assertEqual(vm.id, deployVmResponse.id, "Check if the VM returned \
                         is the same as the one we deployed")


        self.assertEqual(vm.state, "Running", "Check if VM has reached \
                         a state of running")

    def tearDown(self):                               # Teardown will delete the Account as well as the VM once the VM reaches "Running" state
        """
        And finally let us cleanup the resources we created by deleting the
        account. All good unittests are atomic and rerunnable this way
        """
        deleteAcct = deleteAccount.deleteAccountCmd()
        deleteAcct.id = self.acctResponse.account.id
        self.apiClient.deleteAccount(deleteAcct)

To run the test we've written, we'll place our class file into our demo directory. The test framework will "discover" the tests inside any directory it is pointed to and run them against the specified deployment. Our configuration file 'demo.cfg' is also in the same directory.

The usage for deployAndRun is as follows:

option    purpose
-c        points to the configuration file defining our deployment
-r        test results log where the summary report is written
-t        testcase log where all the debug messages we write in our tests are output
-d        directory containing all the test suites
-l        only load the configuration, do not deploy the environment
-f        run the tests in the given file

From our shell we launch the deployAndRun module as follows; at the end of the run the summary of test results is shown.

root@cloud:~/cloudstack-oss# python -m marvin.deployAndRun -c demo/demo.cfg -t /tmp/testcase.log -r /tmp/results.log -f demo/TestDeployVm.py -l

root@cloud:~/cloudstack-oss# cat /tmp/results.log
test_DeployVm (testDeployVM.TestDeployVm) ... ok
----------------------------------------------------------------------
Ran 1 test in 100.511s
OK

Congratulations, your test has passed!

Advanced Example

We do not know for sure that the CentOS VM deployed earlier actually started up on the hypervisor host. The API tells us it did - so CloudStack assumes the VM is up and running - but did the hypervisor successfully spin up the VM? In this example we will log in to the CentOS VM that we deployed earlier using a simple ssh client that is exposed by the test framework. The example assumes that you have an Advanced Zone deployment of CloudStack running. The test case is further simplified if you have a Basic Zone deployment; it is left as an exercise to the reader to refactor the following test to work for a basic zone.

Let's get started. We will take the earlier test as is and extend it by:

  • Creating a NAT (port forwarding) rule that allows ssh (port 22) traffic
  • Opening up the firewall to allow SSH traffic to the account's VMs
  • Adding the VM deployed in our previous test to this port forwarding rule
  • ssh-ing to the NAT-ed IP using our ssh client and getting the hostname of the VM
  • Comparing the hostname of the VM with the name of the VM deployed by CloudStack.
    Both should match for our test to be deemed: PASS

NOTE: This test has been written for the 3.0 CloudStack. On 2.2.y we do not explicitly create a firewall rule.

#!/usr/bin/env python

import marvin
from marvin import cloudstackTestCase
from marvin.cloudstackTestCase import *
from marvin.remoteSSHClient import remoteSSHClient

import unittest
import hashlib
import random
import string

class TestSshDeployVm(cloudstackTestCase):
    """
    This test deploys a virtual machine into a user account
    using the small service offering and builtin template
    """
    @classmethod
    def setUpClass(cls):
        """
        CloudStack internally saves its passwords in md5 form and that is how we
        specify it in the API. Python's hashlib library helps us to quickly hash
        strings as follows
        """
        mdf = hashlib.md5()
        mdf.update('password')
        mdf_pass = mdf.hexdigest()
        acctName = 'bugs-'+''.join(random.choice(string.ascii_uppercase + string.digits) for x in range(6)) #randomly generated account

        cls.apiClient = super(TestSshDeployVm, cls).getClsTestClient().getApiClient()
        cls.acct = createAccount.createAccountCmd() #The createAccount command
        cls.acct.accounttype = 0                    #We need a regular user. admins have accounttype=1
        cls.acct.firstname = 'bugs'
        cls.acct.lastname = 'bunny'                 #What's up doc?
        cls.acct.password = mdf_pass                #The md5 hashed password string
        cls.acct.username = acctName
        cls.acct.email = 'bugs@rabbithole.com'
        cls.acct.account = acctName
        cls.acct.domainid = 1                       #The default ROOT domain
        cls.acctResponse = cls.apiClient.createAccount(cls.acct)

    def setUpNAT(self, virtualmachineid):
        listSourceNat = listPublicIpAddresses.listPublicIpAddressesCmd()
        listSourceNat.account = self.acct.account
        listSourceNat.domainid = self.acct.domainid
        listSourceNat.issourcenat = True

        listsnatresponse = self.apiClient.listPublicIpAddresses(listSourceNat)
        self.assertNotEqual(len(listsnatresponse), 0, "Found a source NAT for the acct %s"%self.acct.account)

        snatid = listsnatresponse[0].id
        snatip = listsnatresponse[0].ipaddress

        try:
            createFwRule = createFirewallRule.createFirewallRuleCmd()
            createFwRule.cidrlist = "0.0.0.0/0"
            createFwRule.startport = 22
            createFwRule.endport = 22
            createFwRule.ipaddressid = snatid
            createFwRule.protocol = "tcp"
            createfwresponse = self.apiClient.createFirewallRule(createFwRule)

            createPfRule = createPortForwardingRule.createPortForwardingRuleCmd()
            createPfRule.privateport = 22
            createPfRule.publicport = 22
            createPfRule.virtualmachineid = virtualmachineid
            createPfRule.ipaddressid = snatid
            createPfRule.protocol = "tcp"

            createpfresponse = self.apiClient.createPortForwardingRule(createPfRule)
        except Exception as e:
            self.debug("Failed to create PF rule in account %s due to %s"%(self.acct.account, e))
            raise
        return snatip

    def test_SshDeployVm(self):
        """
        Let's start by defining the attributes of our VM that we will be
        deploying on CloudStack. We will be assuming a single zone is available
        and is configured and all templates are Ready

        The hardcoded values are used only for brevity.
        """
        deployVmCmd = deployVirtualMachine.deployVirtualMachineCmd()
        deployVmCmd.zoneid = 1
        deployVmCmd.account = self.acct.account
        deployVmCmd.domainid = self.acct.domainid
        deployVmCmd.templateid = 5 #CentOS 5.6 builtin
        deployVmCmd.serviceofferingid = 1

        deployVmResponse = self.apiClient.deployVirtualMachine(deployVmCmd)
        self.debug("VM %s was deployed in the job %s"%(deployVmResponse.id, deployVmResponse.jobid))

        # At this point our VM is expected to be Running. Let's find out what
        # listVirtualMachines tells us about VMs in this account

        listVmCmd = listVirtualMachines.listVirtualMachinesCmd()
        listVmCmd.id = deployVmResponse.id
        listVmResponse = self.apiClient.listVirtualMachines(listVmCmd)

        self.assertNotEqual(len(listVmResponse), 0, "Check if the list API \
                            returns a non-empty response")

        vm = listVmResponse[0]
        hostname = vm.name
        nattedip = self.setUpNAT(vm.id)

        self.assertEqual(vm.id, deployVmResponse.id, "Check if the VM returned \
                         is the same as the one we deployed")


        self.assertEqual(vm.state, "Running", "Check if VM has reached \
                         a state of running")

        # SSH login and compare hostname
        ssh_client = remoteSSHClient(nattedip, 22, "root", "password")
        stdout = ssh_client.execute("hostname")

        self.assertEqual(hostname, stdout[0], "cloudstack VM name and hostname match")


    @classmethod
    def tearDownClass(cls):
        """
        And finally let us cleanup the resources we created by deleting the
        account. All good unittests are atomic and rerunnable this way
        """
        deleteAcct = deleteAccount.deleteAccountCmd()
        deleteAcct.id = cls.acctResponse.account.id
        cls.apiClient.deleteAccount(deleteAcct)

Observe that unlike the previous test class, TestDeployVm, we do not have the methods setUp and tearDown; instead we have setUpClass and tearDownClass. setUp and tearDown run before and after every test in the suite, and we do not want the initialization and cleanup code to run that often. Instead we run the initialization (creating the account etc.) once for the entire lifetime of the class, which is exactly what the setUpClass and tearDownClass classmethods give us. Since the API client is normally only visible to instances of cloudstackTestCase, the framework also exposes it at the class level through the getClsTestClient() method. So to get the API client we call the parent class, super(TestSshDeployVm, cls), i.e. cloudstackTestCase, and ask for a class-level API client.
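Stripped to its bones, the class-level fixture pattern looks like this (a skeleton only; fill in whatever fixtures your suite needs):

from marvin.cloudstackTestCase import cloudstackTestCase

class TestWithSharedFixtures(cloudstackTestCase):

    @classmethod
    def setUpClass(cls):
        # Runs once, before any test in this class; grab a class-level API client
        cls.apiClient = super(TestWithSharedFixtures, cls).getClsTestClient().getApiClient()
        # ... create the account/offerings shared by all the tests here ...

    @classmethod
    def tearDownClass(cls):
        # Runs once, after the last test in this class; delete the shared fixtures here
        pass

    def test_one(self):
        pass    # uses the fixtures created once in setUpClass

    def test_two(self):
        pass    # shares the same fixtures as test_one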

Test Pattern

An astute reader would by now have found that the following pattern has been used in the tutorial's test examples:

  • creation of an account
  • deploying Vms, running some unittest code
  • deletion of the account

This pattern keeps the entire test contained in one atomic piece. It prevents tests from becoming entangled with each other, i.e. failures stay localized to one account and do not affect other tests. The advanced examples in our basic verification suite are written using this pattern, and test engineers are encouraged to follow it unless there is good reason not to.

User Tests

By default the test framework runs all tests in 'admin' mode, which gives you admin access and visibility to all resources in CloudStack. To run the tests as a regular user or domain-admin, apply the @UserName decorator, which takes the arguments (account, domain, accounttype), at the head of your test class. The decorator will create the account and domain if they do not exist. Do NOT apply the decorator to a test method.

An example can be found at: cloudstack-oss/tools/testClient/testcase/test_userDecorator.py
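A minimal sketch of how the decorator is applied at the class level. Where exactly UserName is imported from depends on your Marvin version, so the import below is an assumption; the example file above shows the authoritative usage:

#!/usr/bin/env python

# Sketch only: check test_userDecorator.py in your checkout for the exact module
# that provides the UserName decorator - the wildcard import below is an assumption,
# and the account/domain values are placeholders.
from marvin.cloudstackTestCase import *

@UserName('testuser', 'ROOT', 0)       # (account, domain, accounttype); 0 = regular user
class TestAsRegularUser(cloudstackTestCase):

    def test_listVmsAsUser(self):
        apiClient = self.testClient.getApiClient()
        listVmCmd = listVirtualMachines.listVirtualMachinesCmd()
        vms = apiClient.listVirtualMachines(listVmCmd)
        # A regular user only has visibility into its own account's VMs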

Debugging & Logging

Using the PyDev plugin / pdb and the testClient logs

The test client logs, detailing the requests it sends and the responses fetched back from the management server, can be found under /var/log/testclient.log. By default all logging is at INFO level. In addition, you may emit your own DEBUG log messages in the tests you write: each cloudstackTestCase inherits the debug logger, which can be used to output useful messages that help troubleshoot the testcase while it is running. These logs are written to the location you specified with the -t option when launching the tests.

eg:

list_zones_response = self.apiclient.listZones(listzonesample)
self.debug("Number of zones: %s" % len(list_zones_response)) #This shows us how many zones were found in the deployment

The result log specified by the -r option will show the detailed summary of the entire run of all the suites. It will show you how many tests failed, passed and how many had errors in them.

While debugging with the PyDev plugin you can also place breakpoints in Eclipse for a more interactive debugging session.
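If you are not using Eclipse, the standard library debugger works just as well: drop a breakpoint anywhere inside a test method and run the suite from a foreground shell so pdb can take over the terminal.

# Place this line anywhere inside a test method to pause execution there:
import pdb; pdb.set_trace()    # inspect with 'p <var>', step with 'n', continue with 'c'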

Deployment Configuration

Marvin can be used to automatically configure a deployed CloudStack installation with zones, pods and hosts, in either the Advanced or Basic network type. This is done by describing the required deployment in a hierarchical JSON configuration file. Writing and maintaining such a configuration by hand is cumbersome and error prone, and Marvin's configGenerator is designed for exactly this purpose: a simple hand-written Python description passed to the configGenerator will generate the compact JSON configuration of our deployment.

Examples of how to write the configuration for various zone models are in the configGenerator.py module in your Marvin source directory; look for the methods describe_setup_in_advanced_mode() and describe_setup_in_basic_mode().

What does it look like?

Below is such an example describing a simple one host deployment:

{
    "zones": [
        {
            "name": "Sandbox-XenServer",
            "guestcidraddress": "10.1.1.0/24",
            "physical_networks": [
                {
                    "broadcastdomainrange": "Zone",
                    "name": "test-network",
                    "traffictypes": [
                        {
                            "typ": "Guest"
                        },
                        {
                            "typ": "Management"
                        },
                        {
                            "typ": "Public"
                        }
                    ],
                    "providers": [
                        {
                            "broadcastdomainrange": "ZONE",
                            "name": "VirtualRouter"
                        }
                    ]
                }
            ],
            "dns1": "10.147.28.6",
            "ipranges": [
                {
                    "startip": "10.147.31.150",
                    "endip": "10.147.31.159",
                    "netmask": "255.255.255.0",
                    "vlan": "31",
                    "gateway": "10.147.31.1"
                }
            ],
            "networktype": "Advanced",
            "pods": [
                {
                    "endip": "10.147.29.159",
                    "name": "POD0",
                    "startip": "10.147.29.150",
                    "netmask": "255.255.255.0",
                    "clusters": [
                        {
                            "clustername": "C0",
                            "hypervisor": "XenServer",
                            "hosts": [
                                {
                                    "username": "root",
                                    "url": "http://10.147.29.58",
                                    "password": "password"
                                }
                            ],
                            "clustertype": "CloudManaged",
                            "primaryStorages": [
                                {
                                    "url": "nfs://10.147.28.6:/export/home/sandbox/primary",
                                    "name": "PS0"
                                }
                            ]
                        }
                    ],
                    "gateway": "10.147.29.1"
                }
            ],
            "internaldns1": "10.147.28.6",
            "secondaryStorages": [
                {
                    "url": "nfs://10.147.28.6:/export/home/sandbox/secondary"
                }
            ]
        }
    ],
    "dbSvr": {
        "dbSvr": "10.147.29.111",
        "passwd": "cloud",
        "db": "cloud",
        "port": 3306,
        "user": "cloud"
    },
    "logger": [
        {
            "name": "TestClient",
            "file": "/var/log/testclient.log"
        },
        {
            "name": "TestCase",
            "file": "/var/log/testcase.log"
        }
    ],
    "globalConfig": [
        {
            "name": "storage.cleanup.interval",
            "value": "300"
        },
        {
            "name": "account.cleanup.interval",
            "value": "600"
        }
    ],
    "mgtSvr": [
        {
            "mgtSvrIp": "10.147.29.111",
            "port": 8096
        }
    ]
}

What you saw earlier was a condensed form of this complete configuration file. If you're familiar with the CloudStack installation you will recognize that most of these are settings you give in the install wizards as part of configuration. What is different from the simplified configuration file are the sections "zones" and "globalConfig". The globalConfig section is nothing but a simple listing of (key, value) pairs for the "Global Settings" section of CloudStack.

The "zones" section defines the hierarchy of our cloud. At the top-level are the availability zones. Each zone has its set of pods, secondary storages, providers and network related configuration. Every pod has a bunch of clusters and every cluster a set of hosts and their associated primary storage pools. These configurations are easy to maintain and deploy by just passing them through marvin.

root@cloud:~/cloudstack-oss# python -m marvin.deployAndRun -c advanced_zone.cfg -t /tmp/t.log -r /tmp/r.log -d tests/

Notice that we didn't pass the -l option to deployAndRun this time: we don't want to just load the configuration, we also want to deploy it. This is Marvin's default behaviour - the cloud configuration is deployed and the tests in the directory "tests/" are run against it.

How do I generate it?

The above one host configuration was described as follows:

#!/usr/bin/env python

import random
import marvin
from marvin.configGenerator import *

def describeResources():
    zs = cloudstackConfiguration()

    z = zone()
    z.dns1 = '10.147.28.6'
    z.internaldns1 = '10.147.28.6'
    z.name = 'Sandbox-XenServer'
    z.networktype = 'Advanced'
    z.guestcidraddress = '10.1.1.0/24'

    pn = physical_network()
    pn.name = "test-network"
    pn.traffictypes = [traffictype("Guest"), traffictype("Management"), traffictype("Public")]
    z.physical_networks.append(pn)

    p = pod()
    p.name = 'POD0'
    p.gateway = '10.147.29.1'
    p.startip =  '10.147.29.150'
    p.endip =  '10.147.29.159'
    p.netmask = '255.255.255.0'

    v = iprange()
    v.gateway = '10.147.31.1'
    v.startip = '10.147.31.150'
    v.endip = '10.147.31.159'
    v.netmask = '255.255.255.0'
    v.vlan = '31'
    z.ipranges.append(v)

    c = cluster()
    c.clustername = 'C0'
    c.hypervisor = 'XenServer'
    c.clustertype = 'CloudManaged'

    h = host()
    h.username = 'root'
    h.password = 'password'
    h.url = 'http://10.147.29.58'
    c.hosts.append(h)

    ps = primaryStorage()
    ps.name = 'PS0'
    ps.url = 'nfs://10.147.28.6:/export/home/sandbox/primary'
    c.primaryStorages.append(ps)

    p.clusters.append(c)
    z.pods.append(p)

    secondary = secondaryStorage()
    secondary.url = 'nfs://10.147.28.6:/export/home/sandbox/secondary'
    z.secondaryStorages.append(secondary)

    '''Add zone'''
    zs.zones.append(z)

    '''Add mgt server'''
    mgt = managementServer()
    mgt.mgtSvrIp = '10.147.29.111'
    zs.mgtSvr.append(mgt)

    '''Add a database'''
    db = dbServer()
    db.dbSvr = '10.147.29.111'
    db.user = 'cloud'
    db.passwd = 'cloud'
    zs.dbSvr = db

    '''Add some configuration'''
    [zs.globalConfig.append(cfg) for cfg in getGlobalSettings()]

    '''Add loggers'''
    testClientLogger = logger()
    testClientLogger.name = 'TestClient'
    testClientLogger.file = '/var/log/testclient.log'

    testCaseLogger = logger()
    testCaseLogger.name = 'TestCase'
    testCaseLogger.file = '/var/log/testcase.log'

    zs.logger.append(testClientLogger)
    zs.logger.append(testCaseLogger)
    return zs

def getGlobalSettings():
   globals = { "storage.cleanup.interval" : "300",
               "account.cleanup.interval" : "60",
            }

   for k, v in globals.iteritems():
        cfg = configuration()
        cfg.name = k
        cfg.value = v
        yield cfg

if __name__ == '__main__':
    config = describeResources()
    generate_setup_config(config, 'advanced_cloud.cfg')

zone(), pod(), cluster() and host() are plain objects that carry just attributes. For instance, a zone consists of the attributes name, dns entries, network type etc. Within a zone I create pod()s and append them to my zone object, then create cluster()s in those pods and append them to the pod, and finally within the clusters the host()s that get appended to my cluster object. Once I have defined everything necessary to create my cloud, I pass the described configuration to the generate_setup_config() method, which writes out the resultant configuration in JSON format.

Deploying the configuration

You can then deploy your JSON configuration through mvn:

mvn -Pdeveloper,marvin -pl :cloud-marvin -Dmarvin.config=/path/to/config

This will deploy your cloud as given in the configuration file, provided you have Marvin installed on the machine from which you are running the command and that machine can reach the required infrastructure.

Sandbox Scripts

You don't always want to describe a one-host configuration in Python files, so we've included some common examples in the Marvin tarball under the sandbox directory. The sandbox contains configurations for a single-host advanced zone and a single-host basic zone that can be tailored to your environment using a simple properties file. The properties file, setup.properties, contains editable name=value pairs that you can change to the IPs, hostnames etc. of your environment. When passed to the Python script, the properties file generates the JSON configuration for you.

Sample setup.properties:

[globals]
secstorage.allowed.internal.sites=10.147.28.0/24

[environment]
dns=10.147.28.6
mshost=localhost
mysql.host=localhost
mysql.cloud.user=cloud
mysql.cloud.passwd=cloud

[cloudstack]
private.gateway=10.147.29.1
private.pod.startip=10.147.29.150
private.pod.endip=10.147.29.159

And generate the JSON config as follows:

root@cloud:~/incubator-cloudstack/tools/marvin/marvin/sandbox/advanced# python advanced_env.py -i setup.properties -o advanced.cfg
root@cloud:~/incubator-cloudstack/tools/marvin/marvin/sandbox/advanced# head -10 advanced.cfg
{
    "zones": [
        {
            "name": "Sandbox-XenServer",
            "guestcidraddress": "10.1.1.0/24",

... <snip/> ...

Marvin Nose Plugin

Nose extends unittest to make testing easier. It comes with plugins that help integrate your regular unittests with external build systems, coverage, profiling etc. Marvin ships its own nose plugin so you can use nose to drive CloudStack tests; the plugin is installed along with Marvin. Running nosetests -p will show whether the plugin registered successfully.

$ nosetests -p
Plugin xunit
Plugin multiprocess
Plugin capture
Plugin logcapture
Plugin coverage
Plugin attributeselector
Plugin doctest
Plugin profile
Plugin collect-only
Plugin isolation
Plugin pdb
Plugin marvin


# Usage and running tests
$ nosetests --with-marvin --marvin-config=/path/to/basic_zone.cfg --load /path/to/tests

The smoke tests and component tests carry attributes that can be used to filter the tests you would like to run against your deployment; you use nose's attrib plugin for this (an example follows the list below). The tags currently in use are:

  • advanced - Typical Advanced Zone
  • basic - a basic zone without security groups
  • sg - a basic zone with security groups
  • eip - an elastic ip basic zone
  • advancedns - advanced zone with a netscaler device
  • devcloud - tests that will run only for the basic zone on a devcloud setup done using tools/devcloud/devcloud.cfg
  • speed = 0/1/2 (greater the value lesser the speed)
  • multihost/multipods/multicluster (test requires multiple hosts/pods/clusters)
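For instance, a test advertises its tags with nose's attrib decorator, and the -a selector picks them up at run time (a small sketch; the class and test names here are made up for illustration):

# Sketch: tagging a test so nose's attrib plugin can select it.
from nose.plugins.attrib import attr
from marvin.cloudstackTestCase import cloudstackTestCase

class TestTaggedExample(cloudstackTestCase):

    @attr(tags=["advanced"], speed=1)      # selected when running with -a tags='advanced'
    def test_something_zone_specific(self):
        pass                               # ... the actual test body goes here ...

You can then run only the tests tagged for an advanced zone against your deployment:

nosetests --with-marvin --marvin-config=/path/to/advanced_zone.cfg --load -a tags='advanced' /path/to/tests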

Running Devcloud Tests

Some tests have been tagged to run only in the devcloud environment. To run these tests, use the following command after you've set up your management server and the host-only devcloud is running, with devcloud.cfg as its deployment configuration. This assumes you have the marvin-nose plugin installed as described above.

~/workspace/cloudstack/incubator-cloudstack(branch:master*) » nosetests --with-marvin --marvin-config=tools/devcloud/devcloud.cfg --load -a tags='devcloud' test/integration/smoke

Test Deploy Virtual Machine ... ok
Test Stop Virtual Machine ... ok
Test Start Virtual Machine ... ok
Test Reboot Virtual Machine ... ok
Test destroy Virtual Machine ... ok
Test recover Virtual Machine ... ok
Test destroy(expunge) Virtual Machine ... ok

----------------------------------------------------------------------

Ran 7 tests in 0.001s

OK

Guidelines to choose scenarios for integration

There are a few dos and don'ts when choosing the scenario to automate for an integration test. These mostly help the tests blend well with the continuous test infrastructure and keep environments clean, without affecting other tests.

Scenario

  • Every test should happen within a CloudStack test account. The order of preference for the type of account to test within should be:

        User > DomainAdmin > Admin

    At the end of the test we delete this account so as to keep tests atomic and contained within a tenant's user space.

  • All tests must be written from the perspective of the API. UI directions are often confusing, and using the rich API often reveals further test scenarios. You can capture the API arguments using cloudmonkey/firebug.
  • Tests should be generic enough to run in any environment/lab, under any hypervisor. If this is not possible, mark the test with an @attr attribute to signify the specifics, e.g. @attr(hypervisor='vmware') for a test that runs only on vmware.
  • Every resource should be creatable in the test from scratch. Referring to a pre-existing Ubuntu template is probably not a good idea; your test must show how to fetch the template or give a static location from which the test can fetch it.
  • Do not change global settings in the middle of a test; make two separate tests instead. All tests run against one given deployment and altering the settings midway is not effective.

Backend Verification with paramiko/and other means

  • Verifying the status of resources within hypervisors is fine; most hypervisors provide standard SSH server access.
  • Your tests should include the complete command and its expected output, e.g. iptables -L INPUT # to list the INPUT chain of iptables
  • If you execute multiple commands, chain them together on one line, e.g. service iptables stop; service iptables start # to stop and start iptables (a sketch follows this list)
  • Your script must execute over ssh because this is how Marvin will execute it: ssh <target-backend-machine> "<your script>" # should return the output of your script. Move the credential-specific information into the deployment config file and/or use a standard credential.
  • Most external devices like F5/NetScaler have ssh open for executing commands, but you must include the command and its expected output in the test, as not everyone is familiar with the device's CLI.
  • If you are using a UI like vCenter to verify something, you most likely cannot automate what you see there, because there are no ASLv2-licensed libraries for vmware/esx as of today.
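To illustrate the chained-command guideline with the framework's own ssh client (a minimal sketch; the host address and credentials below are placeholders and should come from your deployment configuration in a real test):

from marvin.remoteSSHClient import remoteSSHClient

# Placeholders - in a real test, pull the host and credentials from the
# deployment configuration rather than hardcoding them here.
ssh = remoteSSHClient("10.147.29.58", 22, "root", "password")

# Chain both commands on one line so a single ssh round trip runs them all;
# execute() returns the output lines of the command.
output = ssh.execute("service iptables stop; service iptables start")
print output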

Python Resources

  1. The single largest python resource is the python website itself - http://www.python.org
  2. Mark Pilgrim's "Dive Into Python" is another great resource. The book is available for free online at http://www.diveintopython.net. Chapters 1-6 cover a good portion of the language basics and Chapters 13 & 14 are essential for anyone doing test script development.
  3. To read more about the assert methods, the library reference is the ideal place - http://docs.python.org/library/unittest.html.

More Examples

Examples of tests with more backend verification, and complete integration suites for networks, snapshots, templates etc., can be found in the test/integration/smoke directory. Almost all of these test suites use common library wrappers written around the test framework to simplify writing tests. These libraries are part of marvin.integration. You may start using these libraries at your convenience, but there's no better way to understand an API call's behaviour than to write the complete call out yourself. A short sketch of the wrappers in use appears after the list below.

The libraries take advantage of the fact that every resource - VirtualMachine, ISO, Template, PublicIp etc. - follows the pattern of:

  • create - where we cause creation of the resource eg: deployVirtualMachine
  • delete - where we delete our resource eg: deleteVolume
  • list - where we look for some state of the resource eg: listPods
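As an indicative sketch of what these wrappers look like in use - the keyword arguments and service dictionary keys below are assumptions, so check marvin.integration.lib.base and the suites in test/integration/smoke for the authoritative usage:

# Indicative only: keyword arguments and service dictionary keys are assumptions;
# the smoke suites show the real usage of these wrappers.
from marvin.cloudstackTestCase import cloudstackTestCase
from marvin.integration.lib.base import VirtualMachine

class TestWithWrappers(cloudstackTestCase):

    services = {
        "virtual_machine": {"displayname": "testvm",
                            "username": "root",
                            "password": "password"},
    }

    def test_deploy_with_wrappers(self):
        apiclient = self.testClient.getApiClient()

        # create - the wrapper issues deployVirtualMachine for us
        vm = VirtualMachine.create(apiclient, self.services["virtual_machine"],
                                   zoneid=1, templateid=5, serviceofferingid=1)

        # list - look up the state of the resource we just created
        vms = VirtualMachine.list(apiclient, id=vm.id)
        self.assertEqual(vms[0].state, "Running", "VM should be Running")

        # delete - clean up the resource we created
        vm.delete(apiclient)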

Acknowledgements

  • The original author of the testing framework - Edison Su
  • Maintenance and bug fixes - Prasanna Santhanam
  • Documentation - Prasanna and Edison

For any feedback or typo corrections, please email the -dev lists.
