

Readability:

The notes below were documented while reviewing the issue linked here. Please follow these guidelines when raising issues while reviewing Marvin code or any test feature code under CloudStack.

https://reviews.apache.org/r/14058/

1. Add module-level doc strings. Use them to explain what a particular test module covers and, where possible, give specific examples. While reviewing, raise an issue if proper code comments/doc strings are missing. Proper comments and doc strings make it easy to generate and use documentation for the code with known documentation tools.
'''
@Dates: Module creation date and last-modified date
@Version: Version variable
@Author: Author variables. Mention NA if not to be used
@CopyRight: Proper copyright string. Mention NA if not to be used
@Dependencies: Dependencies, etc.
@Desc:
@Example:
'''

2. Add doc strings to each test/setup/teardown describing its steps/workflow. EX: Under the setUpClass class method, add doc strings as below (generic example). Currently the class-level setup performs many operations, and it is confusing to follow them without a workflow like this.
EX:
'''
Step 1: Create the DB object.
'''
'''
Step 2: Create the SSH object.
'''
'''
Step 3: Create the workflow, etc. Add these steps as doc strings for each step.
'''
The above will enhance readability as well as maintainability.

3. Separate data from test code. Remove the Services class from the test module altogether. Currently, code and data are mixed together, and this code is repeated in every test module. Instead, separate the data into a config file and pass that config file path to the getClsTestClient call to get a config object dynamically; use self.api_client to retrieve the configuration information, and change the lib/API for getClsTestClient if needed. This removes a lot of repetitive code across the test modules.
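To illustrate the separation, here is a minimal hypothetical sketch (the file name, keys, and values are illustrative, not the real Marvin config format): the test module holds no data and just parses a feature config file into a dictionary.

```python
import json

# Hypothetical feature config, normally stored in e.g. test_abc.cfg
SAMPLE_CONFIG = '''
{
    "vpc": {"name": "TestVPC", "cidr": "10.0.0.1/24"},
    "virtual_machine": {"username": "root", "ssh_port": 22}
}
'''

def load_test_config(text):
    """Parse the feature configuration into a plain dictionary."""
    return json.loads(text)

config = load_test_config(SAMPLE_CONFIG)
# The test now reads data from the dictionary instead of hard-coding it:
cidr = config["vpc"]["cidr"]
```

With this layout, changing test data never touches test logic.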

4. Initialization: Initialize all class and instance variables beforehand to None or to the required values, and provide adequate doc strings describing what each variable signifies. Currently there are many places where class and instance variables are used before being properly defined; even though we are using a dynamic language, initialize them to None and check later whether they are still None.
EX: cls.domain, cls.zone, cls.vpc_off, cls.net_off, etc. If these variables are initialized up front, they can later be checked to confirm that the objects were created.

The same applies to all instance variables. This is good programming practice.
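A minimal sketch of the up-front initialization pattern; the class name is hypothetical, the attribute names mirror the examples above, and the create/delete calls are stand-ins for the real library calls.

```python
class TestVPCNetworkSketch(object):
    # Class-level handles, initialized to None before any use.
    domain = None    # admin domain the tests run under
    zone = None      # target zone
    vpc_off = None   # VPC offering created in setUpClass
    net_off = None   # network offering created in setUpClass

    @classmethod
    def setUpClass(cls):
        cls.vpc_off = "vpc-offering-id"   # stand-in for VpcOffering.create(...)

    @classmethod
    def tearDownClass(cls):
        # Because every handle starts as None, cleanup can safely check
        # what was actually created before trying to delete it.
        if cls.vpc_off is not None:
            cls.vpc_off = None            # stand-in for the delete call
```

Cleanup and verification code can now test `is None` instead of risking an AttributeError.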

5. Why do we need to change a value as part of setup, as in the code below?
cls.services["vpc"]["cidr"] = '10.1.1.1/16'
Can't we keep this in the config file, as mentioned in item 3 above? What is the need for assigning a static value dynamically here?

6. In a lot of places there are separate create calls; EX: below, there are separate calls for both "Ingress" and "Egress". Can we generalize them into a single API call, EX: Create(entity_type, params)? Pass all parameters to it along with an entity type such as "NetworkACL" or "PublicIPAddress"; params carries the parameters needed to create that entity. With this, we avoid repetitive code and ease maintenance. I believe we don't need separate calls for creating PublicIPAddress, NetworkACL, etc. Use a simple "factory" design to create a given type, make it a library, and keep it abstracted from the feature code. This will not add much code to the feature tests.
cls.nwacl_internet_1 = NetworkACL.create(
    cls.api_client,
    networkid=cls.network_1.id,
    services=cls.services["icmp_rule"],
    traffictype='Egress'
)
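The factory idea above can be sketched as follows; this is a hypothetical illustration (the entity classes here are simple stand-ins, not the real Marvin classes), showing one generic create call dispatching on entity type.

```python
# Stand-in entity classes for illustration only.
class NetworkACL(object):
    def __init__(self, **params):
        self.params = params

class PublicIPAddress(object):
    def __init__(self, **params):
        self.params = params

# Registry mapping entity type names to their classes.
ENTITY_REGISTRY = {
    "NetworkACL": NetworkACL,
    "PublicIPAddress": PublicIPAddress,
}

def create(entity_type, **params):
    """Create any registered entity through one generic call."""
    try:
        cls = ENTITY_REGISTRY[entity_type]
    except KeyError:
        raise ValueError("Unknown entity type: %s" % entity_type)
    return cls(**params)

acl = create("NetworkACL", networkid="net-1", traffictype="Egress")
```

New entity types then only need a registry entry, not a new create call in every test.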

7. Although many entities are created (EX: NetworkACL, service offering, IP, etc.), the clean-up list contains only a few of them. Any thoughts?

8. In the code below, for example, there are multiple asserts: one for whether the type is a list, then a length check, then a parameter check. Why can't we club this validation into one single call that checks whether the type is a list and verifies whether the expected entry is present? Abstract it from the feature code as a library call.
With this, the code becomes two or three lines instead of multiple repetitive assertion lines. API usage is best when we have repetitive tasks like this, and it will enhance maintainability. Add a library call such as Validate(entity_type, params), with entity_type as the list and params as what to verify. These assertions appear in a lot of places and seem repetitive; they just bloat the code unnecessarily, which could be avoided. I believe the pattern of asserting that the type is a list, checking its length, and then using the first element is too repetitive and can be better fine-tuned. What say?

vpc_offs = VpcOffering.list(
    self.apiclient,
    id=vpc_offering.id
)
self.assertEqual(
    isinstance(vpc_offs, list),
    True,
    "List VPC offerings should return a valid list"
)
self.assertEqual(
    vpc_offering.name,
    vpc_offs[0].name,
    "Name of the VPC offering should match with listVPCOff data"
)
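A sketch of the combined validation helper suggested above; the function name, signature, and return convention are hypothetical, not an existing Marvin API.

```python
PASS, FAIL = 1, 0

def validate_list(response, expected_name=None):
    """Collapse the usual isinstance/length/field asserts into one call.

    Returns (PASS, first_element) on success, (FAIL, reason) otherwise.
    """
    if not isinstance(response, list):
        return (FAIL, "response is not a list")
    if len(response) == 0:
        return (FAIL, "response list is empty")
    first = response[0]
    if expected_name is not None and getattr(first, "name", None) != expected_name:
        return (FAIL, "name mismatch")
    return (PASS, first)
```

A test would then reduce to a single assertion, e.g. `status, detail = validate_list(vpc_offs, expected_name=vpc_offering.name)` followed by `self.assertEqual(status, PASS, detail)`.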

9. Use uniform return codes, status codes, and global constants across all test features. Maintain these codes in a common library or utils. See the status codes example below.
'''
@Desc: Status codes to be used across various test features.
'''
class CSStatusCodes:
    SUCCESS = 1
    FAILED = 0
    INGRESS = X
    EGRESS = Y
    NETWORKACL = Z

(or) keep all the codes as part of the __builtin__ module; that way, we can use those codes directly in other features.

Use the above status codes in the feature code for better readability, EX:

from __builtin__ import *
obj = create_ssh(params)
if obj is not None:
    if obj.execute() != SUCCESS:
        return FAILED

10. Verify the output of method calls and object creation. In the code below, for example, we never verify the ssh_2 handle: whether it is None, some other return type, or whether the client call succeeded at all. We go straight to the execute call. If the library call does not return None on failure, change the library accordingly, or raise an issue for a library change.

As a new user, how do we know whether ssh_2 succeeded or not?

ssh_2 = self.vm_1.get_ssh_client(
    ipaddress=self.public_ip_2.ipaddress.ipaddress,
    reconnect=True)
self.debug("SSH into VM is successfully")

11. In the code below, there is no check of whether "execute" succeeded before we go straight to str(res):
res = ssh_2.execute("ping -c 1 www.google.com")
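Items 10 and 11 can be sketched together as below; `get_ssh_client` and `execute` stand in for the real library calls, and the wrapper function is hypothetical.

```python
def ping_through_vm(vm, ipaddress):
    """Verify both the SSH handle and the execute result before use."""
    ssh = vm.get_ssh_client(ipaddress=ipaddress, reconnect=True)
    if ssh is None:                       # check the handle first (item 10)
        raise AssertionError("SSH client creation failed for %s" % ipaddress)
    res = ssh.execute("ping -c 1 www.google.com")
    if not res:                           # check execute() output (item 11)
        raise AssertionError("ping returned no output")
    return res
```

Only after both checks pass does the test go on to inspect `res`.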

12. Related to the log messages in the same code above: the logging is a little haphazard. It is better to follow a format like the one below for a given test run, using a decorator for logging if possible. How and where can a user set the error log? Separate debug logs from error logs; if a library enhancement is needed, raise an issue so that whoever is responsible can add it. The log of a sample test run should read like this:

::::::::::::::: Test module "A" started :::::::::::::::

:::::::: Testcase1 Started ::::::::
Test case flow: step1, step2, step3, etc.
Test case result: PASS/FAIL
Any exceptions ...
:::::::: Testcase1 Ended ::::::::
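The per-test log format above can be produced with a decorator; this is a minimal sketch (the decorator name and `print`-based logging are illustrative; real code would use the test framework's logger).

```python
import functools

def log_test(func):
    """Emit the delimited start/end and PASS/FAIL lines for one test."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(":::::::: %s Started ::::::::" % func.__name__)
        try:
            result = func(*args, **kwargs)
            print("Test case result: PASS")
            return result
        except Exception as e:
            print("Test case result: FAIL (%s)" % e)
            raise
        finally:
            print(":::::::: %s Ended ::::::::" % func.__name__)
    return wrapper

@log_test
def test_sample():
    return "ok"
```

Every decorated test then produces a uniformly delimited log block without repeated logging code.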

13. In the code below, for example, I didn't see any check of whether the VM creation or start/stop operations actually succeeded. Where did we check that the VMs started successfully? What if a code exception unrelated to the functional failure is raised? Are we assuming that raising an exception is synonymous with the VM not starting, and if so, why? Why can't we check the start operation's return code explicitly? If the library call "start" does not return a proper status code, raise an issue; otherwise, somebody will have to rewrite the code later to add proper verification.

self.vm_1.start(self.apiclient)
self.vm_2.start(self.apiclient)

14. In the code below, for example, we sometimes delete the VM and then immediately check the NAT rule list; how do we know the delete succeeded? Sometimes we add a random delay. Why random? Write a simple poll API that checks whether the API operation has completed. The current approach is not robust and, going ahead, I believe it will lead to a lot of maintenance issues.
    self.vm_1.delete(self.apiclient)
    self.vm_2.delete(self.apiclient)
except Exception as e:
    self.fail("Failed to stop the virtual instances, %s" % e)

# Check if the network rules still exists after Vm stop
self.debug("Checking if NAT rules ")
nat_rules = NATRule.list(
    self.apiclient,
    id=self.nat_rule.id,
    listall=True
)

15. As mentioned earlier, write your test case code as a flow: step 1, step 2, step 3, etc. Although there are some doc strings, they do not map to the flow of the code. Please make sure steps are added this way, so that a new user can tell what steps a test follows and what is checked at each step.

16. In the code below, for example, it is not clear exactly what we are verifying; the code just executes some list APIs, and then what?
with self.assertRaises(Exception):
    nat_rules = NATRule.list(
        self.apiclient,
        id=self.nat_rule.id,
        listall=True
    )

lb_rules = LoadBalancerRule.list(
    self.apiclient,
    id=self.lb_rule.id,
    listall=True
)

17. Start, reboot, and delete VM calls appear in many places, but nowhere do we verify whether the operation succeeded. Does the CloudStack API not provide this facility, or is our library missing this information? If so, can we enhance it or add this information?

I believe it is the test code that is missing this check.

18. Add try/except blocks wherever possible around the major code flow of test cases: start the try at the beginning and handle exceptions just before the return. At least catch generic exceptions and add them to the log, so that at the end we know which test cases threw exceptions.

19. Make sure resources are cleaned up even if there is an exception. Can we follow a consistent clean-up pattern that runs regardless of exceptions?
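One possible pattern: register every resource the moment it is created, and run the cleanup list unconditionally in teardown. The class below is a hypothetical sketch, not an existing Marvin helper.

```python
class CleanupTracker(object):
    """Register each resource right after creation; cleanup then works
    even if the test raised halfway through."""

    def __init__(self):
        self.cleanup = []

    def register(self, resource):
        self.cleanup.append(resource)
        return resource

    def run_cleanup(self):
        errors = []
        for resource in reversed(self.cleanup):   # delete in reverse order
            try:
                resource.delete()
            except Exception as e:
                errors.append(e)                  # keep going, report later
        del self.cleanup[:]
        return errors
```

Deleting in reverse creation order respects dependencies (e.g. rules before networks), and one failed delete no longer aborts the rest of the cleanup.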

20. Componentize repetitive code into calls, EX: assertions, create, delete, and list calls, to reduce the number of lines.

21. Also attach run logs to the review; this helps the reviewer map the logs to the code and provides sufficient detail on how the test case steps work.

22. Run pylint, tabnanny, and other code checker tools over the test code before committing. Follow PEP 8 and the known coding standards below, both for naming conventions and other standards. http://www.python.org/dev/peps/pep-0008/ http://google-styleguide.googlecode.com/svn/trunk/pyguide.html

23. Raise all exceptions, including generic ones, through the CloudStack-related exception module. That way, all exceptions can be logged and tracked for debugging through one single module; don't write separate exception classes for each test.

24. Instead of adding multiple lines of asserts to the test code as below, use the simple "verifyElementInList" utility function added to cs/tools/integration/lib/utils.py. See the usage below.

self.assertEqual(
    isinstance(list_network_offerings_response, list),
    True,
    "listNetworkOfferings returned invalid object in response."
)

self.assertNotEqual(
    len(list_network_offerings_response),
    0,
    "listNetworkOfferings returned empty list."
)

self.assertEqual(
    list_network_offerings_response[0].state,
    "Enabled",
    "The network offering state should get updated to Enabled."
)

If we want to verify that "a" is a list, that it is not empty, and that element 1 is present in the list, then the line below (a generic example; change it to whichever assert we want) should suffice:

assert verifyElementInList(a, 1)[0] == PASS, "TC FAILED"

We just need to import verifyElementInList from utils before using it.

25. If you are a reviewer, raise a review issue when there are not enough doc strings to understand the test feature code. Module-level and class-level doc strings with specific usages or examples would be ideal; flag wherever relevant documentation is missing.

26. Remove hard-coded string values (EX: IP addresses, URLs, hostnames, and other configuration information such as the Services class) from test feature code. Instead, separate it into a config file (if the test feature is named test_abc.py, follow the naming convention test_abc.cfg for its configuration) and place this file in the config folder under cs/tools/marvin/marvin. Once checked in, the new configuration is available to the test feature as a dictionary through the ConfigManager interface. Use the new ConfigManager class support added under marvin/marvin/configGenerator.py for this. See the example below.

'''
1. cloudstackTestClient will now have the configuration object exposed
   and available through its method getConfigParser()
2. The object will have all options exposed through ConfigManager
3. cls.configObj will provide the ConfigManager interface; the user can
   use it to parse and get the dictionary representation of the
   configuration
'''
@classmethod
def setUpClass(cls):
    cls.api_client = super(
        TestAddMultipleNetScaler,
        cls
    ).getClsTestClient().getApiClient()
    cls.configObj = super(
        TestAddMultipleNetScaler,
        cls
    ).getConfigParser()
    if cls.configObj is not None:
        '''
        cls.configDict will now have all the configuration parsed and
        available as a dictionary, accessible throughout the test
        feature code wherever applicable
        '''
        cls.configDict = cls.configObj.getConfig("/setup/config/myfeature.cfg", None)

Now cls.configDict holds the dictionary representation of your configuration, and we can use it throughout the code.

27. For better readability and maintainability, maintain all widely and frequently used codes under codes.py (a new module added under cs/tools/marvin/marvin/codes.py). See codes.py for specific examples.

EX: Replace strings such as "Disabled", "Enabled", and "network_offering" in test code with readable, well-named constants placed in codes.py, and use them uniformly across all test features. With this, a change is needed in only one place and becomes available to all features, and we also avoid different naming notations across test features.

28. Use uniform naming and object creation wherever possible. In the current usage below, the API client is created in two different ways (cls.api_client and self.apiclient), even though both are the same object. Instead, use a common way of creating and retrieving the API client; use the approach from setUpClass in the setUp method as well.

class TestPublicIP(cloudstackTestCase):

    def setUp(self):
        self.apiclient = self.testClient.getApiClient()
        self.services = Services().services

    @classmethod
    def setUpClass(cls):
        cls.api_client = super(TestPublicIP, cls).getClsTestClient().getApiClient()
        cls.services = Services().services

29. Use assertions wherever possible for product failures, avoid using

30. For any new method added, make sure it has proper return codes wherever possible. We are currently using exceptions to represent the failure case. Exception handling is good to have, but it serves a different purpose: don't treat a raised exception as a function's or method's return value for noting failure. Instead, provide explicit return values for new code wherever possible, check the return codes, and then raise an assertion or exception accordingly.
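A minimal sketch of the explicit-return-code style; the function, the VM handle, and the `state` attribute are hypothetical stand-ins for the real library objects.

```python
SUCCESS, FAILED = 1, 0

def start_vm(vm):
    """Return an explicit status code instead of signalling failure
    only by raising an exception."""
    try:
        vm.start()
    except Exception:
        return FAILED
    # Verify the observable outcome, not just the absence of an exception.
    return SUCCESS if getattr(vm, "state", None) == "Running" else FAILED
```

Callers then write `self.assertEqual(start_vm(vm), SUCCESS, "VM failed to start")` rather than inferring success from "no exception was raised".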

31. Whenever we pick up a feature for automation, add a new "Automation" field to the test case document wherever it lives (Excel, a TMS like Quality Center, etc.). This helps, during reviews and thereafter, to know which test cases are still pending automation for that feature, even if the feature itself is considered automated.

32. Don't add asserts in library functions or other non-test functions. Add asserts/failures only in your test cases.

33. Add doc strings rather than comments to a test case, as below. This makes it easy to extract the test case doc strings automatically for documentation or other purposes.

Instead of comments as below:

# Validate the following
# 1. Deploy a vm and create 5 data disk
# 2. Attach all the created Volume to the vm.
# 3. Reboot the VM. VM should be successfully rebooted
# 4. Stop the VM. Stop VM should be successful
# 5. Start The VM. Start VM should be successful

Use DocStrings:

'''
@Desc: test_abc_functionality
@Steps:
1. Deploy a vm and create 5 data disk
2. Attach all the created Volume to the vm.
3. Reboot the VM. VM should be successfully rebooted
4. Stop the VM. Stop VM should be successful
5. Start The VM. Start VM should be successful
'''

34. Catch exceptions in setup as well, and if setup raises an exception, don't run the tests. Setup failures often do not trigger tearDown, so the resources created in setup are never cleaned up, and we are seeing many clean-up failures. Clean-up must happen whether or not the tests ran and whether or not setup failed, provided setup ran at least once.
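The setup-failure cleanup can be sketched as below; `safe_setup` is a hypothetical helper that rolls back whatever was created before re-raising, since tearDown will not run when setup fails.

```python
def safe_setup(create_steps, cleanup):
    """Run creation steps; on failure, clean up what was created and re-raise.

    create_steps: callables that each create and return one resource.
    cleanup: callable invoked with each created resource on failure.
    """
    created = []
    try:
        for step in create_steps:
            created.append(step())
    except Exception:
        # tearDown won't be called for a failed setup, so undo here.
        for resource in reversed(created):
            cleanup(resource)
        raise
    return created
```

This guarantees that a half-finished setup never leaks resources, regardless of which step failed.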

35. We added runCommand to sshClient.py; please use runCommand instead of the current execute command. The current execute command's return value contains output and error without differentiating between them. The new command provides the status of the executed command and the corresponding output or error accordingly.
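Consuming the runCommand-style result might look like the sketch below; the exact result shape (a dict with status, stdout, and stderr) follows the description above but should be treated as an assumption, and `check_command` is a hypothetical helper.

```python
def check_command(result):
    """Fail fast on a failed command; return its output on success.

    Assumes a runCommand-style result dict with 'status', 'stdout',
    and 'stderr' keys, as described above.
    """
    if result.get("status") != "SUCCESS":
        raise AssertionError("command failed: %s" % result.get("stderr"))
    return result["stdout"]
```

Tests then branch on a real status instead of guessing whether a combined output/error blob indicates failure.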
