Overview

Currently there is no way to modify the behaviour of simulated resources; they always return a success response for every agent command. The requirement is a way to selectively customise agent command responses in order to simulate scenarios such as VM deployment failure, VM deployment retry logic, user VM HA, etc.

This will primarily enable developers to add end-to-end tests for their features without relying on actual hardware. The current automation tests mostly cover positive scenarios, because negative scenarios are very hard to reproduce on real hardware. With the ability to simulate failures, negative scenarios can be tested effectively, which will also improve the overall code coverage of the tests.

 

Jira: CLOUDSTACK-6445

Requirements

Customisation/Mock

There needs to be a capability in the simulator framework to define mocks based on:

  • Response type
    • Success (already present)
    • Fail
    • Fault/Exception
    • Customised response (the user provides a JSON response for a specific agent command as input)
  • Delay (already present)
  • Count (number of times the mock needs to be executed)
  • Anything else as required

If no explicit mock is defined, the default behaviour is to return success. There also needs to be a way to define mocks at various scopes:

  • Zone
  • Pod
  • Cluster
  • Host
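
For illustration, a single mock definition needs to capture roughly the following information, sketched here as a plain Python dictionary (the field names are placeholders only; the actual API parameters are defined under Design below):

    # Illustrative sketch only: the information one mock definition needs to capture.
    # Field names are placeholders, not the final API parameters.
    mock_definition = {
        "command": "StopCommand",    # agent command whose response is being mocked
        "scope": {"clusterId": 5},   # zone / pod / cluster / host the mock applies to
        "response": "Fail",          # Success (default) / Fail / Fault / custom JSON
        "delay": 10,                 # seconds to wait before responding
        "count": 2,                  # execute twice, then fall back to the default
    }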

Verification

Once mocks are defined and tests are run against them, there should be a mechanism to verify whether a mock was actually executed.

Cleanup

Once existing mocks on a simulated resource have been executed, there needs to be a way to clear them so that subsequent tests can run from a clean state.

Design

API

Each of the above requirements will be realised through a user-level API, defined in the same way as existing APIs such as deployVirtualMachine and startVirtualMachine. All the commands will live in the already existing simulator plugin jar along with the rest of the implementation details. A usage sketch follows the API descriptions below.

  1. configureSimulator - This command already exists and will be enhanced to create mock behaviour based on the specified criteria.
    1. Inputs
      1. (Required) String cmdName - name of the agent command, e.g. StartCommand, StopCommand
      2. (Optional) Long zoneId - id of the zone
      3. (Optional) Long podId - id of the pod
      4. (Optional) Long clusterId - id of the cluster
      5. (Optional) Long hostId - id of the simulated resource
      6. (Optional) String responseType (Fail/Fault) - simulates a fixed failure or a fixed fault. There is no need to define an explicit mock for success; all agent commands return success by default
      7. (Optional) String responseJSON - exact response expected by the caller, mutually exclusive with the responseType input. (This is the agent JSON response, which can be obtained from the management server logs)
      8. (Optional) Long delay - delay in seconds before the response is returned. This can be used to simulate slow responses or timeouts
      9. (Optional) Long count - number of times the mock should be executed. Every time the mock is executed the count is decremented; when it reaches 0 the effect is the same as cleaning up the mock. A null value means the mock stays active until it is explicitly removed using the cleanup API below.
    2. Output
      1. Unique id for the mock. This can be used to access the mock later on
  2. queryMock - Query the status of a mock created using the previous API. This is useful for finding out whether the mock actually got executed during the CloudStack API call being tested. The status of the mock is updated each time it executes.
    1. Inputs
      1. (Required) id - mock id returned by the configureSimulator call
    2. Output
      1. Mock details - the count field indicates whether the mock was actually executed. If count is 0 at the end of the test, the mock executed the required number of times. For mocks defined without a count it is not possible to track whether they executed.
  3. cleanupMock - Removes a mock that is no longer needed so that subsequent tests can start from a fresh state.
    1. Inputs
      1. (Required) id - mock id returned by the configureSimulator call
    2. Output
      1. Success/fail
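
As a rough sketch of the mock lifecycle from a Marvin-based test, assuming Marvin generates command classes for these APIs the way it does for existing ones (the class and attribute names below simply mirror the parameter lists above and are assumptions until the implementation exists):

    # Sketch only: the queryMock/cleanupMock command classes and the attribute
    # names are assumptions derived from the parameter lists above.
    from marvin.cloudstackAPI import configureSimulator, queryMock, cleanupMock

    def slow_copy_mock_lifecycle(apiclient, host_id):
        # 1. configureSimulator: make the next CopyCommand on this simulated host
        #    respond only after a 120 second delay, e.g. to exercise timeout handling.
        cmd = configureSimulator.configureSimulatorCmd()
        cmd.name = "CopyCommand"
        cmd.hostid = host_id
        cmd.delay = 120          # seconds before the (success) response is returned
        cmd.count = 1            # apply to the next execution only
        mock = apiclient.configureSimulator(cmd)   # returns the unique mock id

        # ... exercise the CloudStack operation under test here ...

        # 2. queryMock: count == 0 means the mock executed the expected number of times.
        q = queryMock.queryMockCmd()
        q.id = mock.id
        assert apiclient.queryMock(q).count == 0

        # 3. cleanupMock: remove the mock so later tests get default success behaviour.
        c = cleanupMock.cleanupMockCmd()
        c.id = mock.id
        apiclient.cleanupMock(c)

Because no responseType or responseJSON is set in this sketch, the mock still returns success, only delayed; setting responseType to Fail or Fault, or supplying responseJSON, would change the outcome instead.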

DB changes

A separate database already exists for storing simulator-related data. It will be modified as appropriate to store the mock definitions described above.

How to write tests using these changes

Tests leverage these simulator enhancements by defining a mock for the failure being simulated, exercising the CloudStack API under test, verifying via queryMock that the mock actually fired, and removing the mock via cleanupMock so that later tests start clean.
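
A minimal sketch of such a test, assuming a Marvin test case running against a simulator-backed zone with a single eligible host (the queryMock/cleanupMock command classes, attribute names, and the IDs populated in setUp are all illustrative assumptions):

    # Sketch of a negative end-to-end test built on the proposed APIs. Command
    # classes and attribute names for the new APIs are assumptions that follow
    # the parameter lists in the Design section.
    from marvin.cloudstackTestCase import cloudstackTestCase
    from marvin.cloudstackAPI import (configureSimulator, queryMock, cleanupMock,
                                      deployVirtualMachine)

    class TestDeployVmStartFailure(cloudstackTestCase):

        def test_deploy_vm_start_failure(self):
            apiclient = self.testClient.getApiClient()

            # Make StartCommand fail once in the target cluster. With a single
            # eligible host the deployment has nowhere to retry and should fail.
            mock_cmd = configureSimulator.configureSimulatorCmd()
            mock_cmd.name = "StartCommand"
            mock_cmd.clusterid = self.cluster_id        # populated in setUp (not shown)
            mock_cmd.responsetype = "Fail"
            mock_cmd.count = 1
            mock = apiclient.configureSimulator(mock_cmd)

            try:
                # Exercise the API under test; the deployment is expected to fail.
                deploy_cmd = deployVirtualMachine.deployVirtualMachineCmd()
                deploy_cmd.zoneid = self.zone_id
                deploy_cmd.serviceofferingid = self.service_offering_id
                deploy_cmd.templateid = self.template_id
                self.assertRaises(Exception, apiclient.deployVirtualMachine, deploy_cmd)

                # Verify the mock actually fired before drawing any conclusions.
                q = queryMock.queryMockCmd()
                q.id = mock.id
                self.assertEqual(apiclient.queryMock(q).count, 0)
            finally:
                # Always clean up so subsequent tests see the default success behaviour.
                c = cleanupMock.cleanupMockCmd()
                c.id = mock.id
                apiclient.cleanupMock(c)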

TODOs

Currently the simulator uses a separate hypervisor type; it needs to be investigated what it would take to simulate real hypervisors such as XenServer and KVM, as well as various network and storage resources, with minimal changes to the framework.

 
