This plan is to test the new logging functionality presented in the Status Update Design.
This plan will define the various areas that must be tested to validate that the new logging meets its requirements. This information will then be used during the subsequent technical design and development phases to ensure that the testing approaches defined in this plan are possible.
The plan will focus on two areas:
- Performance Testing
- Operation Testing
One of the biggest risks of adding more logging to the broker is the potential performance impact, both in terms of a) creating the messages to log and b) actually writing them. Therefore, validating that our changes have a negligible performance impact is key.
A series of tests must be written that cover all the log messages the broker will generate. The approach to validating any performance changes should then be to run the tests multiple times to generate an average performance figure. The impact of the logging additions must be negligible. The performance test suite has shown that there can be up to 5% variance between test runs, so if this testing is to be performed as an automated system test we must be careful to ensure that we do not end up with test failures due to an unusually high variance. Testing should first establish a baseline from which we can determine how much variance occurs between two test runs; this variance can then be used to set the failure criteria. It should be noted, however, that such a comparison technique will only ensure that the impact of the logging does not shift between runs. The test will not address any drift in the performance of the broker as a whole, only the difference between logging on and logging off.
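The comparison described above can be sketched as a simple pass/fail check. This is only an illustration of the technique: the class and method names are invented for this sketch, and the 5% threshold stands in for whatever figure the baseline runs actually establish.

```java
import java.util.List;

// Illustrative sketch: compare the mean run time with logging on
// against the mean with logging off, failing only when the relative
// slowdown exceeds the variance observed in baseline runs.
public class LoggingPerfCheck {

    static double mean(List<Double> samples) {
        double total = 0;
        for (double s : samples) {
            total += s;
        }
        return total / samples.size();
    }

    // Returns true when the relative slowdown of the logging-on runs
    // stays within the allowed variance (e.g. 0.05 for the observed 5%).
    static boolean withinVariance(List<Double> loggingOff,
                                  List<Double> loggingOn,
                                  double allowedVariance) {
        double off = mean(loggingOff);
        double on = mean(loggingOn);
        return (on - off) / off <= allowedVariance;
    }

    public static void main(String[] args) {
        List<Double> off = List.of(100.0, 102.0, 98.0);
        List<Double> on = List.of(101.0, 103.0, 100.0);
        System.out.println(withinVariance(off, on, 0.05));
    }
}
```

Note that this only guards the on/off delta; as stated above, it cannot detect overall broker performance drift between test-suite revisions.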
The tests should be run with interleaved setups, i.e. logging on then logging off. This will help minimise any external factors that could impact the timing. The time spent logging, however, will still represent a very small percentage of the time for a test case, so the test will be more susceptible to external factors during the test run.
There are two components to testing the functionality of the new logging.
- Unit Testing
- System Testing
Unit testing must be completed on each module, with code coverage of at least 80% (aiming towards 100%) of the code base. Unit testing, however, will only verify that the module performs as expected with the use of a test output logger.
System testing will need to be performed to validate that the correct log messages appear in the right order when run as a full system. By using the test output logger used for the unit testing it will be possible to validate that an InVM broker correctly logs at the appropriate time. To complete system testing, the log4j output from an external broker test run must be examined to ensure it contains the expected output. The alerting tests already perform this sort of external log validation, so this should be easy to replicate.
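The in-order check described above can be sketched as a small helper that scans captured log lines for the expected status message IDs. The class name and the sample log lines in the tests are invented for illustration; they are not actual broker output.

```java
import java.util.List;

// Illustrative sketch: verify that the expected status message IDs
// appear in the expected order within captured log output. Unrelated
// log lines may be interleaved between the expected messages.
public class LogOrderValidator {

    // Returns true if each expected ID is found, in order, scanning
    // the log lines once from top to bottom.
    static boolean containsInOrder(List<String> logLines, List<String> expectedIds) {
        int next = 0;
        for (String line : logLines) {
            if (next < expectedIds.size() && line.contains(expectedIds.get(next))) {
                next++;
            }
        }
        return next == expectedIds.size();
    }
}
```

The same helper shape works for both the InVM test output logger and the external broker's log4j file, since both reduce to a list of lines.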
Additionally, we will want to add tests to understand how the system behaves under certain failure conditions. This will not be required as part of this initial work. However, when we remove log4j we need to understand what the differences will be with any new logging framework in failure situations, i.e. disk full and disk loss (crash, NFS delay). The difference with a disk loss (crash, NFS delay) is that the write IO will not necessarily respond immediately, while a disk full notification will usually respond immediately.
Performance Test Plan
The performance of the broker can be measured from the JMS client by ensuring that all the updated modules are executed in the test run. Timing how long it takes to start up, create a new JMS Session, subscribe, and then cleanly shut everything down will ensure all of the new log messages have the opportunity to be logged. The overhead of broker startup and shutdown will have a large effect on the test run; however, this should make it easier to compare startup times, as they should be very similar.
Operational Test Plan
The Functional Specification lists the various status log messages that the broker will be configured to produce. This section details how testing will be carried out for each of these log statements.
The broker messages can be split into two categories: startup and shutdown messages. The testing here focuses on the broker messages and ignores the fact that, during system testing of the broker, other log messages will also be printed.
As all the startup messages are printed together on startup, they cannot be triggered individually. By monitoring the logging output on startup we can verify that messages BRK-1001, 1002, 1004, 1006 and 1007 are logged. It is expected that the output would be very similar to this:
Additionally, testing should be performed with SSL enabled to verify that the additional BRK-1002 message is logged.
On shutdown the broker will log a BRK-1003 for each interface that has been started, so if we have started both the TCP and the TCP/SSL listeners then we would expect both to be printed out before the Stopped message.
The management console also has a startup and shutdown phase like the broker.
The Management console uses two ports so both of these will be logged at startup. Two test runs must be performed to validate that the 'SSL Keystore' output is only present when correctly enabled.
During shutdown the two listeners created during startup will shut down. The use of SSL will not provide any additional shutdown message to verify.
Virtualhosts cannot be programmatically created, so the log messages will have to be found during the startup/shutdown of the broker.
During startup the following VHT messages will be logged as the Virtualhost is created.
On shutdown the Virtualhost will only log that it has been closed.
The MessageStore details will be logged as part of the Virtualhost startup and shutdown.
Aside from the easy to test create(MST-1001) and store location(MST-1002) messages, a persistent store such as DerbyDB will also need to be used for testing so that the recovery messages MST-1004-6 can be tested.
It is expected that on recovery start MST-1004 will be logged without a queue name. Then, as each queue is recovered, a start (MST-1004), count (MST-1005) and complete (MST-1006) message will be logged. Only when all queues have been recovered will a final complete (MST-1006) be logged. This would give a sequence such as:
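The recovery pattern described above can be sketched as a generator for the expected message sequence, which a test could then feed to an in-order log check. The class name and the `MST-nnnn : queue` line format are assumptions for this sketch; the real output format should be taken from the Functional Specification.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: build the expected MessageStore recovery
// sequence for a list of queues — a store-level MST-1004 with no
// queue name, then start/count/complete per queue, then a final
// overall MST-1006.
public class RecoverySequence {

    static List<String> expectedFor(List<String> queues) {
        List<String> expected = new ArrayList<>();
        expected.add("MST-1004"); // recovery start, no queue name
        for (String queue : queues) {
            expected.add("MST-1004 : " + queue); // queue recovery start
            expected.add("MST-1005 : " + queue); // recovered message count
            expected.add("MST-1006 : " + queue); // queue recovery complete
        }
        expected.add("MST-1006"); // overall recovery complete
        return expected;
    }
}
```

For a persistent store such as DerbyDB, the test would populate the queues, restart the broker, and compare the captured log against this expected sequence.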
On shutdown the MessageStore will only log that it has been closed.
New connections can easily be tested in isolation by connecting to the running broker. The connection open and close log messages should be presented in response to the new connection and the closure of the connection.
As with Connection creation, Channel creation can be tested in isolation. Whilst a channel will be created along with the Connection, a new channel can also be created from the Java client by creating a new JMS Session. The current Java client will start all Channels on the JMS Connection flowed. This means that an initial 'CHN-1002 : Flow Stopped' message will be logged. Only when the JMS Connection is started will the Channel be unflowed and a 'CHN-1002 : Flow Started' message logged.
Additional testing should be done to ensure that CHN-1002 messages are logged when the client exceeds its prefetch count and the broker flows the client.
There are a number of properties that can be set on a Queue, and as a result they all need to be tested in isolation to validate that the correct log message is generated based on this template:
Property combinations to test:
- Durable | Transient
This results in 8 tests, as each combination of AutoDelete|Priority|None is tested against Durable|Transient.
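The test matrix above can be sketched as a simple enumeration. Note an assumption in this sketch: a straight cross of AutoDelete|Priority|None against Durable|Transient gives only 6 cases, so to reach the stated 8 tests the AutoDelete+Priority pairing is included as a fourth option; that reading should be confirmed against the intended matrix.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: enumerate the queue-property combinations to
// test, crossing lifetime (Durable|Transient) with the queue options.
// "AutoDelete+Priority" is an assumed fourth option to reach 8 cases.
public class QueuePropertyMatrix {

    static List<String> combinations() {
        List<String> cases = new ArrayList<>();
        String[] lifetimes = {"Durable", "Transient"};
        String[] options = {"None", "AutoDelete", "Priority", "AutoDelete+Priority"};
        for (String lifetime : lifetimes) {
            for (String option : options) {
                cases.add(lifetime + "/" + option);
            }
        }
        return cases;
    }
}
```

Driving the isolated queue-creation tests from such an enumeration keeps each combination's expected log message next to the properties that produce it.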
A simple deletion message is logged when the queue is finally deleted.
As with Queue logging the Exchange has the Durable option that must be tested independently. During broker startup the default exchanges will be created and so will be logged. Testing should ensure that these log messages are indeed correctly logged.
However, as Exchanges can be programmatically declared, this should also be tested.
Bindings will be created via JMS in the Java client alongside Queue and Subscriber creation. The Qpid Java client uses the arguments field to ensure exclusive queues do not get filled with messages when a selector is in use. This adds another dimension to the testing: part of the system testing needs to include validation of the message prefix (the content between [ ]), and testing needs to cover the binding of Queues to different exchanges and with a variety of routing keys.
This gives us the following dimensions of testing that needs to be performed:
If we take bindings between amq.direct and amq.topic and vary the routing key (for Queues the default is the queue name when used with the Java client), we have 4 test cases that we need to run on Exclusive and Non-Exclusive queues.
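The binding matrix above can be sketched the same way. The "default" vs "custom" routing-key labels are an assumed reading of the two routing-key variants being compared; the two exchanges and the exclusivity split come directly from the text.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: enumerate the binding test matrix — two
// exchanges crossed with two routing-key variants gives the 4 cases,
// each run on both Exclusive and Non-Exclusive queues.
public class BindingMatrix {

    static List<String> cases() {
        List<String> cases = new ArrayList<>();
        String[] exchanges = {"amq.direct", "amq.topic"};
        String[] routingKeys = {"default", "custom"};
        String[] exclusivity = {"Exclusive", "Non-Exclusive"};
        for (String exchange : exchanges) {
            for (String key : routingKeys) {
                for (String mode : exclusivity) {
                    cases.add(exchange + "/" + key + "/" + mode);
                }
            }
        }
        return cases;
    }
}
```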
There are two types of subscription, durable and non-durable. The durable subscription can be tested from the Java Client by creating a JMS Durable Topic Subscription. Additionally both types of subscription can have an additional argument for the JMS selector. The non-durable subscription type can also operate as a JMS Queue Browser which has an additional 'autoclose' argument.
This makes 5 different possible Create (SUB-1001) log messages.
The next step is to provide an enumerated list of tests for completion.
Each test in the list will include:
- Functional description of what is being tested.
- Input (actions and/or data)
- Expected outputs:
- ... that will cause failure
- ... that can safely be ignored.