The Curiosity Blog

5 Reasons to Model During QA, Part 3/5: Coverage Focuses QA

Written by Thomas Pryce | 24 July 2019

Welcome to part 3/5 of 5 Reasons to Model During QA!

Part one of this series discussed how modelling enables “shift left” QA, eradicating potentially costly defects as they arise during the design phase.

Part two then shifted focus “right”, to testing code built from the requirements. It considered the significant time gains achieved by generating test cases, test scripts and test data automatically.

Model-Based Testing thereby makes it possible to test complex applications sufficiently, even within short iterations. Today’s article continues this theme, focusing in particular on the test coverage gains that accompany the increased testing efficiency.

Manual Test Creation Leaves Systems Exposed to Costly Defects

Manual test creation is not only slow and repetitive; it also produces an undesirable combination of over-testing and under-testing. Overall test coverage remains low, while certain logic is wastefully tested again and again. QA in turn does not sufficiently mitigate the risk of damaging defects, leaving a system exposed to costly bugs.

The sheer complexity of modern applications means that creating test cases manually and unsystematically cannot reach the coverage required for true quality assurance. Multi-tiered systems have a multitude of interrelated components, as demonstrated in the following dependency map:

A dependency map created from around 100,000 lines of C# code. This map only shows the relationships between the components in the system. The picture becomes vastly more complex once the intertwined logic contained in each component is factored in.

The above dependency map reflects an application with around 100,000 lines of code, and modern applications will typically contain millions of possible paths through their logic. This is far more than any one person can hold in their head, and 2018 research suggests that 66% of organisations struggle “merely deciding what to test”.[1]
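
To see why path counts explode so quickly, consider a minimal sketch in Python, assuming the flowchart is stored as a simple adjacency list (the node names here are hypothetical, not taken from any real model):

```python
# Counting the distinct paths through a flowchart modelled as a directed
# acyclic graph. Each branch multiplies the number of paths, which is why
# even modest models quickly exceed what anyone can enumerate by hand.

# Hypothetical toy flowchart: each node maps to the nodes it can flow into.
flowchart = {
    "start":   ["check_a"],
    "check_a": ["check_b", "error_a"],
    "check_b": ["check_c", "error_b"],
    "check_c": ["end", "error_c"],
    "error_a": ["end"],
    "error_b": ["end"],
    "error_c": ["end"],
    "end":     [],
}

def count_paths(graph, node="start", target="end"):
    """Count the distinct paths from node to target recursively."""
    if node == target:
        return 1
    return sum(count_paths(graph, nxt, target) for nxt in graph[node])

print(count_paths(flowchart))  # 4 paths in this toy model
print(2 ** 20)                 # a chain of just 20 binary decisions: 1,048,576 paths
```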

Manual test creation therefore typically under-tests complex systems severely. The tests tend to pick off the most obvious, “happy” path scenarios first, testing these expected behaviours repeatedly. Negative scenarios and unexpected results go untested, even though it is these outliers that can cause the most severe defects in production.

The result is resource-intensive, wasteful over-testing that nonetheless leaves systems exposed to bugs. Low test coverage persists even with test execution automation, as executing tests automatically does nothing to improve the quality of the test suite itself. What is needed instead is a measurable, systematic approach to identifying what to test, along with an efficient method for creating those tests.

Model-Based Test Generation: Systematic and Measurable

Model-Based Testing enables such a systematic and measurable approach to test case design. It harnesses the power of computer processing and the reliability of mathematics to identify all the tests contained in massively complex systems. This is possible even when the logic exceeds what any human mind could comprehend.

Part two of this series set out how mathematically precise flowcharts enable the automated identification of every path through a model. Each logical journey is equivalent to a test case, and automated algorithms can therefore identify every test in the flowchart. Using Test Modeller, subflows can additionally be used to embed lower-level components within master models, rapidly creating comprehensive test cases for complex systems:

Subflows integrate lower-level functionality into master flowcharts, enabling rapid and reliable test case generation for complex systems.
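
To illustrate the principle (a minimal sketch of exhaustive path enumeration, not Test Modeller's actual engine), every root-to-end path through a flowchart can be found with a simple depth-first traversal, and each resulting path is one test case. The login flowchart below is hypothetical:

```python
# Enumerating every path through a flowchart: each path is one test case.
def all_paths(graph, node="start", target="end", path=()):
    """Yield every path from node to target as a tuple of node names."""
    path = path + (node,)
    if node == target:
        yield path
        return
    for nxt in graph.get(node, []):
        yield from all_paths(graph, nxt, target, path)

# Hypothetical login flowchart, including a negative ("invalid") branch.
login_model = {
    "start":       ["enter_creds"],
    "enter_creds": ["valid", "invalid"],
    "valid":       ["dashboard"],
    "invalid":     ["error_msg"],
    "dashboard":   ["end"],
    "error_msg":   ["end"],
    "end":         [],
}

for i, path in enumerate(all_paths(login_model), start=1):
    print(f"Test case {i}: {' -> '.join(path)}")
```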

Generating tests from models introduces measurability to test design. Test coverage is proportional to the logic contained in the model, and tests can be generated to touch all the logic contained in the model at least once.

Multiple algorithms might be used, for instance testing every logical step (node) in the model, or covering every connecting “edge” (arrow) between the blocks at least once.

These techniques generate the smallest number of tests needed to cover the model, reducing testing time while still covering every positive and negative scenario. Testing in turn avoids wasteful over-testing, while still exercising every distinct combination of logic and data exactly once.
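
As an illustration of edge coverage, here is a minimal sketch using a greedy set-cover heuristic; it is not Test Modeller's actual optimisation algorithm, and the paths below are hypothetical:

```python
# Selecting a small set of paths that covers every edge ("arrow") in the
# model at least once, via a greedy set-cover heuristic.
def path_edges(path):
    """The edges a path exercises, as (from_node, to_node) pairs."""
    return set(zip(path, path[1:]))

def minimal_edge_cover(paths):
    """Greedily pick paths until every edge seen in any path is covered."""
    uncovered = set().union(*map(path_edges, paths))
    chosen = []
    while uncovered:
        best = max(paths, key=lambda p: len(path_edges(p) & uncovered))
        chosen.append(best)
        uncovered -= path_edges(best)
    return chosen

# Hypothetical paths generated from a login flowchart:
paths = [
    ["start", "creds", "valid", "dashboard", "end"],
    ["start", "creds", "invalid", "error", "end"],
    ["start", "creds", "invalid", "locked", "end"],
]
for test in minimal_edge_cover(paths):
    print(" -> ".join(test))
```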

Generating tests from logical models further maximises observability, reducing the likelihood of false positives and of bugs masking bugs. Testers can instead know that their tests got the right result, for the right reason, providing true assurance of the quality of a system.

Targeted Testing: Reliable, Risk-Based Testing

It is rarely feasible to execute every test case associated with a complex system in a single iteration, and exhaustive testing should instead be reserved for the most high-risk, high-visibility functionality. Fortunately, Model-Based Testing also enables reliable risk-based testing, focusing test creation on critical functionality.

Test Modeller makes reliable, risk-based test design possible. Several coverage profiles can be created for a given model, setting requisite coverage levels for tagged features. Automated test generation will then create the smallest set of tests needed to satisfy the coverage levels by feature, while testing the untagged logic in a model to a specified coverage level:

A coverage profile created for a login screen focuses on testing negative scenarios. The generated test cases will target scenarios where invalid data is entered into the screen. “Happy path” scenarios will be ignored, while logic contained in the surrounding model will be tested to a medium level of coverage.
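
Conceptually, such a profile filters the generated paths by feature tag. The sketch below assumes a hypothetical tag and level format (not Test Modeller's actual profile syntax): paths touching a feature tagged for exhaustive coverage are always kept, while untagged logic is sampled to a lower level:

```python
import random

# Hypothetical coverage profile: negative-scenario features are tested
# exhaustively, everything else to a "medium" (sampled) level.
profile = {"negative": "all", "default": "medium"}
tags = {"invalid": "negative", "locked": "negative"}  # node -> feature tag

def apply_profile(paths, tags, profile, medium_rate=0.5, seed=7):
    """Keep every path touching an 'all'-tagged feature; sample the rest."""
    rng = random.Random(seed)
    selected = []
    for path in paths:
        path_tags = {tags.get(node, "default") for node in path}
        if any(profile.get(tag) == "all" for tag in path_tags):
            selected.append(path)        # high-risk: always test
        elif rng.random() < medium_rate:
            selected.append(path)        # low-risk: medium coverage
    return selected

paths = [
    ["start", "creds", "valid", "dashboard", "end"],
    ["start", "creds", "invalid", "error", "end"],
    ["start", "creds", "invalid", "locked", "end"],
]
for test in apply_profile(paths, tags, profile):
    print(" -> ".join(test))
```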

This granular approach to test coverage enables QA teams to focus testing on high-risk functionality. Testing might for instance focus on the negative paths that can cause the most severe defects in production. Coverage profiles might also be created for targeted regression, focusing on features that failed in the last test run, or on features that have been recently updated.

Model-Based Testing therefore provides the flexibility to dial test coverage up or down dynamically, focusing in detail on given parts of the system. Combined with the improved efficiency of automated test creation, QA can test more functionality in short iterations, while also mitigating the risk of defects as far as possible.

This is particularly true after a change has been made to a complex and vast system, the subject of the next article in this series.

Join Curiosity and Jim Hazen for “In the beginning there was a model: Using requirements models to drive rigorous test automation”

[1] Vanson Bourne and Panaya (2018), survey of over 300 IT decision makers in the UK and US. Cited from Islam Soliman (2018), “AI & automation vs humans: the future of software testing?”, DevOpsOnline (16-11-18). Retrieved from http://www.devopsonline.co.uk/14159-2-ai-and-automation-vs-human-testers/ on 05-12-18.

[Image: Pixabay]