5 Reasons to Model During QA, Part 4/5: Faster QA Reaction Times

Written by Thomas Pryce | 07 August 2019

Welcome to part 4/5 of 5 Reasons to Model During QA! If you have missed any previous instalments, use the following links to see how modelling can:

  1. Identify bugs during the requirements analysis and design phase, where they require far less time and cost to fix;

  2. Drive up testing efficiency, automating the creation of test cases, test data and automated test scripts;

  3. Maximise test coverage and shorten test cycles, focusing QA on the most critical, high-risk functionality.

Model-Based Testing further enables testing to react to fast-changing applications, rapidly updating test suites to validate a change made to the code. This flexibility and resilience are the focus of today’s article, which discusses how modelling accurately forecasts the complexity of a change and automates test maintenance.

Working with Change: Flexibility and Resilience

Software applications today are both massively complex and fast-changing. Short iterations bring code commits on a monthly, weekly, or daily basis, and QA must validate the success of each update.

These fast QA reaction times require an approach that is:

  1. Resilient: Testing must be able to maintain test coverage, identifying and creating all test assets needed to validate a change. This must happen in the same iteration in which the change was made; otherwise, chunks of the code will go untested.

  2. Flexible: QA must be able to maintain test suites at the pace at which applications change, adopting a reactive and flexible stance to test maintenance.

However, most organisations face an undesirable choice between QA flexibility and QA resilience. Their current testing practices mean there is not enough time in an iteration to identify, create and execute every test required to validate a change made to the code.

Test teams therefore cannot fully test a change within the same iteration in which it was made, and code changes in turn risk costly defects in production. Fortunately, there is a way that QA can achieve the resilience and flexibility required to continuously implement change: modelling.

Common Barriers to Continuous Testing

Two broad barriers prevent QA from keeping up with the rate of change: identifying what needs to be tested after a change, and then updating or creating the test assets needed to validate the change.

Identifying what needs (re)testing

A change request today often takes the form of a new user story or ticket sent to developers. These requests enter the bag of disparate and unconnected requirements that make up a system, the same requirements discussed in part one of this series.

The disparate requirements are not formally mapped to one another, so there is no automated way to identify which parts of a system are impacted by a new change request. If a new Gherkin specification is created, for instance, how can BAs, developers and testers reliably assess the impact of one Behaviour-Driven scenario across the multitude of interrelated parts in a system?

[Image: The challenge of change requests]

Identifying the interdependent parts of a system that have been impacted by a change is often guesswork in this scenario, as is assessing the complexity of the change. Low-priority changes can have unforeseen impacts across a complex system, requiring testing and development efforts that are disproportionate to the value of the change.

The responsibility of QA is to identify these problematic and unforeseen consequences of a change. Yet, without formal dependency mapping in the requirements, testers likewise lack any reliable way to identify a change’s impact.

Slow and manual test maintenance

Then there’s the time needed to create or update any tests required to validate a change.

Part two of this series set out the bottlenecks associated with manually creating test cases, test data and automated test scripts. Much of this effort is often repeated after a change, forcing tests to roll over constantly into the next iteration.

Manually created test assets are rarely traceable to the system designs, nor are they typically linked to one another. Testers must therefore analyse existing tests one-by-one to identify the impact of a change on a regression pack. They must then update the test cases, test data and automated tests, keeping all three aligned.

QA teams must additionally create the tests needed to validate new functionality, but there is little time in a sprint for both test maintenance and manual test creation.

Alternatively, invalid tests might go unchecked, piling up in the regression pack. These invalid tests will then flag defects where there are no genuine bugs in the code, while bad test data will destabilise test automation frameworks.

A new approach is instead required, one that identifies the impact of changes made to a system and reflects them efficiently in test suites.

Reactive Test Automation

Model-Based Testing, with the right tools and techniques, introduces the flexibility needed to update test assets within an iteration, as well as the QA resilience to continually test with rigour. The impact of changes can further be forecast in advance, enabling evidence-based software design decisions.

Avoiding time and scope creep

Flowchart modelling first enables you to measure the impact of a change in advance. BAs, developers and testers can rapidly incorporate a change request into the model, or might make the request using the model itself. The paths through the updated model can then be identified automatically using mathematical algorithms.

These paths are equivalent to tests, and the number of paths impacted thereby provides a test-driven measure of complexity. This offers a reliable and standardised way to measure the relative value of a change against the impact of implementing it.
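
To make this concrete, here is a minimal sketch in Python. It treats a flowchart as a directed graph, enumerates the paths through it with a depth-first search, and compares path counts before and after a change; the toy login flow and all node names are illustrative assumptions, not the output of any particular modelling tool.

```python
# A minimal sketch of path analysis over a flowchart model.
# The flowchart is a directed graph: node -> successor nodes.
from typing import Dict, List

Flowchart = Dict[str, List[str]]

def all_paths(model: Flowchart, start: str, end: str) -> List[List[str]]:
    """Enumerate every path from start to end via depth-first search."""
    paths, stack = [], [[start]]
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == end:
            paths.append(path)
            continue
        for succ in model.get(node, []):
            if succ not in path:  # guard against revisiting a node
                stack.append(path + [succ])
    return paths

# A toy login flow, before and after a change request adds a 2FA step.
before: Flowchart = {"start": ["validate"], "validate": ["home", "error"]}
after: Flowchart = {
    "start": ["validate"],
    "validate": ["2fa", "error"],
    "2fa": ["home", "error"],
}

# Each end-to-end path is equivalent to a test; the growth in path
# count gives a test-driven measure of the change's complexity.
old = all_paths(before, "start", "home") + all_paths(before, "start", "error")
new = all_paths(after, "start", "home") + all_paths(after, "start", "error")
print(f"tests before: {len(old)}, tests after: {len(new)}")  # 2 vs 3
```

Here, adding the 2FA step turns two end-to-end paths into three, quantifying the testing effort a change request implies before any code is written.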

This impact analysis can extend beyond individual components, using subflows to create dependency maps of a system. The subflows group lower-level logic within master flows. The impact of a change made to one subprocess is then identifiable in the master flowcharts, as well as downstream in the child models.
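
The dependency mapping itself reduces to a graph walk. The sketch below records which master flows embed which subflows and then walks the map upwards to list every flow a change touches; the flow names and structure are again illustrative assumptions.

```python
# A minimal sketch of impact analysis across subflows.
from typing import Dict, Set

# master flow -> the subflows it embeds (illustrative names)
uses: Dict[str, Set[str]] = {
    "checkout_master": {"payment", "address_lookup"},
    "onboarding_master": {"address_lookup", "identity_check"},
    "payment": {"card_validation"},
}

def impacted_flows(changed: str) -> Set[str]:
    """Find every flow that directly or transitively embeds the
    changed subflow, walking the dependency map upwards."""
    hit = {changed}
    grew = True
    while grew:
        grew = False
        for master, subs in uses.items():
            if master not in hit and subs & hit:
                hit.add(master)
                grew = True
    return hit - {changed}

# A change to one subprocess surfaces in every dependent flow:
print(impacted_flows("card_validation"))  # {'payment', 'checkout_master'}
```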

Automated test maintenance

Modelling also removes the second challenge of change for QA: manual test maintenance.

Parts two and three of this series discussed how test cases, test data and test scripts are all traceable to the models from which they were generated. As those models change, the optimised test suite is regenerated rapidly. Test teams might further create coverage profiles to target the affected logic with a greater degree of rigour, focusing regression on logic impacted by the last code commit.
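
A coverage profile of this kind amounts to a filter over the generated paths. The sketch below reuses the toy login flow from the earlier example and keeps only the tests that traverse logic touched by the last commit; as before, every name is an assumption for illustration.

```python
# A minimal sketch of a coverage profile targeting changed logic.
from typing import List, Set

# Paths previously generated from the model (each path is one test).
paths: List[List[str]] = [
    ["start", "validate", "home"],
    ["start", "validate", "error"],
    ["start", "validate", "2fa", "home"],
    ["start", "validate", "2fa", "error"],
]

changed_nodes: Set[str] = {"2fa"}  # logic touched by the last commit

def regression_profile(tests: List[List[str]], changed: Set[str]) -> List[List[str]]:
    """Keep only the generated tests that exercise changed logic."""
    return [t for t in tests if changed & set(t)]

for test in regression_profile(paths, changed_nodes):
    print(" -> ".join(test))  # only the two 2FA paths are selected
```

Regression effort then scales with the size of the change rather than the size of the regression pack.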

[Image: Reactive test automation, driven by central flowchart models]

QA in this approach becomes an automated comparison of the logic specified in the models with the system reflected in the code. As an organisation’s understanding of the ideal system changes, the models evolve. This auto-updates the rigorous test suite that is tied to those models, continuously testing fast-changing systems.

Regenerating a set of linked test cases, data and scripts is generally far quicker and more reliable than attempting to keep individual assets aligned manually. Modelling therefore provides the QA resilience and QA flexibility needed to deliver fast-changing applications that accurately reflect the latest business requirements.

Join Curiosity and Jim Hazen for “In the beginning there was a model: Using requirements models to drive rigorous test automation”

[Image: Pixabay]