The Curiosity Blog

If testing were a race, data would win every time

Written by Thomas Pryce | 18 January 2021 15:32:08 Z

Okay, so that title doesn’t make complete sense. However, if you read to the end of this article, all will become clear. I’m first going to discuss some of the persistent barriers to in-sprint testing and development. I will then discuss a viable route to delivering rigorously tested systems in short sprints.

The two kingpins in this approach will be data and automation, working in tandem to convert insights about what needs testing into rigorous automated tests. But first, let’s consider why it remains so challenging to design, develop and test in-sprint.

20 years on, silos remain THE challenge in software delivery

As the Agile Manifesto approaches 20 years old, the software delivery lifecycle remains riddled with silos. These silos not only create time-consuming miscommunication, they also amplify manual effort. Each time information moves from one silo to the next, it needs to be converted from one format to another.

These “information hops” delay releases and introduce defects as misinterpretation creeps in at every stage. Let’s now look at each silo in more detail.

Design

From a test and development perspective, gathering requirements in text-based documents and disparate diagrams is simply not fit for purpose. The fragmentary written user stories and documents are far removed from the precise logic that needs developing. Meanwhile, there is typically little or no formal dependency mapping between the text-based formats and static diagrams.

Development

Software designs therefore introduce bugs when translated into source code, in turn creating time-consuming rework. In fact, multiple studies estimate that requirements are responsible for over half of all defects,[i] while further research estimates that developers spend half their time fixing bugs.[ii] Design defects therefore take up a large chunk of the time that should be spent developing new functionality.

Testing

The static nature of requirements further increases manual effort in testing. “Flat” documents and diagrams are not built for automation, and testers are often forced to convert designs manually into test cases, data and scripts.

In addition to wasting time, these manual processes undermine quality. Even a relatively simple system today can require thousands of tests before a release. Faced with informal and incomplete requirements, testers cannot systematically or automatically identify and create the tests that need executing before a release.

Manual test design instead focuses almost exclusively on “happy path” scenarios, over-testing these at the expense of scenarios most likely to cause bugs. Meanwhile, out-of-date and invalid tests pile up, creating test failures that push testing further behind releases.
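To put that scale in perspective, here is a minimal sketch in Python, using purely hypothetical equivalence classes for a simple checkout form, of how quickly candidate scenarios outgrow the handful of happy-path cases a manual pass typically covers. The field names and class counts are illustrative assumptions, not taken from any particular system.

```python
from itertools import product

# Hypothetical equivalence classes for a simple checkout form (illustrative only).
fields = {
    "customer_type":  ["new", "returning", "guest"],
    "payment_method": ["card", "paypal", "voucher", "invoice"],
    "card_status":    ["valid", "expired", "blocked", "not_applicable"],
    "basket_value":   ["zero", "below_threshold", "above_threshold"],
    "delivery":       ["standard", "express", "click_and_collect"],
}

# Every combination of equivalence classes is a candidate scenario.
combinations = list(product(*fields.values()))
print(f"Candidate scenarios: {len(combinations)}")  # 3 * 4 * 4 * 3 * 3 = 432

# A manual "happy path" pass typically covers only a handful of these.
happy_paths = [
    ("returning", "card", "valid", "below_threshold", "standard"),
    ("new", "paypal", "not_applicable", "below_threshold", "express"),
]
print(f"Scenarios left untested: {len(combinations) - len(happy_paths)}")
```

Add one or two more fields, or a few more classes per field, and the count climbs into the thousands, which is exactly why manual selection falls back on a few familiar paths.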

Automation can help – but only if it extends beyond test execution

Automation can, of course, help accelerate many of these manual processes. However, “test automation” to date has focused near-exclusively on one task: executing tests. In so doing, it has introduced a raft of manual processes, while overlooking the key question of quality.

Test automation in many instances has introduced a new silo, along with all the time and effort associated with it. Manual processes introduced by test automation include copious test scripting, as well as overwhelming script maintenance. Meanwhile, the speed and volume of test execution multiplies data provisioning bottlenecks, while out-of-date or inconsistent data leads to time-consuming test failures.

Furthermore, automating test execution does nothing to improve test quality, nor does it help identify which tests to run. Test designers and automation engineers still face the challenge of having more system logic than they could ever test in-sprint.

Prioritising the wrong tests not only wastes additional time in test scripting, it also exposes critical systems to damaging production defects. Test execution automation is therefore a critical component of in-sprint testing, but it is not a solution in itself.

“Data-driven” testing, but not as you know it

Fortunately, a solution is starting to present itself. It lies in data, and in the ways we can apply automation to the data that is now widely available. This opens the door to prioritising tests accurately and automatically in-sprint, generating the tests needed before the next release.

The proliferation of (automated) technologies across DevOps toolchains has led to an associated proliferation of data. What’s more, this data is now being outputted in formats that can be captured and analysed.

If we combine this data with automated analysis and test generation, we can begin populating up-to-date test artefacts into the same tools from which the data was gathered. This then creates a closed feedback loop, collecting and analysing more data to drive rigorous testing in-sprint.

Let’s now look at the components of this “data-driven” approach to in-sprint testing.

Connectivity

The first prerequisite in this approach is connectivity between technologies across the application delivery lifecycle. If disparate tools cannot pass information between one another, silos will persist. There will not be enough data to analyse, nor will it be possible to populate test suites across tools.

Connectivity between technologies is therefore paramount, and any in-sprint testing solution must be built on open technologies. Fortunately, Robotic Process Automation and DevOps orchestration tools can help to rapidly integrate disparate technologies.
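As an illustration only, the sketch below shows the kind of connectivity this implies: harvesting records from several tools into one tool-agnostic envelope. The endpoints, field names and toolchain are assumptions invented for the example, not real APIs or a prescription for any particular stack.

```python
import requests

# Hypothetical endpoints for a team's toolchain (illustrative assumptions only).
SOURCES = {
    "stories":      "https://tracker.example.com/api/stories?sprint=current",
    "test_results": "https://ci.example.com/api/test-runs?branch=main",
    "defects":      "https://tracker.example.com/api/defects?status=open",
}

def harvest(sources: dict) -> list:
    """Pull raw records from each tool and tag them with their origin."""
    baseline = []
    for name, url in sources.items():
        response = requests.get(url, timeout=30)
        response.raise_for_status()
        for record in response.json():
            # A common envelope keeps downstream analysis tool-agnostic.
            baseline.append({"source": name, "payload": record})
    return baseline

if __name__ == "__main__":
    records = harvest(SOURCES)
    print(f"Collected {len(records)} records into the baseline")
```

The important design choice is the common envelope: once every record carries its source and payload in the same shape, the analysis stage never needs to care which tool it came from.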

Baseline data

The data produced by these disparate tools must furthermore be collected, creating a baseline of data to harvest. Analytical tools can then be applied to mine insights from this single source of truth, informing testers of what needs testing in-sprint. These tools include, but are not limited to, AI and ML-based technologies.
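The exact analysis will vary by team. The following is a deliberately simple sketch, with made-up records and an assumed weighting, of how a baseline could be mined to rank what needs testing in-sprint; ML-based scoring could replace the hand-tuned formula without changing the overall flow.

```python
# Made-up baseline records combining data from version control, CI and defect tracking.
baseline = [
    {"component": "payments",  "commits_this_sprint": 14, "recent_failures": 3, "open_defects": 2},
    {"component": "login",     "commits_this_sprint": 2,  "recent_failures": 0, "open_defects": 0},
    {"component": "reporting", "commits_this_sprint": 6,  "recent_failures": 5, "open_defects": 1},
]

def risk_score(record: dict) -> float:
    """Weight recent change and recent failures more heavily than backlog defects (assumed weights)."""
    return (3.0 * record["commits_this_sprint"]
            + 5.0 * record["recent_failures"]
            + 2.0 * record["open_defects"])

# Rank components so the riskiest logic is modelled and tested first this sprint.
for record in sorted(baseline, key=risk_score, reverse=True):
    print(f"{record['component']:<10} risk={risk_score(record):.0f}")
```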

In-sprint test and data generation

At this point, data has been gathered and analysed, indicating what needs testing in-sprint. But, how can we build and execute the tests required to act on this information?

The first component lies in automating test generation, linking this generation to the analysis of baseline data. The second lies in automatically generating and allocating data for the tests that have been created. This can be achieved with tools that find and make data as tests run, providing rich and compliant data for every test on-the-fly.
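By way of illustration, here is a minimal sketch of both steps, using a hypothetical refund flow expressed as a directed graph: every route through the model becomes a test, and matching synthetic data is allocated for each route on-the-fly. The model, node names and data factory are assumptions made for the example only.

```python
from typing import Dict, List

# System logic as a directed graph: each node lists the steps that can follow it.
model: Dict[str, List[str]] = {
    "start":          ["validate_order"],
    "validate_order": ["order_valid", "order_invalid"],
    "order_valid":    ["refund_card", "refund_voucher"],
    "order_invalid":  ["end"],
    "refund_card":    ["end"],
    "refund_voucher": ["end"],
}

def all_paths(node: str, path: List[str]) -> List[List[str]]:
    """Enumerate every route through the model; each route becomes one test."""
    path = path + [node]
    if node == "end":
        return [path]
    routes = []
    for nxt in model.get(node, []):
        routes.extend(all_paths(nxt, path))
    return routes

def data_for(path: List[str]) -> dict:
    """Allocate matching synthetic data on-the-fly: one record per generated test."""
    return {
        "order_id": f"ORD-{abs(hash(tuple(path))) % 10_000:04d}",
        "valid": "order_valid" in path,
        "refund_method": "card" if "refund_card" in path
                         else "voucher" if "refund_voucher" in path
                         else None,
    }

for test in all_paths("start", []):
    print(" -> ".join(test), data_for(test))
```

Scaled up, the same principle applies: the richer the model and the baseline data feeding it, the closer the generated tests and data sit to what actually changed in the sprint.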

Test Modeller

If you have all of these components, you have what Curiosity call Test Modeller. Test Modeller curates data from across the whole application development ecosystem, identifying exactly what needs testing in-sprint. It furthermore builds the tests and data needed to run those tests, using data-driven insights to achieve in-sprint test automation.

Rather than starting from the system requirements and working through a series of silos, Test Modeller analyses data from across the whole development ecosystem. It thereby informs and updates testing as the requirements or environment change, rather than playing a constant game of catch-up. In a nutshell, Test Modeller enables in-sprint test automation.

Get in touch if you’d like to discuss the project further.

Footnotes:

[i] P Mohan, A Udaya Shankar, K JayaSriDevi. “Quality Flaws: Issues and Challenges in Software Development”. Computer Engineering and Intelligent Systems 3, no. 12 (2012): 40-48, 44. www.iiste.org/Journals/index.php/CEIS/article/viewFile/3533/3581 (accessed 30 May 2019). Bender RBT. Requirements Based Testing Process Overview (2009), 2, 16. http://benderrbt.com/Bender-Requirements%20Based%20Testing%20Process%20Overview.pdf. Soren Lauesen and Otto Vinter. “Preventing Requirement Defects: An Experiment in Process Improvement”. Requirements Engineering 6 (2001): 37-50, 38. http://www.itu.dk/people/slauesen/Papers/PrevDefectsREJ.pdf

[ii] Tom Britton, Lisa Jeng, Graham Carver, Paul Cheak, Tomer Katzenellenbogen. “Reversible Debugging Software”. Report, created for the Judge Business School, University of Cambridge (2013), 5. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.370.9611&rep=rep1&type=pdf