The Curiosity Blog

Evolving or Devolving? A Deep Dive into AI's Impact on Testing

Written by James Walker | 29 August 2023

Since the initial launch of ChatGPT, interest in AI has exploded across almost every industry sector. Its unique ability to solve problems by guessing one word at a time has powered a wave of new applications and solutions to problems previously deemed impossible to solve. Software testing is one such field, with an abundance of interest in AI, specifically in relation to automating the testing process.

The future role of testers in light of these technological advancements has become a focal point. But while AI-driven testing can offer efficiencies, there's an important question we must ask: is it moving the testing industry forward, or pushing us backwards?

Early Applications of Generative AI to Testing

One of the earliest applications of ChatGPT to testing lay in creating test cases from software requirements and, subsequently, in crafting automation code. A simple prompt for generating test cases can be entered as follows:

“Give me test cases for this feature: <requirement>”.

Here's an illustrative prompt and response:

“Give me test cases for this feature: A user should be able to register on the platform using their email address. After successful registration, a verification email should be sent to the registered email address. The user will only gain full access to the platform after verifying their email.”
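ChatGPT typically responds with a structured list of test cases. The exact output varies between runs, but it will look something like the following (an illustrative sketch, not a verbatim response):

1. Verify that a user can register using a valid email address.
2. Verify that a verification email is sent to the registered address after successful registration.
3. Verify that the user cannot gain full access to the platform before verifying their email.
4. Verify that the user gains full access after completing email verification.
5. Verify that registration is rejected for an invalid email format.
6. Verify that registration is rejected for an email address that is already registered.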

 

So far, we've seen the emergence of tools that convert user requirements into similarly styled prompts, populating the resulting test cases directly into testing tools and processes.

One example is Jira-based apps, which take a user story and populate test case management tools (like Xray or Zephyr) with generated test cases.
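Under the hood, an app like this can push the generated tests into Jira via its REST API. Below is a minimal sketch, assuming Jira Cloud basic authentication and an Xray-style 'Test' issue type; the instance URL, credentials and project key are placeholders, not any specific vendor's implementation:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class CreateGeneratedTest {
    public static void main(String[] args) throws Exception {
        // Placeholder credentials: Jira Cloud uses "email:api-token" basic auth.
        String auth = Base64.getEncoder()
                .encodeToString("user@example.com:api-token".getBytes());

        // The "Test" issue type is added to Jira by apps such as Xray.
        String json = """
                {"fields": {
                  "project": {"key": "DEMO"},
                  "summary": "Verify a verification email is sent after registration",
                  "description": "Test case generated from the user story.",
                  "issuetype": {"name": "Test"}
                }}""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://your-domain.atlassian.net/rest/api/2/issue"))
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();

        // A 201 response means the test case was created in the project.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```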

Some providers have pushed the envelope further, using generative AI to produce automation page objects and automated test cases in a selected language. Here's an illustrative prompt and response for generating Java Selenium tests:

"Create java selenium test cases for www.google.com" 

 

The Black Box of Generative AI 

This approach seems ground-breaking at first glance, and it truly is. The ability to take an unstructured piece of text and turn it into a structured test case without any human intervention is undeniably impressive.  

The problem is that this approach essentially operates as a black box: It generates results without any transparent understanding or control over its inner workings.  

By contrast, a good tester will apply SME knowledge, calculate a sufficient level of coverage to focus on areas of risk, and present the results in a way that gives confidence to stakeholders. A principal disadvantage of generative AI for test generation is that it lacks these concepts, providing little insight into the test coverage or testing methodologies applied.

When generative AI formulates test cases, it does so without any comprehension of the requirement's context, and without contemplating the various scenarios and edge cases that a human tester would consider. This can leave gaps in test coverage, substantially affecting the software's quality and reliability.

Essentially, we are delegating the task of test generation to a black-box machine, with no foundational methodology to guide the creation of tests. 

What is the point of testing?  

Let’s take a step back and think about the purpose of testing. Testing is more than just a step in the software development lifecycle. Testing is a mechanism for providing confidence to stakeholders that the software will function as expected, that it's reliable, and that it fulfils the initial software requirements.

At Curiosity, we have a preference for the term 'Quality', as confidence can be achieved through various methods: Manual testing, automated testing, and, now, AI-based test generation. 

Does Generative AI-based testing fulfil the need to provide confidence to stakeholders? Not necessarily. There are numerous facets that AI-generated tests might neglect due to their lack of understanding of testing principles.

It's crucial to remember that, while interacting with ChatGPT is intriguing, it is essentially making educated guesses, one word at a time. The current tools and processes utilising AI fall short in this vital respect, risking a regression in the industry. 

The Case for Visual Models and Model-Based Testing 

When we first observed approaches that took a prompt and generated test cases, we posed a question: is it necessary for AI to generate tests?  

There are many long-established testing methodologies and processes that are widely recognised across the industry (TDD, BDD, and beyond). Might we, therefore, only need AI to structure our resources in a way that enables us to capitalise on this robust set of existing testing methods?

One promising approach lies in visual models and model-based testing. Visual models allow us to map out the system under test and identify various paths and scenarios. These visual representations can help ensure more comprehensive test coverage. 

Model-based testing takes this a step further. It leverages these visual models to generate test cases systematically. With a clear testing methodology in place, it ensures that different scenarios are considered and tested, including edge cases, which an AI might overlook.

A visual model automatically identifies a set of coverage-optimised tests (paths) through an application. 
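To make the idea concrete, here's a minimal sketch of model-based generation: the model is a directed graph of application states, and every distinct path from the entry state to the exit becomes a test case. The registration flows below are illustrative, and real tools apply far richer coverage algorithms (all edges, pairwise, risk-weighted paths):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class PathGenerator {
    // The model: each state maps to the states reachable from it (illustrative flows).
    static final Map<String, List<String>> MODEL = Map.of(
            "Register", List.of("Send Verification Email", "Show Invalid Email Error"),
            "Send Verification Email", List.of("Verify Email", "Link Expires"),
            "Show Invalid Email Error", List.of("End"),
            "Verify Email", List.of("Full Access"),
            "Link Expires", List.of("End"),
            "Full Access", List.of("End"));

    public static void main(String[] args) {
        walk("Register", new ArrayList<>(List.of("Register")));
    }

    // Depth-first walk: every complete path through the model becomes a test case.
    static void walk(String state, List<String> path) {
        if (state.equals("End")) {
            System.out.println("Test case: " + String.join(" -> ", path));
            return;
        }
        for (String next : MODEL.getOrDefault(state, List.of())) {
            List<String> extended = new ArrayList<>(path);
            extended.add(next);
            walk(next, extended);
        }
    }
}
```

Running this prints three test cases, one per path, including the expiry and invalid-email edge cases that a prompt-only approach might miss.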

Moreover, the transparency and interpretability of visual models and model-based testing stand in stark contrast to the black box of generative AI. The testing process becomes more predictable, reliable, and above all, provides the much-needed confidence that our software has been tested thoroughly. 

One incredibly promising avenue combines Generative AI with visual models and model-based testing. This uses Generative AI to create visual models, which users can then refine, aided by the self-critique functionality offered by Generative AI:

Generative AI provides an accelerator for building visual models, which keep humans in the loop during optimised test generation. 

This approach encapsulates SME knowledge in the models, which can then feed into the model-based testing pipeline to auto-generate coverage-focused tests, populate test case management tools, enrich external databases, and generate test automation scripts.
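As a sketch of how the pieces could fit together, the snippet below asks a language model to express a requirement as graph edges, which can then be parsed into the same kind of model used above and reviewed by an SME. callLlm is a hypothetical stand-in for whichever LLM client you use:

```java
import java.util.ArrayList;
import java.util.List;

public class RequirementToModel {
    public static void main(String[] args) {
        String requirement = "A user registers with an email address and must verify it "
                + "before gaining full access to the platform.";

        // Ask the model for a machine-readable flowchart, one edge per line.
        String prompt = "Convert this requirement into a flowchart. "
                + "Output one edge per line in the form 'State -> State':\n" + requirement;
        String response = callLlm(prompt);

        // Parse the edges into a model that a human can review, refine and critique.
        List<String[]> edges = new ArrayList<>();
        for (String line : response.split("\n")) {
            String[] parts = line.split("->");
            if (parts.length == 2) {
                edges.add(new String[] {parts[0].trim(), parts[1].trim()});
            }
        }
        edges.forEach(e -> System.out.println(e[0] + " -> " + e[1]));
    }

    // Hypothetical: replace with a real call to your LLM provider's API.
    static String callLlm(String prompt) {
        return "Register -> Send Verification Email\n"
                + "Send Verification Email -> Verify Email\n"
                + "Verify Email -> Full Access";
    }
}
```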

At Curiosity, we have produced ModelGPT, a tool which takes text-based requirements and uses generative AI to create visual models, which can then feed into model-based testing and all its benefits.

Evolution, Not Devolution 

While AI undeniably has a place in the future of software testing, it's vital to recognise its current limitations and work towards overcoming them. We cannot afford to allow our testing methodologies to devolve due to a lack of comprehensive understanding and methodological rigour.

Visual models and model-based testing offer an effective way to ensure thorough testing, providing the confidence we need in our software while reaping all the benefits that Generative AI has to offer. 

Want to work with Curiosity to drive testing forward using generative AI? Book a time to talk to us today!