The Curiosity Blog

How Model-Based Testing Fulfils the Promise of AI Testing

Written by Mantas Dvareckas | 31 January 2023

There is no longer any doubt in the industry that test automation is beneficial to development; in fact, more than half of development teams have seen better quality and fewer defects when automating their tests [1].

However, the path to successful test automation is less clear for many teams. Given these challenges, many are exploring the use of Artificial Intelligence (AI) in test automation and quality assurance. In fact, 37% of teams say that they have already adopted the use of AI/ML in software testing, while a further 20% plan to introduce it this year [2].

The promise of AI/ML for test automation is substantial and varied. It includes reduced upfront effort, faster test creation, minimal test maintenance, better test coverage, and more. Realising these promises would in turn unlock greater release velocity, while still ensuring quality.

However, AI is not a silver bullet that will magically integrate into your SDLC and solve QA challenges. Much as test automation brought new challenges and unfulfilled promises of its own, the promises made for AI in testing have not yet come to fruition at many organisations. These organisations should first aim to address the root causes of testing gaps and bottlenecks in their delivery pipelines.

The Promise of Artificial Intelligence

What does Artificial Intelligence actually promise for software testing?

A key promise is reducing the need to involve developers and testers in the most mundane and repetitive tasks, sometimes referred to as “toil”. This is a key area for improvement: over a third of developers report manual testing as the most time-consuming activity within a test cycle [3].

An AI solution could in theory review the current state of tests and recent code changes, deciding which tests to run to maximise testing’s impact based on time and risk. This would enable continuous test automation, the goal for many organisations today.
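
To make that idea concrete, here is a minimal sketch of risk-based test selection in Python. It is an illustration only, not any vendor’s actual algorithm: the test names, durations and risk scores are all hypothetical, and a real AI/ML tool would infer risk from signals such as code churn and failure history rather than hard-coding it.

    from dataclasses import dataclass

    @dataclass
    class Test:
        name: str
        duration_mins: float  # how long the test takes to run
        risk_score: float     # hypothetical 0-1 likelihood the covered code changed

    def select_tests(tests: list[Test], time_budget_mins: float) -> list[Test]:
        """Greedily pick the tests with the best risk-per-minute ratio
        until the time budget is exhausted."""
        ranked = sorted(tests, key=lambda t: t.risk_score / t.duration_mins,
                        reverse=True)
        selected, used = [], 0.0
        for test in ranked:
            if used + test.duration_mins <= time_budget_mins:
                selected.append(test)
                used += test.duration_mins
        return selected

    # Example: three hypothetical tests and a ten-minute window.
    suite = [
        Test("login_flow", 4, 0.9),
        Test("report_export", 6, 0.3),
        Test("checkout", 5, 0.8),
    ]
    for test in select_tests(suite, time_budget_mins=10):
        print(test.name)  # login_flow, checkout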

Individual promises made for AI in testing include:

  • Traceability
  • Self-healing
  • Low maintenance
  • Better targeting of tests
  • Reduction in test volume
  • Increased speed of delivery
  • Reduction in costs
  • Easier defect remediation

All of these benefits combine to enable a complete test automation solution. However, implementing “AI” and unlocking its benefits is not so simple.

The Challenges of Implementing AI in Testing

Setting expectations for the capabilities of AI tools is critical for organisations looking to invest in the technology. Before adoption, organisations must consider the range of challenges associated with implementing any tool that promises AI/ML in testing.

A factor often overlooked with AI-aided tools is that they are typically highly data dependent. To teach AI or unlock actionable insights, an incredible amount of data is required. With incomplete or inaccurate data, you risk a “garbage in, garbage out” scenario. AI might only succeed in helping you test worse, faster.

Smaller organisations and QA teams will often lack the data required to develop capable AI tools. This also extends to test data requirements, where high-quality, compliant test data is crucial for test automation. Yet many organisations still use incomplete and out-of-date production copies.

Furthermore, many teams will struggle with the complexity of implementing an AI solution. Ease of use is frequently overlooked due to the rose-tinted view that organisations currently have of the “magic” of AI. Organisations must evaluate their development and test teams’ capabilities. Ask yourself: does your team have the knowledge and experience required to build and maintain a complex AI solution? And does it have the prerequisite technologies, processes and data in place?

Additionally, organisations must consider the fact that their processes and tools across the SDLC are often disconnected, meaning AI tools can’t collect the data required to tell the whole story. They therefore won’t deliver features such as traceability, or effective test targeting.

AI technologies alone will never be the silver bullet for testing problems that they promise to be. Organisations must therefore consider another approach, one that is easy to implement and has already proven effective. In fact, one such approach offers many of the results promised by AI.

Have you tried Model-Based Test Generation?

With 91% of teams reporting that they have automated less than half of their testing [4], a proven approach to scaling automated testing is clearly needed. Model-based test generation offers such an approach.

Model-based testing does not need to be as complex as its name might suggest. It can instead leverage easy-to-understand, industry-standard BPMN flowcharts to model complex systems. Curiosity’s Test Modeller, for instance, uses visual flows to identify what needs testing, auto-generating the test cases, data and automation scripts needed to run those tests.

Test Modeller does this by putting modelling at the centre of your software delivery efforts. Visual modelling reduces the time and technical knowledge needed to create automated tests, while targeting the test coverage of automated testing. Generating tests becomes as simple as combining reusable flowcharts to map integrated system logic:

[Image: a visual flowchart used in Test Modeller, a model-based testing tool.]
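
Under the hood, this style of generation can be pictured as path analysis over a directed graph. The Python sketch below illustrates the principle only (it is not Test Modeller’s actual engine): a hypothetical login flowchart is modelled as nodes and edges, and every end-to-end path through it becomes a test case.

    def all_paths(graph, start, end, path=None):
        """Depth-first enumeration of every start-to-end path in a flowchart.
        Each complete path corresponds to one generated test case."""
        path = (path or []) + [start]
        if start == end:
            return [path]
        paths = []
        for nxt in graph.get(start, []):
            if nxt not in path:  # skip revisited nodes to keep the example finite
                paths += all_paths(graph, nxt, end, path)
        return paths

    # A toy login flowchart: keys are nodes, values are outgoing edges.
    flow = {
        "open_login": ["enter_credentials"],
        "enter_credentials": ["valid_details", "invalid_details"],
        "valid_details": ["dashboard"],
        "invalid_details": ["error_message"],
        "dashboard": [],
        "error_message": [],
    }

    # Generate one test case per path to each terminal node.
    tests = []
    for end in [node for node, nxt in flow.items() if not nxt]:
        tests += all_paths(flow, "open_login", end)
    for i, path in enumerate(tests, 1):
        print(f"Test {i}: " + " -> ".join(path))

Because the tests are derived from the model, adding a branch to the flowchart and re-running the generator refreshes the whole suite automatically; nobody edits the generated scripts by hand.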

This modelling reduces manual test creation and maintenance, enabling rigorous in-sprint test automation. Making a one-off change in a reusable model regenerates the smallest possible set of test cases based on time and risk, avoiding technical debt and generating automation as new code is developed.

Building traceability between the models and your SDLC moves test generation from in-sprint testing to continuous testing. An extensive set of integrations and exporters builds traceability between Test Modeller’s flowcharts and test cases, user stories, automated tests, and beyond. Traceability analysis can in turn identify changes across interrelated artifacts, updating the central models to regenerate tests.
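
As a simplified sketch of what such traceability analysis might look like, the following uses two hypothetical lookup tables (user story to model subflows, and subflow to generated tests) to find the tests impacted by a changed story. In practice these links live in integrated tooling rather than hand-written dictionaries, but the principle is the same.

    # Hypothetical traceability links: which model subflows each user story
    # touches, and which generated tests exercise each subflow.
    story_to_subflows = {
        "US-101": ["login_subflow"],
        "US-102": ["checkout_subflow", "payment_subflow"],
    }
    subflow_to_tests = {
        "login_subflow": ["test_login_valid", "test_login_invalid"],
        "checkout_subflow": ["test_checkout"],
        "payment_subflow": ["test_checkout", "test_refund"],
    }

    def impacted_tests(changed_story: str) -> set[str]:
        """Walk the traceability links from a changed user story
        to the tests that need regenerating."""
        tests = set()
        for subflow in story_to_subflows.get(changed_story, []):
            tests.update(subflow_to_tests.get(subflow, []))
        return tests

    print(sorted(impacted_tests("US-102")))  # ['test_checkout', 'test_refund']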

In advanced applications, this traceability automates both modelling and targeted test generation. For example, one organisation using Test Modeller automatically analyses artifacts produced by development, combining reusable subflows to generate new tests following each code check-in.

In this way, modelling and automated test generation offers a way to fulfil much of the promise of AI for testing.

Start Automating Today!

Overall, the driving goal of Test Modeller is not to introduce AI into your SDLC for the sake of using a new tool. For automation, its aim is to minimise manual test maintenance, maximise the creation of valuable tests, and equip all tests with “just in time” test data.

Outside of automated testing, the same visual flows work to equip developers with accurate specifications, while fostering close collaboration between the “three amigos” of software delivery.

In these ways, Test Modeller delivers the promises of AI in testing, and more:

  • Traceability
  • Ease of use
  • Improved Collaboration
  • Low maintenance
  • Better targeting of tests
  • Reduction in test volume
  • Increased speed of delivery
  • Reduction in costs
  • Built-in reusability
  • “Just in time” test data
  • Easier defect remediation

Start automating smarter, not harder, today with a free 14-day trial of Test Modeller!

Want to learn more about how modelling can fulfil the promise of AI in testing? See how EVERFI auto-generate new tests following a code check-in, automatically analysing artifacts produced by development to generate end-to-end models and tests.

 
Footnotes:

[1] Sogeti, World Quality Report 2022-23. Retrieved from: https://www.sogeti.com/explore/reports/world-quality-report-2022-23/

[2] GitLab, 2022 Global DevSecOps Survey. Retrieved from: https://about.gitlab.com/developer-survey/

[3] Perfecto, 2022 State of Test Automation. Retrieved from: https://www.perfecto.io/resources/state-test-automation

[4] Ranorex, 2022 State of Software Testing Report. Retrieved from: https://www.ranorex.com/automated-testing-webinars/2022-software-testing/on-demand/