The Curiosity Blog

Artificial Intelligence Used for Software Testing, Needs Testing?

Written by Mantas Dvareckas | 23 March 2021

Artificial Intelligence (AI) and Machine Learning (ML) solutions for quality assurance are growing increasingly popular. Seen as the “next big thing”, AI/ML have become buzzwords in the industry. Many organisations have already begun implementing AI frameworks into their delivery lifecycles, and many are exploring the possibilities of using AI in the future. In fact, 88% of respondents to the 2020/21 World Quality Report stated that AI is now the strongest growth area of their test activities [1].  

It’s easy to see why AI/ML tools are in such high demand. The promise of reduced test maintenance, complete test automation and fast test creation is hard to pass on. The thought of AI magically solving every testing problem an organisation might face makes AI tools an attractive option. This is again reflected in the World Quality Report, in which 86% of respondents stated that AI is now a key criterion when selecting new QA solutions, products and tools [1].

AI might be seen as the future of quality assurance; however, setting realistic expectations for the capabilities of AI tools is critical for organisations looking to invest in the technology. Let’s first consider the challenges associated with adopting AI in testing, before discussing a solution.

The Challenge of Using AI for Quality Assurance

Implementing AI tools isn’t as easy as pressing a few buttons and letting the technology do the work. Developing and training AI is a complex task that requires substantial investment and resources. Additionally, there are many factors to consider when developing AI tools that organisations often overlook.

Primarily, a factor often overlooked with AI tools is that they are highly data dependent. Training an AI requires an enormous amount of data; without it, the tools are useless. Smaller organisations and QA teams often lack the data required to develop capable AI tools. Furthermore, the processes and tools used in the development lifecycle are often disconnected, meaning AI tools can’t collect the data needed to tell the whole story. Applying AI in this situation risks a “garbage in, garbage out” scenario.
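To see how sharply data quality and quantity affect an ML model, consider the following minimal sketch. It trains the same classifier twice, once on plentiful clean data and once on scarce, partly mislabelled data. The dataset and numbers are synthetic and purely illustrative; they are not drawn from any specific testing tool.

```python
# A minimal sketch of the "garbage in, garbage out" risk: the same model,
# trained once on plentiful clean data and once on scarce noisy data.
# All data here is synthetic and illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def accuracy_with(n_samples, label_noise):
    # Synthetic "test outcome" data: the features could stand in for
    # code-change metrics, the label for pass/fail. flip_y injects
    # mislabelled records to simulate dirty data.
    X, y = make_classification(n_samples=n_samples, n_features=10,
                               flip_y=label_noise, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return model.score(X_test, y_test)

print("Plentiful, clean data:", accuracy_with(5000, 0.0))
print("Scarce, noisy data:  ", accuracy_with(50, 0.4))
```

On a typical run, the second score is dramatically worse. The model is identical; only the data has changed.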

Secondly, AI tools are often used to identify ‘how’ we can test, but less often address the harder question of what we should test before the next release. For instance, an AI tool might convert a web page into a set of test artefacts, but that doesn’t tell you what needs testing before the next release.

In other instances, using AI simply doesn’t make sense for the use case, as organisations could rely on plain rule-based logic to do the same work more effectively and efficiently.
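As a point of contrast, here is a minimal sketch of rule-based test selection. The file paths and the change-to-test mapping are hypothetical; the point is simply that a deterministic lookup can solve this class of problem with no trained model at all.

```python
# A minimal sketch of rule-based test selection, where plain deterministic
# logic is enough and an ML model would add cost without benefit.
# The module-to-test mapping and file paths are hypothetical.
CHANGE_RULES = {
    "src/payments/":  ["tests/test_payments.py", "tests/test_invoicing.py"],
    "src/auth/":      ["tests/test_login.py"],
    "db/migrations/": ["tests/test_schema.py"],
}

def tests_for(changed_files):
    """Return the tests to run for a set of changed files."""
    selected = set()
    for path in changed_files:
        for prefix, tests in CHANGE_RULES.items():
            if path.startswith(prefix):
                selected.update(tests)
    return sorted(selected)

print(tests_for(["src/payments/refund.py", "db/migrations/0042_add_index.sql"]))
```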

Lastly, test data is often overlooked in AI-driven approaches to testing, yet it is crucial both for effective test automation and for the AI tools themselves.

AI/ML technologies are rarely complete solutions to testing problems. Instead, they should be applied as one tool within a larger solution. However, this leaves us with a key question: What does this larger solution do, and how does AI/ML feature within it?

Complete Test and Data Automation

Curiosity’s Test Modeller leverages data from across the whole application delivery cycle to enable complete test automation in-sprint. This includes cutting-edge data analysis as one tool within a complete solution for prioritising and generating tests that matter before the next release. 

Test Modeller collects and analyses data from across DevOps pipelines, identifying and creating the tests that need running in-sprint. This comprehensive DevOps data analysis combines with automation far beyond test execution, including both test script generation and on-the-fly test data allocation. This way, Test Modeller exposes the impact of changing user stories and system changes, prioritising and generating the tests that will have the greatest impact before the next release.
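To give a flavour of change-impact prioritisation in general, the following sketch scores tests by how many recently changed components they cover, breaking ties by recent failure counts. The weighting and the data structures are illustrative assumptions, not Test Modeller’s actual algorithm.

```python
# A hedged sketch of change-impact test prioritisation. The component
# names, suite and scoring rule are invented for illustration.
from dataclasses import dataclass

@dataclass
class TestRecord:
    name: str
    covers: set           # components the test exercises
    recent_failures: int  # failures observed in recent runs

changed = {"checkout", "pricing"}  # components touched this sprint

suite = [
    TestRecord("test_checkout_happy_path", {"checkout"}, 2),
    TestRecord("test_pricing_discounts", {"pricing", "catalog"}, 0),
    TestRecord("test_user_profile", {"profile"}, 1),
]

def impact_score(t):
    # Tests touching changed components rank first; past failures break ties.
    return (len(t.covers & changed), t.recent_failures)

for t in sorted(suite, key=impact_score, reverse=True):
    print(t.name, impact_score(t))
```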

Test Modeller in turn embeds AI/ML technologies within an approach to in-sprint test automation. This approach is built on the following components: 

  1. Connect: Test Modeller connects disparate technologies from across the development lifecycle, ensuring that there is sufficient data to identify and generate in-sprint tests. The Curiosity Test Modeller leverages a fully extendable DevOps integration engine to connect disparate tools. This gathers the data needed to inform in-sprint test generation, avoiding a “garbage in, garbage out” situation when adopting AI/ML technologies in testing. 
  2. Baseline: The rich data gathered from across DevOps toolchains feeds a “Baseline” of real-time data. A Baseline aggregates, analyses and converts observations into actionable insights. It exposes the latest changes in requirements, systems, environments, and user behaviours, informing testers of what needs testing in-sprint. The analysis might leverage AI/ML-based technologies where appropriate, but these are one tool in a robust toolbox. 
  3. In-Sprint: Test Modeller not only identifies what needs testing, it also generates the test cases and automation scripts needed to run those tests. In other words, it doesn’t just tell you “how” to test; it tells you “what” to test in-sprint, and provides accelerators for building those tests in short iterations. 
  4. Pathfinder: Lastly, Test Modeller provides the data needed to run the in-sprint tests. Unlike outdated approaches to test data management, this data is served up “just in time” as tests are generated and run. Test Data Automation provides this on-the-fly data resolution (see the sketch after this list).
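The following is a minimal, hypothetical sketch of what “just in time” data allocation can look like: each test requests data matching its criteria at run time, and the allocator either finds a matching record or makes a fresh one on the fly. The function name and in-memory data source are invented for illustration; this is not Test Data Automation’s actual API.

```python
# A minimal "find or make" sketch of just-in-time test data allocation.
# The data source, fields and allocation rule are all hypothetical.
import itertools

_id_sequence = itertools.count(1000)

EXISTING_ACCOUNTS = [
    {"id": 1, "status": "active", "balance": 250.0},
    {"id": 2, "status": "closed", "balance": 0.0},
]

def allocate_account(status, min_balance=0.0):
    """Find a record matching the test's criteria, or synthesise one."""
    for account in EXISTING_ACCOUNTS:
        if account["status"] == status and account["balance"] >= min_balance:
            return account
    # No match: make fresh data on the fly instead of failing the test.
    fresh = {"id": next(_id_sequence), "status": status, "balance": min_balance}
    EXISTING_ACCOUNTS.append(fresh)
    return fresh

print(allocate_account("active", min_balance=100.0))  # found
print(allocate_account("suspended"))                  # made on the fly
```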

Shifting Focus to In-Sprint Testing

In short, Test Modeller creates central models to auto-generate test scripts for over 100 tools, complete with on-the-fly test data. This in-sprint automation might apply AI/ML to identify tests where appropriate, while in other scenarios alternative coverage algorithms are a better fit for the available data inputs.
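To make “coverage algorithm” concrete, here is a short illustrative sketch, invented for this article rather than taken from Test Modeller’s own generators: it enumerates every path through a simple flowchart-style model, with each complete path becoming a candidate test case.

```python
# A hedged sketch of model-based test generation with all-paths coverage.
# The model and node names are invented for illustration.
MODEL = {  # directed graph: node -> possible next steps
    "start":          ["enter_details"],
    "enter_details":  ["valid_submit", "invalid_submit"],
    "valid_submit":   ["confirmation"],
    "invalid_submit": ["error_message"],
    "confirmation":   [],
    "error_message":  [],
}

def all_paths(node="start", path=None):
    """Depth-first enumeration of every start-to-end path in the model."""
    path = (path or []) + [node]
    if not MODEL[node]:  # terminal node: one complete test case
        yield path
    for nxt in MODEL[node]:
        yield from all_paths(nxt, path)

for i, path in enumerate(all_paths(), 1):
    print(f"Test case {i}: " + " -> ".join(path))
```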

Overall, the driving goal of Test Modeller is not to use AI/ML for the sake of using AI/ML. The driving goal is to minimise manual test maintenance, maximise the creation of new tests where they are needed, and equip all tests with "just in time" test data.


Footnote: 

[1] Capgemini (2020) World Quality Report 2020/21. https://www.capgemini.com/research/world-quality-report-wqr-20-21/