The Curiosity Blog

Model-Based Testing Can Lead the Way in IT Change

Written by Rich Jordan | 08 August 2023

IT change remains a persistent struggle for most organisations today. Software teams are aware of the need to move faster and be more agile; yet they are dealing with growing complexity and the weight of unknowns within their current IT architecture estate. The misinterpretation of Agile principles has fostered a culture in which documentation (of which test design is a part) has fallen by the wayside. Fortunately, for teams who appreciate that software engineering is a complex, emergent discipline, there are techniques for turning this situation around.  

Testing is a key part of the solution. Testers can help uncover and formally document the knowledge needed to: 

  1. Develop accurately and iteratively;  
  2. Understand the impact of change; 
  3. Apply automation and AI across the SDLC. 

Model-Based Testing (MBT) is becoming more popular with high-performing teams as an approach to delivering the outcomes that the business demands of testing teams.  

This article considers two different approaches to Model-Based Testing, arguing that the right approach can achieve the quality at speed originally sought by “Agile” methodologies. 

Not all model-based testing tools are created equal 

Like many popular capabilities within testing, the definition of Model-Based Testing (MBT) has been blurred by different approaches being given the same name. This is similar to the many variations of TDD and BDD. 

This article aims to explain the difference between the two interpretations of MBT that I am aware of, though I’m sure there are or will be further variations as MBT becomes more broadly used. 

The two broad interpretations of MBT are: 

  1. Model-Based Testing where the primary focus is to design and visualize the system being developed  
  2. Model-Based Testing where the primary focus is to create test automation modules 

Let’s now consider the scope and value of both approaches to Model-Based Testing. 

Model-based testing for system design and visualisation 

The first understanding of model-based testing aims to design/visualize the system being developed. The models are used subsequently to derive test cases and automation. These tests are based upon system rules that are embedded in a living specification: 

A model visualizes and identifies routes by which data and users can flow through a system. These “paths” are equivalent to auto-generated test cases. 
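To make the idea concrete, here is a minimal sketch of how paths through a model can become test cases. The login flow, node names and helper function below are hypothetical illustrations, not the output of any particular MBT tool:

```python
from typing import Dict, List

# A simplified login journey: each node is a step in the user/data flow,
# and each edge is a possible transition between steps.
FLOW: Dict[str, List[str]] = {
    "Start": ["Enter credentials"],
    "Enter credentials": ["Validate credentials"],
    "Validate credentials": ["Dashboard", "Show error"],   # valid vs invalid input
    "Show error": ["Enter credentials", "Locked out"],      # retry or lockout
    "Dashboard": ["End"],
    "Locked out": ["End"],
    "End": [],
}

def all_paths(node: str, path: List[str]) -> List[List[str]]:
    """Enumerate every simple path from `node` to the End node."""
    path = path + [node]
    if node == "End":
        return [path]
    paths: List[List[str]] = []
    for nxt in FLOW[node]:
        if nxt not in path:  # skip cycles, e.g. endless retry loops
            paths.extend(all_paths(nxt, path))
    return paths

# Each distinct path through the model is, in effect, one generated test case.
for i, test_case in enumerate(all_paths("Start", []), start=1):
    print(f"Test case {i}: " + " -> ".join(test_case))
```

Running this over the example flow yields two end-to-end test cases (the successful login and the lockout journey); richer models simply yield more paths.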

This approach offers a range of benefits for software delivery: 

  • Improved communication and collaboration among stakeholders 
  • Better understanding and visualization of system design and behaviour 
  • Increased efficiency and reduced errors (bugs) in design and development 
  • Improved system testing and verification 
  • Facilitation of model-driven development processes 
  • Ability to perform simulations and analyze system behaviour 
  • Improved system maintenance and evolution 
  • Test case coverage can be optimized using risk-based algorithms, reducing the relative effort needed to find and fix bugs  

In this way, it helps resolve some of the core barriers to delivering increasingly complex systems at speed. 

Model-based testing for test automation generation 

The second approach to model-based testing starts later in the development lifecycle and has a narrower focus. 

It aims primarily to create test automation modules, which can then be pieced together and executed against a system under test. In the narrowest applications of this approach, system logic and requirements are not modelled, nor are equivalence partitions. Instead, models effectively do the work of copy/pasting code into scripts. They chain together reusable automation libraries, with a model representing one or more automated test cases. 
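By way of contrast, here is an equally minimal sketch of this second interpretation, where the “model” is little more than an ordered chain of reusable automation modules. The module names and the simple runner are assumptions for illustration; a real suite would call a UI driver such as Selenium or Playwright rather than print statements:

```python
# Hypothetical reusable automation modules.
def open_login_page():
    print("open the login page")

def enter_credentials(user: str, password: str):
    print(f"enter credentials for {user}")

def submit_form():
    print("submit the form")

def assert_dashboard_visible():
    print("assert that the dashboard is visible")

# The "model": an ordered chain of reusable modules plus their arguments.
LOGIN_HAPPY_PATH = [
    (open_login_page, {}),
    (enter_credentials, {"user": "demo", "password": "secret"}),
    (submit_form, {}),
    (assert_dashboard_visible, {}),
]

def run(model):
    """Execute the chained modules in order - one model, one automated test case."""
    for step, kwargs in model:
        step(**kwargs)

run(LOGIN_HAPPY_PATH)
```

Note that nothing in this chain captures the system’s logic; the model simply sequences automation code.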

Overall, this approach offers several benefits for scaling test automation: 

  • The ability to create modular components for Test Automation. 
  • Improved accessibility for non-technical testers. Models create an abstraction layer above the automation code, enabling test script creation without coding skills.  
  • Increased volume of tests executed (although this is not always targeted and can create an analysis burden after execution). 

Modelling should not exacerbate test automation challenges 

However, this second, narrower approach does not offer the full benefits of model-based testing.  

It is not rooted directly in the system logic and so does not guarantee test optimization or coverage. It also starts too late to improve software requirements, nor does it help retain knowledge of complex systems, improve collaboration, or remove silos in software delivery. 

In fact, creating “low-code” or “no-code” style models solely to generate automation code can inherit many of the issues of traditional test automation approaches: 

  • Test Coverage: The automation models might be very linear and “happy path” focused. Automated tests then may not cover all necessary scenarios and corner cases, leading to gaps in test coverage. 
  • Maintaining Test Scripts: Keeping test scripts up-to-date as the application changes can be difficult, though a good model-based solution should accelerate this by regenerating tests as the model changes. 
  • False Positives/Negatives: Automated tests may produce incorrect results, leading to false negatives or false positives. 
  • Test Environment Setup: Setting up the test environment can be time-consuming and complex. 
  • Test Data Management: Maintaining accurate and up-to-date test data is crucial for reliable automated testing. 
  • Flaky Tests: Flaky tests, which produce inconsistent results, can be difficult to detect and resolve.  
  • Test Execution Time: Automated tests can take a long time to run, especially as the number of tests increases. 
  • Debugging: Debugging automated tests can be challenging, especially when the tests are complex or the error is not immediately apparent. 

Can Artificial Intelligence help with these challenges? AI can broaden the amount of testing performed. However, it isn’t a shortcut for resolving these problems. On its own, the increased test volume can create an analysis overload, while many of the findings turn out to be superficial observations. 

Modelling to shift left - and right 

To unlock the full benefits of model-based testing, sufficient thought must instead be put into test design and the overall test approach.  

Modelling system requirements and logic helps remove challenges in test automation, while offering benefits across software design and development. 

The system models help teams collaboratively refine requirements, while linking test design to the requirements and code. The generated tests can therefore be optimized for coverage, while test data and environments can additionally be spun up from the models.
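As an illustration of what “optimized for coverage” can mean in practice, the sketch below greedily picks the smallest set of generated paths that still exercises every edge in the model at least once. It assumes the path-enumeration sketch shown earlier and is one simple coverage heuristic, not the algorithm of any specific tool:

```python
from typing import List, Set, Tuple

def edges_of(path: List[str]) -> Set[Tuple[str, str]]:
    """The set of transitions (edges) a single path exercises."""
    return {(a, b) for a, b in zip(path, path[1:])}

def edge_coverage_subset(paths: List[List[str]]) -> List[List[str]]:
    """Greedily select paths until every edge seen in any path is covered once."""
    remaining = set().union(*(edges_of(p) for p in paths))
    selected: List[List[str]] = []
    while remaining:
        # Pick the path that covers the most not-yet-covered edges.
        best = max(paths, key=lambda p: len(edges_of(p) & remaining))
        gained = edges_of(best) & remaining
        if not gained:  # nothing left to gain; stop
            break
        selected.append(best)
        remaining -= gained
    return selected

# e.g. optimised = edge_coverage_subset(all_paths("Start", []))
```

Swapping in a risk-weighted scoring function instead of raw edge counts is one way such selection can be steered towards the areas of the system that matter most.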

This collaborative, “shift left” approach to modelling starts far earlier in the development cycle. It captures data and knowledge from across tools and teams, exposing it in a way that avoids technical debt, builds accurate requirements, and efficiently manages growing complexity.  

At the same time, the act of documentation drives accurate development and continuous test generation. The documentation therefore unlocks the very “Agile” methods that have historically led organisations to ditch documentation in the first place.  

Modelling provides a “one input, many outputs” approach, in which the act of modelling generates, maintains and links the different artifacts needed for rapid development and testing. 
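A toy illustration of that “one input, many outputs” idea: from a single path through the model, the sketch below emits both a readable Gherkin-style scenario and a pytest-style stub. The output formats here are illustrative assumptions only, not a prescribed artifact set:

```python
def to_gherkin(path):
    """Render one model path as a readable Gherkin-style scenario."""
    steps = path[1:-1]  # drop the Start/End markers
    lines = [f"Scenario: {' then '.join(steps)}"]
    lines += [f"  When the user reaches '{step}'" for step in steps]
    return "\n".join(lines)

def to_pytest_stub(path, index):
    """Render the same path as a pytest-style stub for automation engineers."""
    steps = path[1:-1]
    name = "_".join(s.lower().replace(" ", "_") for s in steps)
    body = "\n".join(f"    # step: {step}" for step in steps)
    return f"def test_{index}_{name}():\n{body}\n    assert True  # TODO: real assertions"

path = ["Start", "Enter credentials", "Validate credentials", "Dashboard", "End"]
print(to_gherkin(path))
print(to_pytest_stub(path, 1))
```

Because both artifacts come from the same model, a change to the model can regenerate them together, keeping documentation and automation in step.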

How helpful is industry guidance for test design? 

Many organizations will seek guidance when it comes to test tools. This might be out of choice, or due to organizational purchasing policies. You are likely familiar with different types of vendor comparisons.  

However, test design is rarely a category by which tools are compared and recommended, with the exception of some commendable research. Reviews, analysis and advice instead tend to focus on automation execution tools, test management tools and service providers. 

This is a problem, as test design is a cornerstone of any organization’s success in software testing, and of software delivery overall. It is therefore surprising that it lacks the guidance and support that you might expect. 

When you are reviewing your test approach, ensure that test design is front and center. Hopefully the points in this article go some way to helping you identify the kinds of problems you are trying to solve, and will motivate you to consider model-based testing tools that can solve them.

Want to explore how model-based testing can unblock your software testing and delivery? Book a meeting to speak with a member of the Curiosity team.