The Curiosity Blog

10 Common Concerns About Model-Based Testing

Written by James Walker | 12 November 2020

We rarely post ‘product’ articles here at Curiosity, preferring instead to draw on our team’s broader thinking and expertise. This article is no different in that respect, though it does discuss specific features of Test Modeller.

The following article is intended primarily as a guide to common concerns about model-based testing, discussing the ways in which we’ve responded to them in Test Modeller. All of the features discussed draw on Curiosity’s decades of collective experience in building model-based solutions. Each has been designed to remedy a common challenge in implementing model-based test design, shortening its time to value while amplifying its benefits.

The features discussed in this article will, I hope, give you some food for thought about the role of modelling in software delivery, while also highlighting some technologies that we think are pretty new and exciting.

The article then concludes by asking a big question: Is it time to leave the word ‘model’ behind, embracing language more befitting of today’s technologies? We’d love to hear from you on this question – please take a minute to vote in our poll, letting us know what you think!

1.    Accelerators for in-sprint modelling

Concern #1: “There’s no way we have the time or expertise needed to model our systems.”

This is one of the most common concerns about model-based testing, and it makes a good deal of sense: how can you build models that are detailed enough to generate tests for complex systems, yet build them quickly enough to test in-sprint?

With this in mind, Curiosity have designed Test Modeller to be an open technology, equipped with accelerators that shorten modelling time using existing assets in your software development lifecycle. You can import BPMN diagrams, Gherkin specifications, manual test cases, and much more. You can also use a UI Scanner and dictionary of modelling rules to rapidly build models, further reducing time to value.

2.    Seamless test automation

Concern #2: “Generated tests will never match the flexibility or sophistication of scripting.”

Often teams worry that generating tests will limit their flexibility, undermining their ability to test complex systems in detail. We’ve therefore built Test Modeller as an accelerator to scripted test automation. It works seamlessly with coded frameworks, synchronising code from homegrown, open source and commercial frameworks. Code templates further generate bespoke scripts into code repositories.

This approach combines the simplicity of “low code” automation with the granularity of scripting. It allows coders to focus on the tricky bits of automation and non-coders to generate accurate tests for complex systems, all without requiring fiddly configuration.
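To make this combination concrete, here is a minimal sketch of the kind of hand-written page object a generated test might call into, assuming a Python/Selenium framework. The class, locators and URL are hypothetical illustrations, not Test Modeller’s actual output:

```python
# A hypothetical page object of the kind a model-based generator can
# synchronise with; the names and locators are illustrative only.
from selenium import webdriver
from selenium.webdriver.common.by import By


class LoginPage:
    """Hand-written wrapper that keeps the tricky automation in one place."""

    def __init__(self, driver):
        self.driver = driver

    def enter_username(self, username):
        self.driver.find_element(By.ID, "username").send_keys(username)

    def enter_password(self, password):
        self.driver.find_element(By.ID, "password").send_keys(password)

    def submit(self):
        self.driver.find_element(By.ID, "login").click()


# A generated test is then just one path through the model, expressed
# as calls into the page object above.
def test_valid_login():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/login")  # placeholder URL
        page = LoginPage(driver)
        page.enter_username("demo_user")
        page.enter_password("demo_password")
        page.submit()
        assert "dashboard" in driver.current_url
    finally:
        driver.quit()
```

In this division of labour, coders maintain the page object, while model-generated tests exercise paths through it.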

3.    Simple models for testing complex systems

Concern #3: “My systems are too complex for visual models. It will become too cumbersome or too fiddly to model their intricate logic, and we will spend more time than it’s worth creating and maintaining the models.”

To avoid this challenge, we’ve equipped Test Modeller with a wide range of techniques for reflecting complex logic in visual flowcharts, all without compromising the simplicity of the models. You can easily overlay rules and constraints onto the flows, while using subflows to model reusable components. These tools are all designed to be lightweight and easy to apply, generating industrial-strength tests without producing overly fiddly models.

4.    Highly expressive test design

Concern #4: “We don’t have time to test everything in-sprint. How can you target test generation to fit testing in-sprint, without introducing undue risk?”

We’ve built Test Modeller with highly expressive techniques for targeting test generation based on time and risk. You don’t need to test everything, and you can measure exactly what you are testing. For instance, you can apply coverage algorithms to component or end-to-end models, generating the smallest set of tests needed to “cover” the modelled logic. Feature Tags further focus test design on particular logic; these can be used to target negative testing, or to align your test suite with the high-risk areas of the application.
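As a rough illustration of what a coverage algorithm does with a modelled flow, consider a flowchart represented as a directed graph, where each test is one path from start to end. The greedy sketch below is a generic example of edge-coverage test selection, not Test Modeller’s own algorithm:

```python
# A toy flowchart as a directed acyclic graph; nodes are modelled steps.
flow = {
    "start": ["enter_details", "load_saved"],
    "enter_details": ["validate"],
    "load_saved": ["validate"],
    "validate": ["confirm", "reject"],
    "confirm": ["end"],
    "reject": ["end"],
    "end": [],
}


def all_paths(node="start", path=None):
    """Enumerate every start-to-end path through the flowchart."""
    path = (path or []) + [node]
    if not flow[node]:
        yield path
    for nxt in flow[node]:
        yield from all_paths(nxt, path)


def edges(path):
    """The set of transitions a single test exercises."""
    return set(zip(path, path[1:]))


# Greedy set cover: repeatedly pick the path that exercises the most
# still-uncovered edges, until every modelled transition is covered.
paths = list(all_paths())
uncovered = set().union(*(edges(p) for p in paths))
suite = []
while uncovered:
    best = max(paths, key=lambda p: len(edges(p) & uncovered))
    suite.append(best)
    uncovered -= edges(best)

for test in suite:
    print(" -> ".join(test))
```

Here, two of the four possible paths suffice to cover every transition in the model, which is exactly the kind of reduction that makes in-sprint testing feasible.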

5.    Test data built-in

Concern #5: “Visual models are good for testing simple UIs, but the backbone of our application is its APIs and back-end. There’s no way we can model these easily.” 

For most organisations today, testing requires consistently linked-up data journeys, capable of traversing mazes of interrelated components. In fact, the distinction between test case and test data has become blurred, as tests involve firing vastly complex data journeys through systems, measuring their impact along the way. If the data is misaligned, tests will fail. If it lacks variety, testing will not achieve sufficient coverage.

This is why we’ve built Test Modeller to integrate model-based test design seamlessly with on-the-fly test data resolution. To do this, we integrated Test Modeller closely with our own Test Data Automation, which in turn integrates with homegrown and commercial test data routines.

This approach builds a library of re-usable data “Find and Makes” that resolve as tests are generated and run from Test Modeller. The re-usable data lookups are arranged at the model level. During test generation, Test Modeller then passes variables from one lookup to the next, creating consistently chained-up data journeys. These data journeys might traverse a series of linked-up UI screens, or could equally span a web of APIs and back-end systems.

In this approach, any missing variables are generated on-the-fly, providing the volumes and variety of unique combinations needed to rigorously test complex systems. Meanwhile, parallelised automation avoids data clashes and data provisioning bottlenecks.

Automated data “Find and Makes” resolve as tests are generated, producing consistent data sets for tests.
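Purely to illustrate the chaining idea, the sketch below shows ‘find and make’ steps in plain Python, each consuming the variables resolved by the step before it. The function names and data are hypothetical illustrations, not the actual API of Test Modeller or Test Data Automation:

```python
# Illustrative "find and make" chain: each step finds matching data or
# makes it on the fly, then passes its variables to the next step.
import uuid

CUSTOMERS = {"C-1001": {"customer_id": "C-1001", "country": "UK"}}
ACCOUNTS = {}  # deliberately empty: the account must be "made"


def find_or_make_customer(context):
    """Find a customer matching the test's criteria, or make one."""
    found = next((c for c in CUSTOMERS.values()
                  if c["country"] == context["country"]), None)
    if found is None:
        found = {"customer_id": f"C-{uuid.uuid4().hex[:6]}",
                 "country": context["country"]}
        CUSTOMERS[found["customer_id"]] = found
    context.update(found)
    return context


def find_or_make_account(context):
    """Chained lookup: consumes the customer_id resolved above."""
    found = next((a for a in ACCOUNTS.values()
                  if a["customer_id"] == context["customer_id"]), None)
    if found is None:
        found = {"account_id": f"A-{uuid.uuid4().hex[:6]}",
                 "customer_id": context["customer_id"]}
        ACCOUNTS[found["account_id"]] = found
    context.update(found)
    return context


# Resolve the chain as a test is generated: the output is one
# consistent data journey for that test.
context = {"country": "UK"}
for step in (find_or_make_customer, find_or_make_account):
    context = step(context)
print(context)
```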

6.    Built to scale

Concern #6: “Modelling will add a maintenance overhead. There’s no way we can update models every time something in our system changes, and modelling simply will not scale for a system of our complexity.”

To test complex systems quickly, Test Modeller emphasises re-usability and traceability. Every model becomes a re-usable subprocess, making testing faster over time. Building models for new functionality or for integration and system testing then becomes as quick and simple as assembling visual blocks into master models.

If a change is made to one model, it ripples across every flowchart in which that model features. This makes in-sprint testing a reality, as making updates in central models replaces maintaining copious test cases and scripts. Meanwhile, it supports scalability: you don’t need to update logic in every place that it occurs, but can instead update one modelled subprocess.

Re-usable subflows simplify and accelerate end-to-end and integration testing.
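The ripple effect is easiest to picture if you think of a subflow as a shared object that master models reference rather than copy. The toy sketch below illustrates that principle only; the classes are hypothetical, not Test Modeller internals:

```python
# Master flows hold references to one shared subflow object, so an
# update to the subflow ripples into every flow that uses it.
class Subflow:
    def __init__(self, name, steps):
        self.name = name
        self.steps = steps


class MasterFlow:
    def __init__(self, name, parts):
        self.name = name
        self.parts = parts  # references to shared Subflow objects

    def expand(self):
        """Flatten the master flow into a single sequence of steps."""
        return [step for part in self.parts for step in part.steps]


login = Subflow("login", ["open_page", "enter_credentials", "submit"])
checkout = MasterFlow("checkout", [login, Subflow("pay", ["add_card", "confirm"])])
profile = MasterFlow("edit_profile", [login, Subflow("edit", ["change_name", "save"])])

# The system gains a two-factor step: update the shared subflow once...
login.steps.insert(2, "enter_one_time_code")

# ...and every master flow that re-uses it reflects the change.
print(checkout.expand())
print(profile.expand())
```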

7.    Re-usability across stakeholders and functions

Concern #7: “I’d prefer to be working in my automation framework or test management tool – I don’t want to also have to tend to models in addition to that.”

As well as tackling system complexity, this re-usability minimises duplicate effort. It enables cross-function teams to test and develop at speed, combating silos and the delays created by cross-team constraints. Teams can continue working in their preferred tools and formats, while making the product of their work available to others.

For instance, testers and designers without deep coding skills can re-use automation built by those with coding skills. This reduces the burden on automation engineers, who can focus on feeding new code into their frameworks. Similarly, cross-functional teams can re-use test data lookups from a central catalogue. This eliminates the delays created by siloed data provisioning, while again minimising duplicate effort and the burden on any one team.

8.    More than ‘just another test tool’

Concern #8: “Model-based testing is a relic of the days of Waterfall projects with siloed testing. We’re a cross-functional team, delivering in short iterations – we can’t stop to model out cause-and-effect logic for our system.”

Overall, we’ve built Test Modeller for modern software delivery best practices, supporting cross-functional collaboration and parallelisation. Its BPMN-style flowcharts are already familiar to business users and system designers, while testers and automation engineers can easily integrate test cases, automated tests and test data routines. The central flowcharts then become a collaborative artefact, avoiding miscommunication and the rework it creates. Meanwhile, feature teams work in parallel, developing code and designing automated tests from the same specifications.

9.    An open technology – no vendor lock-in!

Concern #9: “Any proprietary approach to generating tests will lock us in. We’ll be limited by the functions the vendor produces, and soon we will not be able to test our systems rigorously. Nor will we be able to adopt the tools we want, including best-of-breed open source technologies.”

The concern about vendor lock-in is a big one, and rightly so – it’s not only frustrating when we can’t use the tools we want, but lock-in can also impair testing’s ability to adapt to new systems and approaches.

We’ve therefore designed Test Modeller to be a fully extensible, open technology. It comes equipped with a wide range of out-of-the-box integrations, while API connectivity and Curiosity’s in-house integration engine quickly connect Test Modeller to new technologies. This allows software testers, developers and designers to work in the tools that suit them best, while still pooling skills and knowledge collaboratively in models. It further enables a “Single Pane of Glass” approach, generating assets into a wide range of integrated tools.

This extensibility avoids vendor lock-in and future-proofs software delivery. Teams can seamlessly move to new technologies, integrating them into Test Modeller. Meanwhile, all generated assets become available in their preferred technologies, so that testing and development are never limited by any one tool.

10. In-sprint maintenance

Concern #10: “Modelling will be a time-consuming process that we can’t afford to introduce – we have enough things to do already!”

One significant benefit of the approach taken in Test Modeller has already been hinted at, but is worth emphasising: Test Modeller can eliminate the maintenance of existing test scripts, data and test cases. All of these assets are instead linked to central flowcharts, and they update as the models change.

Instead of trying to align mountains of repetitive tests, testers can update a model in one place, re-generating rigorous test suites. This reactive automation helps to combat technical debt, making sure that testing occurs in the same sprint as system changes.

Modelling in this approach becomes an accelerator to testing, rather than an additional process. Teams can focus on developing and testing new logic, rapidly regenerating targeted regression suites in-sprint.

When is a model no longer a model?

In spite of all the changes, updates and new technology we’ve introduced, Curiosity face a recurring challenge when discussing Test Modeller. The term ‘model’ means a lot of different things to different people, frequently bringing negative associations.

For some people, the idea of ‘modelling’ will mean too much complexity and too much effort. For others, ‘models’ sound like an over-simplification, one that will prevent them testing complex systems in detail. For many people, ‘modelling’ suggests a huge upfront time investment, and subsequently an additional process to reckon with.

When Curiosity built Test Modeller, we were aware of these concerns and more, and crafted capabilities to respond to them. Our team brought decades of collective experience in model-based testing, building Test Modeller to be a flexible and collaborative tool. We purposefully built it to be capable of testing complex systems rapidly, all while maximising re-usability and simplifying test design.

Yet, with all the updated technologies and techniques discussed in this article, the term ‘model’ has remained the same. When discussing Test Modeller as a ‘modelling’ tool, we often therefore find ourselves first discussing the many things that our approach is not, when we would rather be discussing what our proposed approach is.

With this in mind, we’re currently asking ourselves: is it time to leave the word ‘model’ behind? Has the technology moved on, while the word ‘model’ remains too closely tied to the limitations of past approaches? Does ‘model’ carry too much baggage?

This is no easy decision, especially given that most of our team have been involved in the “model-based” space for years, if not decades. But first, we’d like to hear your thoughts.

Currently, we describe the ‘models’ of Test Modeller as ‘flows’: human-readable, easy-to-manage assets that map and test complex system logic. What do you think? Should we ditch the language of ‘models’ and make the leap into the new world of flow-driven testing?

Take 2 minutes to vote in our poll – we’d love to hear from you!