Welcome to part two of our five-part series, 5 Reasons to Model During QA!
Part one of this series discussed how formal modelling enables “shift left” QA, helping to eradicate the majority of defects that originate in the design phase and avoiding costly and time-consuming rework. It also showed how flowchart modelling is possible within short iterations, bringing the benefits of formal modelling to Agile and hybrid environments.
Modelling the requirements therefore increases the likelihood that code will reflect the business needs first time round. Flowchart modelling also enables more rigorous testing further “right” in the delivery lifecycle, after the code has been written.
Model-Based Testing thereby avoids the bottlenecks created by manual test case design, test data allocation and automated test scripting. This can drastically improve testing speed, all while optimising the generated test assets for greater test coverage.
Curiosity Software Ireland and Lemontree Present: “Five Reasons to Model During QA”
Automated Test Generation: Fast and Systematic Test Case Design
The mathematical precision of flowchart models means that test cases can be generated directly from them. Flowcharts are directed graphs that map logical journeys from start points to end points in the model. Each path through the graph is equivalent to a test case, and these paths can be identified using automated graph analysis. This analysis works like a GPS, identifying the possible routes through a city map:
Automated test case design in Test Modeller automatically identifies test cases from easy-to-use models.
Automated test generation significantly increases testing speed, removing the need to identify and create copious test cases. Even a simple system today contains thousands or even millions of paths through its logic, each of which could be a test. These tests must be repetitious in order to test a system fully, containing numerous overlapping test steps like clicking a certain button or filling in a given field. Manually creating tests for each distinct combination of user activity and data is simply too slow and labour-intensive for short iterations.
Models, by contrast, consolidate overlapping test steps, each of which needs defining only once as a node in the model. The blocks are then connected, and algorithms are applied to create every test case contained in the model automatically.
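To make this concrete, the sketch below shows how path-based test generation can work in principle. It is a simplified Python illustration with made-up step names, not Test Modeller’s actual algorithm or data model: a depth-first traversal enumerates every route from the start node to an end node, and each route becomes a candidate test case.

```python
# A minimal sketch of path-based test generation, assuming a flowchart is
# represented as a directed graph of reusable test steps. Purely illustrative;
# this is not Test Modeller's algorithm or data model.

# Each step (node) is defined once; edges connect the steps into a flowchart.
graph = {
    "Start": ["Open login page"],
    "Open login page": ["Enter valid credentials", "Enter invalid credentials"],
    "Enter valid credentials": ["Dashboard shown"],
    "Enter invalid credentials": ["Error shown"],
    "Dashboard shown": ["End"],
    "Error shown": ["End"],
    "End": [],
}

def enumerate_paths(graph, node="Start", path=None):
    """Depth-first traversal that yields every route from Start to an end node."""
    path = (path or []) + [node]
    if not graph[node]:  # reached an end node
        yield path
        return
    for successor in graph[node]:
        yield from enumerate_paths(graph, successor, path)

# Each complete path through the model is one candidate test case.
for i, test_case in enumerate(enumerate_paths(graph), start=1):
    print(f"Test case {i}: " + " -> ".join(test_case))
```

Because overlapping steps exist only once in the graph, adding or changing a step updates every test case that passes through it when the paths are next generated.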
What’s more, Test Modeller provides connectors to synchronise the generated test suites with technologies across DevOps pipelines. Test cases and steps can be auto-populated in Application Management, Project Management, and CI/CD tools. This not only avoids the time spent creating repetitious test cases, it also removes the frustration of having to upload each test one-by-one to management tools.
“Just in Time” Test Data for Every Automated Test
Using Test Modeller, test data can be found or created automatically as test cases are generated from the model. This avoids the bottlenecks created by manual test data provisioning, providing on-demand, parallel access to test data for every test.
Test data can make or break testing
Efficient and rigorous testing depends on constant access to data with which to execute every test. However, test teams are still frequently provisioned with a limited number of large copies of production data, creating test data bottlenecks and undermining testing quality.
QA teams in this scenario are forced to search through the vast production data for the exact data combinations they need. What’s worse, the production data contains only a fraction of the data needed for sufficient test coverage, lacking edge cases and the combinations needed to test new functionality.
Test Data Management: The ideal versus the reality.
Test teams are therefore frequently forced to create complex data by hand, wasting time and leading to test failures from inaccurate data. Further delays mount when useful data is lost after a data refresh, or is edited by another tester working with the same database.
Model-Based Testing can eradicate these test data bottlenecks, automatically finding or making test data for each test as it is created.
“Just in Time” data for every test
In Test Modeller, test data values and variables are assigned to each block in the flowchart model. This specifies the data needed to traverse each logical journey through the model:
Complete test data is assigned at the model level in Test Modeller.
Test Modeller then compiles the test data as test cases are automatically created, linking it to each test generated from the model. This lifts test data constraints, providing parallel teams with instant access to the data they need to execute tests.
The data defined at the model level can be either static or dynamic. Dynamic data definition creates synthetic test data as tests are generated, producing a diverse range of production-like values. Over 500 synthetic data generation functions can be combined, resolving “just in time” during automated test generation:
Synthetic data generation functions are combined at the model level in Test Modeller.
The dynamic data functions resolve one-by-one as each test is created, meaning that the test cases are linked to distinct and varied data. The data can furthermore include the negative scenarios and outliers needed for rigorous testing, each of which can be defined easily and rapidly.
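As an illustration of how “just in time” data resolution can work in principle, the following Python sketch attaches data-generation functions to named model blocks and resolves them as each test is created, so every generated test receives fresh values. The block names and helper functions are hypothetical and are not Test Modeller’s API.

```python
import random
import string

# A minimal sketch of "just in time" test data, assuming each model block
# carries static values or data-generation functions that resolve as each
# test is created. Names are illustrative, not Test Modeller's API.

def random_email():
    user = "".join(random.choices(string.ascii_lowercase, k=8))
    return f"{user}@example.com"

def invalid_postcode():
    # A deliberately malformed value for a negative scenario.
    return "!!" + "".join(random.choices(string.digits, k=3))

# Data definitions assigned at the model level: static values or callables.
model_data = {
    "Enter customer details": {"email": random_email, "country": "IE"},
    "Enter delivery address": {"postcode": invalid_postcode},
}

def resolve_data(step):
    """Pass static values through and call dynamic functions 'just in time'."""
    definitions = model_data.get(step, {})
    return {field: value() if callable(value) else value
            for field, value in definitions.items()}

# Each generated test gets distinct, freshly resolved data for its steps.
for test_id in range(1, 4):
    for step in ("Enter customer details", "Enter delivery address"):
        print(test_id, step, resolve_data(step))
```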
Automate Test Automation: Avoid Repetitious Scripting
Model-Based Testing, lastly, enables the generation of automated test code. This code will execute the test cases and data generated from the same model, eliminating another significant QA bottleneck.
Manual test creation kills test automation ROI
Test execution automation is necessary to run the sheer volume of functional regression tests required by modern applications, while testing types like performance testing are not feasible without it.
However, automation frameworks often rely on slow, manual test scripting or on keyword configuration. Such manual test creation is simply not fast enough when thousands of new tests are introduced with each code commit, and automation engineers are constantly playing catch-up when they create automated tests by hand.
Automate test automation
Test Modeller, by contrast, generates automated test code automatically as test cases are created from its models. Like test data values, automation logic is assigned to the flowchart models. This is done using a simple, visual automation builder, creating tests with drop-down boxes and fill-in-the-blank fields:
A simple, “low code” approach to building automated tests.
Testers without coding backgrounds can use this approach to automate tests. They might use the standard automation recipes that are provided out-of-the-box by Test Modeller. Alternatively, Test Modeller can parse code created in manual, open source or homegrown frameworks. This makes the objects and actions re-usable in the “low code” automated test builder.
This approach combines the flexibility of coded frameworks with the simplicity of low code test automation. A small core of automation engineers can focus on feeding in the new custom code needed to test complex systems, rather than on scripting repetitious tests. Broader QA teams can then re-use that code to auto-generate tests from easy-to-maintain flowcharts.
The flexibility of coded frameworks, the simplicity of “low code”. Anyone can automate their tests with Test Modeller.
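To illustrate the principle, the sketch below shows how reusable actions parsed from an existing framework might be assembled into an executable test for one path through a model. It is a simplified Python illustration with hypothetical names; it is not the code that Test Modeller itself generates.

```python
# A minimal sketch of generating executable tests from reusable actions,
# assuming each model block maps onto an action parsed from an existing
# framework (e.g. a page object). Names are illustrative only; this is not
# Test Modeller's code generator or its output.

class LoginPage:
    """Hypothetical page object from a homegrown framework; each method
    becomes a reusable action in the 'low code' builder."""
    def open(self):
        print("Opening login page")
    def enter_credentials(self, username, password):
        print(f"Entering credentials for {username}")
    def submit(self):
        print("Submitting login form")

# Parsed actions, keyed by the block names used in the flowchart model.
page = LoginPage()
actions = {
    "Open login page": lambda data: page.open(),
    "Enter valid credentials": lambda data: page.enter_credentials(
        data["username"], data["password"]),
    "Submit": lambda data: page.submit(),
}

def run_generated_test(path, data):
    """Execute one generated test case: the actions for one path, in order."""
    for step in path:
        actions[step](data)

# One path identified from the model, executed with its resolved test data.
run_generated_test(
    ["Open login page", "Enter valid credentials", "Submit"],
    {"username": "test.user", "password": "s3cret"},
)
```

The design point is that the custom code is written once by automation engineers, while every test generated from the model simply re-uses those actions in the order dictated by its path.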
Test Asset Creation with Models: Fast and Comprehensive
Modelling enables QA teams to move quickly from requirements to automated test suites and data, and the models themselves can be built rapidly in-sprint, avoiding the bottlenecks of test asset creation. This is not only a significant time saver; the systematic generation also improves test coverage. That quality gain of Model-Based Testing is the focus of the next article in this series.