
Navigating the maze of complex API calls with Model-Based Testing

APIs are the lifeblood of modern software systems. They enable organisations to reach across technologies and their users, rapidly exposing systems and services to new customers.

APIs furthermore allow organisations to innovate rapidly, while letting other businesses incorporate their technology. Developers can use APIs to assemble existing building blocks rather than re-invent the wheel for each new piece of functionality. APIs therefore future-proof businesses, making it far easier to incorporate new and potentially disruptive technologies.

However, this flexibility for developers and businesses often creates massive complexity for testers. This complexity is set out below, and it calls for a systematic and automated approach to rigorous API testing. A practical guide to achieving this approach is set out in full in the latest Curiosity-Parasoft eBook.


API Testing Complexity

The APIs that organisations rely on today often go undertested, exposing business-critical systems to damaging defects. A primary reason for this undertesting is the use of traditional testing techniques that are no match for the complexity of API chains.

This complexity stems from the numerous factors that must be reflected in API tests. Consider the combinations of data that API tests must cover. The message data required to hit just one endpoint must contain multiple data variables. The data values might include:

  1. User-inputted data, such as the information users enter into a website or application, as well as the unique decisions they make when exercising the system’s logic.

  2. User-generated data. This is the machine data that is generated by user activity. It includes content types, session IDs, authentication headers, user agents, and more.

The data set needed for rigorous API testing will therefore contain a vast number of data combinations. These combinations must reflect the full range of valid and invalid values, both user- and machine-generated. That typically equates to thousands or millions of distinct combinations.
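To make these two categories concrete, the sketch below shows the variables behind a single hypothetical endpoint call. The endpoint, field names and header values are illustrative assumptions, not taken from any particular system:

```python
import uuid

# Hypothetical POST payload for a single endpoint. Every field below is a
# variable that rigorous API testing must cover with valid and invalid values.

# 1. User-inputted data: what the user typed, and the decisions they made.
user_data = {
    "amount": "250.00",     # could also be negative, zero, non-numeric...
    "currency": "GBP",      # ...or an unsupported currency code
    "save_card": True,      # a decision that branches the system's logic
}

# 2. User-generated (machine) data: produced automatically by user activity.
machine_data = {
    "Content-Type": "application/json",
    "Session-Id": str(uuid.uuid4()),
    "Authorization": "Bearer <token>",  # expired? malformed? missing entirely?
    "User-Agent": "Mozilla/5.0 (...)",
}

# One API test is one combination drawn from both sets; the total number of
# combinations grows multiplicatively with every variable added.
```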

However, API testing cannot stop there. Rigorous API testing also requires tests that “cover” the full range of logically distinct journeys through API calls. In other words, it must test the combinations of API methods and actions that can transform data on its way to a given endpoint.

Figure 1 – API testing complexity.

It gets more complicated yet. APIs do not exist in isolation; they send data across chains of interrelated components. Any API test is therefore “end-to-end” in some sense, and rigorous API testing must cover the combinations of actions and methods that exist across multiple APIs.

Rigorous API testing therefore requires a set of data that reflects the full range of user-inputted data, as well as machine-generated data. It must further reflect the full range of combined actions that can be performed on that data as it passes through multiple APIs.

This leads to a giant number of possible test cases. In the simplified example shown in Figure 1, testing a chain of API calls must pick from:

  1. 1000 distinct combinations of data that a user can enter;

  2. 1000 distinct combinations of machine data that user activity can generate;

  3. 1000 logically distinct journeys through the actions of connected-up APIs.

That already leads to 1000 x 1000 x 1000 combinations – or one billion possible test cases to choose from. The first challenge for API testing is then deciding which of these possible tests to execute, as there is no time to execute every API test in-sprint.
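The arithmetic is easy to verify. The sketch below uses the assumed domain sizes from Figure 1's simplified example to show why exhaustive execution is out of reach:

```python
from math import prod

# Domain sizes from the simplified example in Figure 1 (assumptions for
# illustration only).
domains = {
    "user input combinations": 1000,
    "machine data combinations": 1000,
    "distinct journeys through the API chain": 1000,
}

total = prod(domains.values())
print(f"{total:,} possible test cases")  # 1,000,000,000

# Even at an optimistic one test per second, exhaustive execution would take:
years = total / (60 * 60 * 24 * 365)
print(f"~{years:.0f} years of sequential execution")  # ~32 years
```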

API Undertesting: Test Design gets Lost in the Labyrinth

Testing APIs rigorously in-sprint forces testers to reduce the number of test cases that need to be executed, while not compromising API test coverage. This requires a systematic and automated approach, capable of first identifying the vast number of distinct combinations involved across chains of APIs, and then selecting the most critical or high-risk.

The challenge is that API testing often relies on traditional testing techniques that are no match for API complexity. These unsystematic, overly manual techniques can neither identify nor execute API tests with sufficient coverage in-sprint.

These testing techniques undermine API testing speed, rigour and reliability at numerous points:

  1. API test case design: Testers still frequently attempt to identify API tests manually, relying on incomplete documentation and highly technical specifications. However, there are more combinations than any one person can possibly hold in their head, let alone formulate one-by-one in a test management tool. The result is unacceptably low coverage, executing just a fraction of the distinct combinations contained across API chains.

  2. Hard-to-define expected results: Testers often also second-guess how APIs should respond to certain Requests. These Requests are designed for computer consumption, and the associated Responses are therefore often hard to define. Guessing expected results for thousands of possible tests undermines the reliability of API testing.

  3. Incomplete test data: The rich variety of data needed for rigorous API testing is rarely readily available to testers. This is because they are still typically provided with copies of production data. This production data contains just a fraction of the combinations needed for rigorous API testing. It is highly repetitious, having been generated by users who usually behave as the system intends. The data is furthermore drawn from past user behaviour, and therefore cannot test new functionality.

  4. Slow and erroneous test creation: Test teams are therefore forced to find or manually create many of the data combinations needed for rigorous API testing. However, the data is highly complex as it must link across multiple systems. Manual data creation frequently produces invalid data sets, undermining the reliability and stability of automated API testing.

  5. Unrepresentative environments: Test execution furthermore requires access to the end-to-end components called by APIs, including both in-house and third-party applications. These components might be unavailable for a variety of reasons: they might be unfinished or in use by another test team, while a third party might not provide a sandbox for testing.

Many organisations rightly use virtualization to simulate these unavailable components. However, this returns to the challenge of creating virtual data, as well as defining an expected Response for each Request. Testers are often forced to script complex behaviours manually, and inaccurate Request-Response pairings undermine testing accuracy.

Traditional testing techniques are too ad hoc and manual to test APIs rigorously and efficiently. Overcoming the complexity of modern systems instead requires an approach to testing that is automated and systematic.

Model-Based Testing: Navigating the Labyrinth of API Call Chains

Fortunately, the very factors that lead to its complexity make API testing perfect for the combination of Model-Based Testing and Test Data Management. Such an approach is set out in full in the latest Curiosity-Parasoft eBook. The eBook also demonstrates how integrating virtualization facilitates rigorous and automated API testing in-sprint.

The model-based approach introduces measurability and structure at every stage of the API testing lifecycle, following the steps set out below.

  1. Test Case Generation

Model-Based Testing first enables automated and systematic test case design, applying mathematical algorithms to create optimised API tests. 

Flowchart modelling allows QA teams to break down the logic of API calls into digestible chunks, modelling each decision gate and equivalence class of data. This visual modelling already works to overcome the potentially overwhelming complexity of APIs, creating models that reflect the full range of journeys through them.

The integration between Test Modeller, Parasoft Virtualize and SOAtest furthermore automates this modelling process. It builds initial models from imported service definitions and recorded message traffic, making model-based API testing possible within tight iterations.
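Within the integration this model-seeding is automated, but the underlying idea can be sketched independently. The snippet below is a rough analogue only: it assumes a local OpenAPI file (petstore.yaml, a hypothetical name) and the PyYAML library, and turns each documented operation into a candidate model block:

```python
import yaml  # pip install pyyaml

# Load an imported service definition (file name is a hypothetical example).
with open("petstore.yaml") as f:
    spec = yaml.safe_load(f)

# Turn each documented operation into a candidate model block.
blocks = []
for path, methods in spec.get("paths", {}).items():
    for method, operation in methods.items():
        if not isinstance(operation, dict):
            continue  # skip path-level keys that are not operations
        blocks.append({
            "block": f"{method.upper()} {path}",
            "summary": operation.get("summary", ""),
            "parameters": [p["name"] for p in operation.get("parameters", [])],
        })

# Each block becomes a starting point in the flowchart, ready for decision
# gates and equivalence classes to be layered on top.
for block in blocks:
    print(block["block"], block["parameters"])
```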

Using re-usable subflows then makes it possible to overcome the complexity of testing API call chains. Defining end-to-end test scenarios is as simple as dragging, dropping, and assembling the models:

Figure 2 – Automated API test case generation for joined-up components.

The precision of the models enables automated test design, using mathematical “coverage” algorithms to identify the logically distinct journeys through API call chains. This is shown in Figure 2 above.

This systematic test generation reduces the total number of API tests without compromising logical test coverage: every logically distinct combination of data and action is still tested.
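The specific coverage algorithms are Test Modeller's own, but the core idea of “every logically distinct journey” can be illustrated with plain path enumeration over a small, made-up flowchart (the flow and block names below are illustrative assumptions):

```python
# A small, made-up API validation flow modelled as a directed graph.
# Leaf blocks double as expected results.
flow = {
    "POST /orders": ["validate amount"],
    "validate amount": ["valid amount", "invalid amount"],
    "invalid amount": ["400 Bad Request"],            # end block
    "valid amount": ["check auth header"],
    "check auth header": ["authorised", "unauthorised"],
    "authorised": ["201 Created"],                    # end block
    "unauthorised": ["401 Unauthorized"],             # end block
    "400 Bad Request": [],
    "201 Created": [],
    "401 Unauthorized": [],
}

def all_paths(graph, node, path=()):
    """Depth-first enumeration of every root-to-leaf journey."""
    path = path + (node,)
    if not graph[node]:          # end block reached: one complete test case
        yield path
        return
    for child in graph[node]:
        yield from all_paths(graph, child, path)

for test in all_paths(flow, "POST /orders"):
    print(" -> ".join(test))
# 3 logically distinct journeys = 3 optimised tests, rather than one test
# per raw data combination.
```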

A complete set of API tests can thereby be executed in-sprint. Executing the optimised API tests is furthermore another automated process. QA teams simply need to hit the “play” button shown in Figure 2, selecting a pre-defined process for exporting tests to Parasoft SOAtest.

  2. Expected Results and Test Data Built-In

Model-Based generation is not only faster and more rigorous than manual API test design; it also overcomes the challenge of formulating expected results and data. Both are defined at the model level and generated systematically alongside the rigorous tests.

Expected results are simply the end blocks in the connected-up models. If testing the validation of Request data, for instance, the model end blocks specify when data should be accepted or rejected.
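As an illustration, a generated journey can carry its end block along as an executable assertion. The sketch below is a minimal, hypothetical example: the endpoint URL and the three hand-written test rows stand in for generated output, and the requests library is assumed:

```python
import requests  # pip install requests

# Each generated journey carries its end block as the expected result.
# The URL and test rows are illustrative stand-ins for generated output.
generated_tests = [
    {"payload": {"amount": "250.00"}, "token": "valid-token",   "expected_status": 201},
    {"payload": {"amount": "-1"},     "token": "valid-token",   "expected_status": 400},
    {"payload": {"amount": "250.00"}, "token": "expired-token", "expected_status": 401},
]

for test in generated_tests:
    response = requests.post(
        "https://example.test/orders",   # hypothetical endpoint
        json=test["payload"],
        headers={"Authorization": f"Bearer {test['token']}"},
    )
    assert response.status_code == test["expected_status"], (
        f"Expected {test['expected_status']}, got {response.status_code}"
    )
```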

Test Modeller furthermore provides a variety of techniques for creating complete data at the same time as rigorous API tests.

One approach specifies data values at the model level. This defines variables and values for each relevant block in the model. Data definition either provides a static value for each block, or selects from over 500 combinable data generation functions in an intuitive Data Editor:

Figure 3 – Over 500 combinable data functions generate data dynamically for every API test.

Dynamic data definition generates a rich variety of API test data, working to maximise test coverage. The integration between Test Modeller and Test Data Automation additionally enables repeatable Test Data Management processes to be specified at the model level.

This embeds a full range of test data utilities within automated API test generation, including data look-ups, subsetting, masking, generation, and more:

Figure 4 – The test data catalogue incorporates a full range of TDM utilities within standard test automation.

The test data functions and processes defined at the model level resolve “just in time” during test creation. This compiles up-to-date, coherent data sets that match each and every API test.
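Test Modeller's data functions are built in; a minimal analogue of the “just in time” idea, using hypothetical Python generator functions attached to model blocks and resolved at generation time, might look like this:

```python
import random
import string
from datetime import date, timedelta

# Hypothetical stand-ins for data generation functions attached to blocks.
def random_amount():
    return f"{random.uniform(0.01, 9999.99):.2f}"

def future_date(max_days=365):
    return (date.today() + timedelta(days=random.randint(1, max_days))).isoformat()

def random_reference(length=8):
    return "".join(random.choices(string.ascii_uppercase + string.digits, k=length))

# Each model block maps fields to functions rather than hard-coded values.
block_data = {
    "valid amount": {"amount": random_amount},
    "card expiry":  {"expiry": future_date},
    "order ref":    {"reference": random_reference},
}

def resolve(path):
    """Compile an up-to-date data row for one generated test journey."""
    row = {}
    for block in path:
        for field, generate in block_data.get(block, {}).items():
            row[field] = generate()   # resolved at generation time, not stored
    return row

print(resolve(["valid amount", "card expiry", "order ref"]))
```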

Synthetic data generation furthermore creates the combinations not found among existing sources, including the unexpected scenarios and negative results needed for rigorous API testing. QA teams can thereby avoid slow and error-prone data allocation, hitting every scenario needed for rigorous API testing.

  3. On-Demand Test Environments

Model-Based Generation furthermore creates the data needed for accurate virtualization, providing on-demand environments in which to execute every test.

The integration between Parasoft Virtualize and Test Data Automation generates a “covered” set of Request-Response (RR) pairs, creating accurate Responses for every possible Request.

Test Data Automation “explodes” the coverage of a foundational data set created by Parasoft Virtualize. This foundational data is built rapidly from recordings and imported service descriptions, while Virtualize provides a visual and intuitive toolset for defining data for complex behaviours:

Figure 5 – A visual builder takes the complexity out of defining virtual data.
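Parasoft Virtualize provides this visually and at scale. As a bare-bones illustration of the concept only, the sketch below serves a hand-written set of RR pairs as a tiny virtual service using Python's standard library (the pairs, matching rule and port are all assumptions):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# A hand-written "covered" set of RR pairs; in practice these would be
# generated from the same model as the tests themselves.
RR_PAIRS = {
    ("POST", "/orders", "valid"):   (201, {"status": "created"}),
    ("POST", "/orders", "invalid"): (400, {"error": "bad amount"}),
}

class VirtualService(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        try:
            variant = "valid" if float(body.get("amount", "-1")) > 0 else "invalid"
        except ValueError:
            variant = "invalid"       # non-numeric amounts are invalid too
        status, payload = RR_PAIRS.get(
            ("POST", self.path, variant), (404, {"error": "no matching RR pair"})
        )
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(payload).encode())

if __name__ == "__main__":
    # Spin up the virtual endpoint on demand for a test run.
    HTTPServer(("localhost", 8080), VirtualService).serve_forever()
```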

The processes used to generate and optimise virtual data become re-usable within Test Modeller, and are available in the test data catalogue shown in Figure 4. QA teams can therefore define virtualization at the model level, simultaneously generating automated API tests and the virtual data needed to execute them.

Parasoft Virtualize furthermore empowers testers themselves to deploy virtual assets from an easy-to-use visual interface. This spins up environments on demand, enabling continuous API testing without cross-team or upstream constraints.

Continuous API Testing

This integrated approach ties everything needed for rigorous API testing to central models. QA teams can create everything they need for API testing automatically and on demand, using easy-to-maintain flowcharts to generate:

  1. Automated API tests and data, optimised to “cover” every distinct journey through complex chains of APIs;

  2. The expected results needed to validate reliably whether the API tests have passed or failed;

  3. The virtual data needed for comprehensive virtualization, providing realistic test environments that can be deployed on demand.

Don’t let the APIs that your business relies on go under-tested. Download the latest Curiosity-Parasoft eBook for a practical guide to implementing the approach set out in this article – start testing your APIs rigorously today!

