APIs provide businesses with the flexibility to innovate rapidly and extend their core offerings to new users. However, this flexibility brings massive complexity for testing. API testing therefore requires a methodology capable of matching the speed and variability of modern software delivery. This article discusses such a model-based approach, which is set out in full in the latest Curiosity-Parasoft eBook.
Understanding API testing complexity
Rigorous API testing must overcome massive complexity, reckoning with a vast number of possible test cases.
Firstly, the message data needed to reach endpoints must “cover” every distinct combination of data values. That includes values entered by users, as well as the unique actions they perform against a system. It also includes machine data generated by user activity, such as content types and session IDs.
API tests must furthermore account for the journeys that data can take through APIs. They must cover the combinations of API actions and methods that can transform data on its way to a given endpoint.
However, APIs do not exist in isolation. By definition they connect multiple systems or components, and every test is therefore an end-to-end test in some sense. A rigorous set of API tests must account for the vast number of combined actions and methods that can transform data as it flows through connected APIs.
An unrealistically simplified example might include 1,000 combinations of user-inputted data, 1,000 combinations of machine-generated data, and 1,000 distinct journeys through the combined actions:
Figure 1: Rigorous API Testing must account for a range of factors
That’s already one billion combinations, each of which is a candidate for an API test. Rigorous API testing must therefore select a manageable set of test cases that can be executed in-sprint, while still retaining sufficient API test coverage.
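A quick back-of-the-envelope calculation shows the scale of the explosion. This is illustrative Python using the figures from the simplified example above, not a claim about any real system:

```python
# Back-of-the-envelope sketch of the combinatorial explosion,
# using the illustrative figures from the simplified example.
user_data = 1000      # distinct combinations of user-entered data
machine_data = 1000   # distinct combinations of machine-generated data
journeys = 1000       # distinct journeys through the combined API actions

# Exhaustive testing would pair every value of every dimension.
exhaustive = user_data * machine_data * journeys
print(f"{exhaustive:,} candidate test cases")  # 1,000,000,000 candidate test cases
```

Even at one test per second, executing that exhaustively would take over 30 years, which is why coverage-driven test selection is unavoidable.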
Too many tests, not enough time
However, the testing techniques in use today are often too manual and unsystematic for rigorous API testing. Business-critical APIs risk going under-tested at each stage of the testing lifecycle:
Firstly, creating API tests one-by-one in test tools or through scripting is too slow and ad hoc to hit even a fraction of the possible combinations.
Expected results are also hard to define from service definitions and requirements. Second-guessing whether a Response is ‘correct’ undermines the reliability of API testing.
Test data then lacks the majority of combinations needed for rigorous API testing. Low-variety copies of production data focus on expected scenarios that have occurred in the past. They lack outliers and negative combinations, as well as data for testing unreleased functionality.
When it comes to API test execution, teams often lack access to in-house and third-party systems. Components might be unfinished or in use by another team, or a third party might not provide sandboxes for testing. Environmental constraints therefore further undermine API testing agility.
Testing complex chains of APIs instead requires an integrated and automated approach. API testers must be able to identify the smallest set of tests needed for rigour, systematically creating the test data and environments required to execute them.
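One established way to shrink a suite while keeping combinatorial coverage is all-pairs (pairwise) selection. The greedy sketch below is illustrative Python, not any vendor's algorithm, and the parameter names and values are invented for the example:

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy all-pairs selection: repeatedly add the candidate test that
    covers the most still-uncovered value pairs. Illustrative only --
    production tools use far more efficient covering-array algorithms."""
    names = list(params)

    def pairs_of(test):
        return {(a, test[a], b, test[b]) for a, b in combinations(names, 2)}

    candidates = [dict(zip(names, vals)) for vals in product(*params.values())]
    uncovered = set().union(*(pairs_of(t) for t in candidates))
    suite = []
    while uncovered:
        best = max(candidates, key=lambda t: len(pairs_of(t) & uncovered))
        suite.append(best)
        uncovered -= pairs_of(best)
    return suite

# Toy example: three request parameters with four values each.
params = {
    "method": ["GET", "POST", "PUT", "DELETE"],
    "content_type": ["json", "xml", "form", "text"],
    "auth": ["none", "basic", "bearer", "expired"],
}
suite = pairwise_suite(params)
print(f"{len(suite)} tests instead of {4 * 4 * 4}")
```

The suite still exercises every pairing of any two parameter values, but with a fraction of the 64 exhaustive tests; the saving grows dramatically as parameters and values multiply.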
Model-Based API Testing: Overcoming the complexity of API call chains
The latest Curiosity-Parasoft eBook offers a practical guide for achieving this integrated approach. It sets out how testers can generate everything needed for rigorous API testing from easy-to-use models. In this approach:
Model-based test generation creates API tests that “cover” every distinct combination of data and method involved across chains of APIs. Coverage algorithms are applied to mathematically precise models, which are built quickly from imported service definitions and message recordings. Dragging and dropping the reusable flowcharts assembles end-to-end tests for complex chains of APIs, enabling rigorous testing within short iterations.
Accurate test data and expected results are generated simultaneously for every test. Expected results are simply the end blocks in the flowcharts, and Test Modeller furthermore finds or makes data “just in time” for every test it generates. API testers can select a comprehensive range of data generation functions and repeatable Test Data Management (TDM) processes at the model level. These resolve “just in time” during test generation, compiling coherent data sets that are tailor-made for each end-to-end test.
Virtual data generation produces the Request-Response pairs needed to simulate missing or unavailable components. Virtual data generation creates accurate Responses for every possible request. This repeatable TDM process is also called during test generation or execution, ensuring that each test generated from the central models is equipped with accurate test data and environments.
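To illustrate the idea behind virtual data generation in the simplest possible terms (this is a hypothetical sketch, not Parasoft's or Curiosity's actual tooling), a virtual service can be thought of as a lookup of recorded Request-Response pairs, with an explicit default for unmatched requests:

```python
import json

class VirtualService:
    """Minimal service-virtualisation sketch: replay canned
    Request-Response pairs for a component that is unavailable."""

    def __init__(self):
        self.stubs = {}  # (method, path) -> (status, body)

    def stub(self, method, path, status, body):
        self.stubs[(method.upper(), path)] = (status, body)

    def handle(self, method, path):
        # Unmatched requests return an explicit 501, so gaps in the
        # virtual data are visible during test runs rather than silent.
        return self.stubs.get((method.upper(), path),
                              (501, {"error": "no stub recorded"}))

# Hypothetical example: simulate a third-party payments API with no sandbox.
payments = VirtualService()
payments.stub("POST", "/payments", 201, {"id": "pay_001", "status": "accepted"})
payments.stub("GET", "/payments/pay_001", 200, {"id": "pay_001", "status": "settled"})

status, body = payments.handle("GET", "/payments/pay_001")
print(status, json.dumps(body))  # 200 {"id": "pay_001", "status": "settled"}
```

A real virtual service would also match on headers and message bodies and run behind an HTTP listener, but the principle is the same: tests can execute against the chain even when a component is missing.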
With this integrated approach, QA teams can generate everything needed for rigorous API testing themselves. Maintaining the central flowcharts keeps the tests, data and virtual services aligned, enabling complex chains of APIs to be tested within short iterations.
Download the latest Curiosity-Parasoft eBook to discover how model-based API testing can maximise the speed and rigour of your API testing.