APIs are the lifeblood of modern software systems. They enable organisations to reach across technologies and their users, rapidly exposing systems and services to new customers.
APIs furthermore allow organisations to innovate rapidly, while letting other businesses incorporate their technology. Developers can use APIs to assemble existing building blocks rather than reinvent the wheel for each new piece of functionality. APIs therefore future-proof businesses, making it far easier to incorporate new and potentially disruptive technologies.
However, this flexibility for developers and businesses often creates massive complexity for testers. This article sets out that complexity, which calls for a systematic and automated approach to rigorous API testing. A practical guide to achieving this approach is set out in full in the latest Curiosity-Parasoft eBook.
API Testing Complexity
The APIs that organisations rely on today often go undertested, exposing business-critical systems to damaging defects. A primary reason for this undertesting is the use of traditional testing techniques that are no match for the complexity of API chains.
This complexity stems from the numerous factors that must be reflected in API tests. Consider the combinations of data that API tests must reflect. The message data required to hit just one endpoint must contain multiple data variables. The data values might include:
- User-inputted data, such as the information users enter into a website or application, as well as the unique decisions they make when exercising the system’s logic.
- User-generated data: the machine data generated by user activity, including content types, session IDs, authentication headers, user agents, and more.
The data set needed for rigorous API testing will therefore contain a vast number of data combinations. These combinations must reflect the full range of valid and invalid values, both user and machine generated. That typically equates to thousands or millions of distinct combinations.
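To make this concrete, here is a minimal Python sketch of how quickly those combinations multiply across just a handful of message fields. The field names and equivalence classes are invented for illustration:

```python
# A minimal sketch (invented fields and values) of how API test data
# combinations multiply, even for a single endpoint.
import itertools

# User-inputted values, grouped into toy equivalence classes.
account_type = ["personal", "business", "invalid"]
amount = ["0", "100.50", "-1", "not-a-number"]

# Machine-generated values produced by user activity.
content_type = ["application/json", "application/xml", "text/plain"]
session_state = ["valid", "expired", "missing"]

combinations = list(itertools.product(account_type, amount, content_type, session_state))
print(len(combinations))  # 3 * 4 * 3 * 3 = 108 combinations for four fields
```

Four fields with a handful of values each already yield over a hundred combinations; real messages carry far more variables, pushing the total into the thousands or millions.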
However, API testing cannot stop there. Rigorous API testing also requires tests that “cover” the full range of logically distinct journeys through API calls. In other words, it must test the combinations of API methods and actions that can transform data on its way to a given endpoint.
Figure 1: API testing complexity.
It gets more complicated yet. APIs do not exist in isolation: they send data across chains of interrelated components. Any API test is therefore “end-to-end” in some sense, and rigorous API testing must cover the combinations of actions and methods that exist across multiple APIs.
Rigorous API testing therefore requires a data set that reflects the full range of user-inputted and machine-generated values. It must further reflect the full range of combined actions that can be performed on that data as it passes through multiple APIs.
This leads to a vast number of possible test cases. In the simplified example shown in Figure 1, testing a chain of API calls must pick from:
- 1000 distinct combinations of data that a user can enter;
- 1000 distinct combinations of machine data that user activity can generate;
- 1000 logically distinct journeys through the actions of connected-up APIs.
That already leads to 1000 x 1000 x 1000 combinations – or one billion possible test cases to choose from. The first challenge for API testing, then, is deciding which of these possible tests to execute, as there is no time to execute every API test in-sprint.
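A quick back-of-envelope calculation, assuming an optimistic automated execution rate, shows why exhaustive execution is out of the question:

```python
# Back-of-envelope arithmetic for the Figure 1 example. The execution
# rate below is an assumption, chosen optimistically.
user_inputs = 1_000
machine_data = 1_000
api_journeys = 1_000

total_tests = user_inputs * machine_data * api_journeys
print(f"{total_tests:,} possible tests")    # 1,000,000,000

tests_per_second = 100                      # assumed automated execution rate
days = total_tests / tests_per_second / 86_400
print(f"~{days:.0f} days to run them all")  # roughly 116 days, non-stop
```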
API Undertesting: Test Design Gets Lost in the Labyrinth
Testing APIs rigorously in-sprint forces testers to reduce the number of test cases that need to be executed, while not compromising API test coverage. This requires a systematic and automated approach, capable of first identifying the vast number of distinct combinations involved across chains of APIs, and then selecting the most critical or high-risk.
The challenge is that API testing often relies on traditional testing techniques that are no match for API complexity. These unsystematic, overly manual techniques can neither identify nor execute API tests with sufficient coverage in-sprint.
These testing techniques undermine API testing speed, rigour and reliability at numerous points:
- API test case design: Testers still frequently attempt to identify API tests manually, relying on incomplete documentation and highly technical specifications. However, there are more combinations than any one person can possibly hold in their head, let alone formulate one-by-one in a test management tool. The result is unacceptably low coverage, executing just a fraction of the distinct combinations contained across API chains.
- Hard-to-define expected results: Testers often also second-guess how APIs should respond to certain Requests. These Requests are designed for computer consumption, and the associated Responses are therefore often hard to define. Guessing expected results for thousands of possible tests undermines the reliability of API testing.
- Incomplete test data: The rich variety of data needed for rigorous API testing is rarely readily available to testers, who are still typically provided with copies of production data. This production data contains just a fraction of the combinations needed for rigorous API testing. It is highly repetitious, having been generated by users who usually behave as the system intends. Furthermore, the data is drawn from past user behaviour and therefore cannot test new functionality.
- Slow and erroneous test creation: Test teams are therefore forced to find or manually create many of the data combinations needed for rigorous API testing. However, this data is highly complex, as it must link across multiple systems. Manual data creation frequently produces invalid data sets, undermining the reliability and stability of automated API testing.
- Unrepresentative environments: Test execution furthermore requires access to the end-to-end components called by APIs, including both in-house and third-party applications. These components might be unavailable for a variety of reasons: they might be unfinished or in use by another test team, while a third party might not provide a sandbox for testing.
Many organisations rightly use virtualization to simulate these unavailable components. However, this returns to the challenge of creating virtual data, as well as defining an expected Response for each Request. Testers are often forced to script complex behaviours manually, and inaccurate Request-Response pairings undermine testing accuracy.
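To illustrate what those pairings involve, here is a generic sketch of the idea behind service virtualization: a stub stands in for an unavailable component by replaying pre-defined Request-Response pairs. The endpoints and payloads are invented, and this is not how Parasoft Virtualize is implemented:

```python
# A generic sketch of service virtualization: an unavailable dependency is
# replaced by a stub that replays known Request-Response pairs.
# Endpoints and payloads below are invented for illustration.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Pre-defined Request-Response pairs: request path -> (status, canned body).
RR_PAIRS = {
    "/accounts/123": (200, {"id": 123, "state": "active"}),
    "/accounts/999": (404, {"error": "not found"}),
}

class VirtualService(BaseHTTPRequestHandler):
    """Stands in for an unavailable component by replaying known responses."""

    def do_GET(self):
        status, payload = RR_PAIRS.get(self.path, (400, {"error": "unmodelled request"}))
        body = json.dumps(payload).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), VirtualService).serve_forever()
```

The hard part is not the stub itself but the data: every Request the system under test might send needs an accurate Response, which is exactly where manual scripting breaks down.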
Traditional testing techniques are too ad hoc and manual to test APIs rigorously and efficiently. Overcoming the complexity of modern systems instead requires an approach to testing that is automated and systematic.
Model-Based Testing: Navigating the Labyrinth of API Call Chains
Fortunately, the very factors that make API testing so complex also make it a perfect fit for the combination of Model-Based Testing and Test Data Management. Such an approach is set out in full in the latest Curiosity-Parasoft eBook. The eBook also demonstrates how integrating virtualization facilitates rigorous and automated API testing in-sprint.
The model-based approach introduces measurability and structure at every stage of the API testing lifecycle, following the steps set out below.
- Test Case Generation
Model-Based Testing first enables automated and systematic test case design, applying mathematical algorithms to create optimised API tests.
Flowchart modelling allows QA teams to break down the logic of API calls into digestible chunks, modelling each decision gate and equivalence class of data. This visual modelling already works to overcome the potentially overwhelming complexity of APIs, creating models that reflect the full range of journeys through them.
The integration between Test Modeller, Parasoft Virtualize and SOAtest furthermore automates this modelling process. It builds initial models from imported service definitions and recorded message traffic, making model-based API testing possible within tight iterations.
Using re-usable subflows then makes it possible to overcome the complexity of testing API call chains. Defining end-to-end test scenarios is as simple as dragging, dropping, and assembling the models:
Figure 2 – automated API test case generation for joined-up components.
The precision of the models enables automated test design, using mathematical “coverage” algorithms to identify the logically distinct journeys through API call chains. This is shown in Figure 2 above.
This systematic test generation reduces the total number of API tests without compromising logical coverage, still testing every logically distinct combination of data and action.
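The idea behind such coverage algorithms can be sketched by representing a model as a directed graph and enumerating every entry-to-end journey. The validation flow below is invented for illustration; note how each end block doubles as the expected result for its test:

```python
# A minimal sketch of path "coverage": model an API's logic as a directed
# graph, then enumerate every logically distinct entry-to-end journey.
# The validation flow below is invented for illustration.
GRAPH = {
    "receive_request": ["validate_schema"],
    "validate_schema": ["authenticate", "reject_400"],
    "authenticate": ["process_payment", "reject_401"],
    "process_payment": ["accept_200", "reject_402"],
    # End blocks double as the expected result for each generated test.
    "accept_200": [], "reject_400": [], "reject_401": [], "reject_402": [],
}

def all_paths(node, path=()):
    """Depth-first enumeration of every entry-to-end journey."""
    path = path + (node,)
    if not GRAPH[node]:          # an end block: one distinct test case
        yield path
        return
    for nxt in GRAPH[node]:
        yield from all_paths(nxt, path)

for p in all_paths("receive_request"):
    print(" -> ".join(p))        # 4 distinct journeys, each with an expected result
```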
A complete set of API tests can thereby be executed in-sprint. Executing the optimised API tests is furthermore another automated process. QA teams simply need to hit the “play” button shown in Figure 2, selecting a pre-defined process for exporting tests to Parasoft SOAtest.
- Expected Results and Test Data Built-In
Model-Based generation is not only faster and more rigorous than manual API test design; it also overcomes the challenge of formulating expected results and data. Both are included at the model level and are generated systematically alongside the rigorous tests.
Expected results are simply the end blocks in the connected-up models. If testing the validation of Request data, for instance, the model end blocks specify when data should be accepted or rejected.
Test Modeller furthermore provides a variety of techniques for creating complete data at the same time as rigorous API tests.
One approach specifies data values at the model level, defining variables and values for each relevant block in the model. Data definition either provides a static value for each block, or selects from over 500 combinable data generation functions in an intuitive Data Editor:
Figure 3 – Over 500 combinable data functions generate data dynamically for every API test.
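The concept of combinable generation functions can be sketched in a few lines of Python. The helper functions below are invented stand-ins for illustration only; Test Modeller’s own function library is far larger:

```python
# The concept behind combinable data generation functions, using invented
# helpers. Each model block points at a generator rather than a hard-coded
# value, so fresh data resolves for every generated test.
import random
import string

def random_string(length=8):
    return "".join(random.choices(string.ascii_lowercase, k=length))

def random_email():
    return f"{random_string()}@{random_string(5)}.example"

def invalid_email():
    return random_string()  # negative value: no "@", should be rejected

# Each block's data definition: variable name -> generation function.
BLOCK_DATA = {
    "valid_path": {"email": random_email, "session_id": lambda: random_string(16)},
    "negative_path": {"email": invalid_email, "session_id": lambda: ""},
}

for block, definition in BLOCK_DATA.items():
    row = {var: gen() for var, gen in definition.items()}
    print(block, row)  # coherent data resolved "just in time" per test
```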
Dynamic data definition generates a rich variety of API test data, working to maximise test coverage. The integration between Test Modeller and Test Data Automation additionally enables repeatable Test Data Management processes to be specified at the model level.
This embeds a full range of test data utilities within automated API test generation, including data look-ups, subsetting, masking, generation, and more:
Figure 4 – The test data catalogue incorporates a full range of TDM utilities within standard test automation.
The test data functions and processes defined at the model level resolve “just in time” during test creation. This compiles up-to-date, coherent data sets that match each and every API test.
Synthetic data generation furthermore creates the combinations not found among existing sources, including the unexpected scenarios and negative results needed for rigorous API testing. QA teams can thereby avoid slow and error-prone data allocation, hitting every scenario needed for rigorous API testing.
- On-Demand Test Environments
Model-Based Generation furthermore creates the data needed for accurate virtualization, providing on-demand environments in which to execute every test.
The integration between Parasoft Virtualize and Test Data Automation generates a “covered” set of Request-Response (RR) pairs, creating accurate Responses for every possible Request.
Test Data Automation “explodes” the coverage of a foundational data set created by Parasoft Virtualize. This foundational data is built rapidly from recordings and imported service descriptions, while Virtualize provides a visual and intuitive toolset for defining data for complex behaviours:
Figure 5 – A visual builder takes the complexity out of defining virtual data.
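The idea of “exploding” a foundational RR pair can be sketched as follows, with invented field names and equivalence classes standing in for the real data definitions:

```python
# A sketch of "exploding" one recorded RR pair across equivalence classes,
# so the virtual service has an accurate Response for every likely Request.
# Field names, classes, and the response rules are invented for illustration.
import itertools

# One recorded pair acts as the template.
template_request = {"account": "123", "currency": "GBP", "amount": "100"}

# Equivalence classes per field, including negative values.
variants = {
    "currency": ["GBP", "USD", "XXX"],  # XXX: unsupported currency
    "amount": ["100", "0", "-1"],       # boundary and invalid amounts
}

def expected_response(req):
    if req["currency"] == "XXX":
        return {"status": 422, "error": "unsupported currency"}
    if float(req["amount"]) <= 0:
        return {"status": 400, "error": "invalid amount"}
    return {"status": 200, "state": "settled"}

rr_pairs = []
for values in itertools.product(*variants.values()):
    req = {**template_request, **dict(zip(variants.keys(), values))}
    rr_pairs.append((req, expected_response(req)))

print(len(rr_pairs))  # 9 pairs generated from a single recording
```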
The processes used to generate and optimise virtual data become re-usable within Test Modeller, and are available in the Test Data catalogue shown in Figure 4. QA teams can therefore define virtualization at the model level, simultaneously generating automated API tests and the virtual data needed to execute them.
Parasoft Virtualize furthermore empowers testers themselves to deploy virtual assets from an easy-to-use visual interface. This spins up environments on demand, enabling continuous API testing without cross-team or upstream constraints.
Continuous API Testing
This integrated approach ties everything needed for rigorous API testing to central models. QA teams can create everything they need for API testing automatically and on demand, using easy-to-maintain flowcharts to generate:
- Automated API tests and data, optimised to “cover” every distinct journey through complex chains of APIs;
- The expected results needed to validate reliably whether the API tests have passed or failed;
- The virtual data needed for comprehensive virtualization, providing realistic test environments that can be deployed on demand.
Don’t let the APIs that your business relies on go undertested. Download the latest Curiosity-Parasoft eBook for a practical guide to implementing the approach set out in this article – and start testing your APIs rigorously today!