Huw Price’s article last week highlighted how shifts in the nature of systems under test have brought fast-growing testing complexity, and how this shows little sign of slowing down.
One far-reaching shift has been from monolithic systems to microservices architectures. This has produced great flexibility for developers, who today can quickly add or remove discrete units of a system, rapidly gluing together re-usable software building blocks. The glue then sometimes becomes the bulk of the IP in often sprawling applications.
For businesses, this flexibility enables rapid innovation, making it possible to expand quickly into new technologies and reach their users. Meanwhile, APIs and microservices architecture enable organisations to react quickly, maintaining a competitive edge in the face of disruption.
Creating easily accessible interfaces and greater interoperability can also make it easier to comply with changing legislation and shifting user expectations regarding their data.
Initiatives like Open Banking require easy access to data collected and created in one application or system. This reflects legislative requirements like the Right to Data Portability, while the EU’s Payment Services Directive (PSD2) has already led banks to release APIs for push payment and data aggregation services.
Opportunity or (negative) risk? Rigorous testing must match the speed of innovation
A shift to APIs and microservices therefore allows developers to innovate, and there’s competitive opportunity and legislative requirements for businesses to consider. But what does the move from monoliths mean for testing and QA? In one word: complexity.
Each new component added to a system brings its own world of logic, which is added to a melting pot of in-house and third party components. Each is connected by a web of inputs and outputs, creating a vast and fast-shifting maze of paths through the system logic:
How many routes through combined APIs? A dependency graph created from 100,000 lines of C# code. Now connect that one simple system to another, and another.
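To see why the number of routes explodes, consider a minimal sketch with entirely hypothetical services and operation names. Each service in a chain multiplies the number of possible call combinations:

```python
from itertools import product

# Hypothetical numbers: three chained services, each exposing a handful
# of operations. One "route" is a choice of operation per service.
services = {
    "payments_api": ["initiate", "authorise", "confirm", "cancel"],
    "accounts_api": ["list", "balance", "transactions"],
    "aggregator_api": ["link", "refresh", "unlink"],
}

routes = list(product(*services.values()))
print(len(routes))        # 4 * 3 * 3 = 36 routes from just three tiny APIs

# Add a fourth service with 5 operations and the count multiplies again:
print(len(routes) * 5)    # 180
```

Real APIs expose far more operations, each with many parameter values, so the multiplication quickly reaches the thousands or millions described above.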
There might be thousands of possible combinations of methods across combined API calls, each of which might process thousands of unique data combinations in production. Testing a linked-up set of APIs further requires integrated data that chains consistently across numerous components:
There might be thousands of logically distinct paths through combined API calls, each of which requires consistent test data for integrated tests.
Meanwhile, efficient test execution requires realistic test environments that make the myriad of integrated components available on demand.
API testing must therefore reckon with more possible test cases than most testing has known previously. QA must conjure up more complex data than ever, and faces a greater risk of components being unfinished or unavailable for testing. Yet, testers have less time than ever to create the API tests, data and environments. Iterative development and the complexity of the systems it creates today form a perfect storm for testing to fall behind delivery cycles.
However, the stakes are high, and organisations simply should not release systems before their APIs are tested fully. APIs today often carry personal and financial information between systems, especially when initiatives like Open Banking are involved. Releasing buggy APIs increases vulnerability to successful attacks and the risk of data leakage. For organisations, that spells massive fines, brand damage and loss of trust; for customers, it can be even more devastating.
The year of the model?
More than ever then, QA requires a systematic approach to test design, capable of identifying an executable number of tests from a pool of thousands or millions. Given the speed with which new components are added, this approach must furthermore be automated, continuously creating tests and data needed in short iterations.
Model-based test design has long excelled at overcoming vast system complexity, applying automated algorithms to generate test cases from graphical representations of the system under test. Optimising the test cases for coverage further creates an executable set of tests, capable of testing complex systems in short iterations.
The right tools and techniques furthermore make the formal modelling process quick and simple, even when facing a myriad of complex components and APIs. Accelerators in Test Modeller, for example, build flowchart models from imported service definitions and recorded message traffic.
The re-usable models of individual APIs can then be assembled visually, using a drag-and-drop approach to model integrated APIs.
This approach works to ensure the rigour of API testing. Though the thousands of paths through the integrated APIs might be too complex for manual test design, they are no match for automated coverage algorithms:
Model-based test generation creates API tests and data that consistently link across integrated components.
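A minimal sketch of what such generation does under the hood, using an invented toy flowchart rather than any real tool's model format: treat the flowchart as a directed graph and enumerate start-to-end paths, bounding loops so that generation terminates.

```python
# A toy flowchart model of an API's logic, as an adjacency list.
# Nodes are steps; edges are valid transitions. (Illustrative only —
# real modelling tools build this from imported service definitions.)
model = {
    "start": ["validate"],
    "validate": ["reject", "authorise"],
    "authorise": ["settle", "timeout"],
    "reject": ["end"],
    "settle": ["end"],
    "timeout": ["authorise", "end"],  # retry loop back to authorise
}

def all_paths(graph, node="start", path=None, max_loops=1):
    """Enumerate start-to-end paths, allowing each node at most max_loops revisits."""
    path = (path or []) + [node]
    if node == "end":
        yield path
        return
    for nxt in graph.get(node, []):
        if path.count(nxt) <= max_loops:  # bound the retry loop
            yield from all_paths(graph, nxt, path, max_loops)

paths = list(all_paths(model))
for p in paths:
    print(" -> ".join(p))
```

Even this six-node toy yields five distinct paths once the retry loop is unrolled; coverage algorithms then select an optimised subset when exhaustive enumeration is infeasible.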
Tying these test generation techniques to automated test data allocation additionally assigns unique and consistent data combinations to “paths” generated from the model. In other words, integrating model-based test design with test data automation is the perfect match for testing combined APIs.
Test Modeller then uses re-usable process automation to push the integrated tests and data to existing API test automation frameworks, including SOAtest, API Fortress, and more.
The value of model-based test automation is fully realised when a system changes. Test teams only need to update the relevant flowcharts, with the changes rippling across all modelled and integrated processes. This avoids the time wasted checking and updating test scripts and data, which instead become throwaway assets. QA can quickly re-generate a new set of optimised tests and data after each change, testing complex and combined APIs in short iterations.
See for yourself
Modelling takes the complexity out of testing complex systems, while allowing test teams to build on existing tools and techniques to test APIs. The application of automated coverage algorithms furthermore ensures the rigour of API tests, while making it possible to re-generate a new set of tests each time the system changes.
In this approach, QA therefore does not present a barrier to innovation, and can match the speed with which developers slot in new system components. The shift to APIs and microservices thereby offers the opportunities associated with rapid development, without the avoidable negative risk of releasing buggy APIs.
To see this method in practice, watch the Curiosity webinar: To Open Banking and Beyond: Developing APIs that are Resilient to Every New Initiative.