Performance Testing Complexity
Performance testing must extend beyond executing a high volume of repetitive tests and data. Realistic performance tests must cover the full range of user and machine data that can be entered or generated in production, as well as every distinct combination of API action and database call that could exercise that data. The variety of real-world workloads must also be reflected accurately in performance tests, defining complex parameters that systematically cover the full range of performance requirements.
This complexity grows exponentially as system components are chained together, producing more tests than can be created and executed within an iteration. Rigorous performance tests must then cover every distinct combination of call and data across APIs, and each combination could be fired off within complex chains of API calls, each of which must also be tested.
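To make that combinatorial growth concrete, consider a hypothetical system of three chained API calls, each accepting a handful of logically distinct input values (the APIs and values below are illustrative, not from the source). Exhaustive testing requires one test per element of the Cartesian product of those values:

```python
from itertools import product

# Hypothetical example: three chained API calls, each with a set of
# logically distinct input values that must be exercised.
api_inputs = {
    "create_order": ["guest", "registered", "vip"],                   # 3 user types
    "apply_discount": ["none", "percent", "fixed", "bogo"],           # 4 discount rules
    "checkout": ["card", "paypal", "invoice", "wallet", "giftcard"],  # 5 payment paths
}

# Exhaustive coverage means one test per combination in the Cartesian product.
exhaustive = list(product(*api_inputs.values()))
print(len(exhaustive))  # 3 * 4 * 5 = 60 combinations for just three calls
```

Chaining a fourth component with even a few distinct values multiplies the total again, which is why exhaustive test creation quickly outgrows an iteration.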
Rigorous Performance Testing for Multi-Tier Architecture
Model-Based Testing cuts through the noise of performance testing across multi-tier architecture, automatically generating the smallest set of tests and data that “cover” every distinct combination of data and API call. With Test Modeller, the automated tests can be executed across a range of frameworks, enabling rigorous functional and performance testing from centrally maintained models:
Taurus Performance Testing
Watch the example of testing an eCommerce store's UI and API layer to see how Taurus performance testing can be made rigorous and automated with Test Modeller. You will see how:
- Complete functional models of the system under test can be built rapidly in-sprint, mapping the logical journeys users can take through a complex system's multi-tier architecture.
- Test data to simulate every real-world performance scenario can be defined using over 500 easy-to-use functions that resolve "just in time" during test execution.
- Test data can be rolled up into complete JSON messages, defined and compiled using a simple, visual builder.
- Parameterised tests are built using a simple "fill-in-the-blanks" methodology, where new actions and objects can be made available from a range of frameworks like Taurus, LoadRunner and JMeter.
- Automated coverage algorithms generate the smallest set of automated performance tests needed to cover every logically distinct combination of test data and API call.
- Performance tests are executed automatically using VIP Robotic Process Automation, with run results updated in Test Modeller, BlazeMeter, and across Continuous Delivery and ALM tools.
- Subflows make testing complex chains of API calls easy: simply drag-and-drop blocks together to generate automated performance tests and complete test data.
- Test maintenance is replaced by updating easy-to-use, central models, re-generating a new set of automated tests and data.
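For context, a Taurus load scenario of the kind such generated tests feed into is a simple YAML file. The sketch below is a minimal, generic example; the endpoint URL, load figures, and scenario name are illustrative assumptions, not values from the source:

```yaml
# Minimal Taurus (bzt) configuration: 50 concurrent users,
# ramping up over 1 minute and holding the load for 5 minutes.
execution:
- concurrency: 50
  ramp-up: 1m
  hold-for: 5m
  scenario: checkout

scenarios:
  checkout:
    requests:
    - http://example.com/api/orders   # illustrative endpoint
```

In practice, the generated tests would parameterise the requests with the "just in time" test data described above.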
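The source does not show Test Modeller's coverage algorithms, but the general idea of generating the smallest covering set can be sketched with a greedy pairwise ("all-pairs") reduction, using the same kind of hypothetical parameters as above:

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy all-pairs reduction: keep picking full combinations until
    every pair of values from any two parameters appears at least once."""
    names = list(params)
    # Every (param_i, value_i, param_j, value_j) pair that must be covered.
    uncovered = {
        (i, vi, j, vj)
        for i, j in combinations(range(len(names)), 2)
        for vi in params[names[i]]
        for vj in params[names[j]]
    }
    candidates = list(product(*params.values()))
    suite = []
    while uncovered:
        # Pick the candidate test that covers the most still-uncovered pairs.
        best = max(candidates, key=lambda c: sum(
            (i, c[i], j, c[j]) in uncovered
            for i, j in combinations(range(len(c)), 2)))
        suite.append(best)
        uncovered -= {(i, best[i], j, best[j])
                      for i, j in combinations(range(len(best)), 2)}
    return suite

# Illustrative parameters (not from the source).
params = {
    "user": ["guest", "registered", "vip"],
    "discount": ["none", "percent", "fixed", "bogo"],
    "payment": ["card", "paypal", "invoice", "wallet", "giftcard"],
}
suite = pairwise_suite(params)
print(len(suite))  # far fewer tests than the 60 exhaustive combinations
```

This is only a sketch of the technique: pairwise coverage guarantees every two-way interaction is exercised while the suite size grows roughly with the two largest parameter domains rather than with the full Cartesian product.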