The Curiosity Blog

Introducing “Functional Performance Testing” Part 1

Written by Thomas Pryce | 18 March 2019 14:24:48 Z

This is Part 1/3 of “Introducing Functional Performance Testing”, a series of articles considering how to test automatically across multi-tier architecture, and across the testing pyramid. The series sets out how “Single Pane of Glass” automation generates rigorous tests and data to validate both performance and functionality, all automated and maintained from the same central models.

Introducing “Functional Performance Testing” Part 1: Performance Testing Complexity

Click here to download the whole series as an eBook.

Introducing “Functional Performance Testing”

Testing complex applications rigorously for performance traditionally involves executing a high number of repetitious tests, with low-variety data. However, the goal of performance testing is to exercise realistic behaviour, reflecting the full range of scenarios that could occur in production, at various levels of usage.

Performance testing must therefore account for the range of logic and data reflected in a system’s multi-tier architecture. The tests must cover the full range of data a user can input, as well as the full range of machine data, like messages, that they could generate in production. The same tests must furthermore account for the combinations of API and database calls and actions that can transform that data.

This series considers methods to overcome the complexity of testing across multi-tier architecture. It sets out an approach to testing complex systems for both functionality and performance, all while working from the same centrally maintained models. The goal is to create a set of tests that are not only rigorous, but that can also be executed within an iteration.

For brevity, the series focuses on Load testing across the UI and API layers, arguing for a Model-Based, coverage-driven and data-centric approach. Along the way, it makes the case for introducing principles of functional testing to performance testing. It is split into three parts:

  1. Part One considers the complexity of testing the performance of multi-tiered systems, setting out the key variables that rigorous tests must account for. It makes the case for “Functional Performance Testing”, arguing that the principles of functional testing can help overcome the testing complexity.
  2. Part Two sets out how a functional approach to testing across multi-tier architecture works in practice. It demonstrates how Model-Based techniques are capable of rigorously testing the logic involved across complex systems, using flowcharts to generate functional test cases and data automatically.
  3. Part Three shows how the same models of the system under test can be used to generate either functional or Load tests. It provides an example of how “Single Pane of Glass” automation can test across the testing pyramid, concluding with a discussion of the value of this approach.

More variety, not just more tests: the case for introducing principles of functional testing to performance testing

Performance testing is more complex than sometimes thought, and numerous factors must be accounted for when creating effective performance tests. This quickly leads to a vast number of possible tests to choose from, more than can be feasibly executed within an iteration.

Realistic and rigorous Load tests, for instance, must reflect the full range of data values that could be inputted into a system during production. Each combination of data that a user can input into a UI might also feature in an API request, along with the wide range of machine data users might generate.
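As a rough illustration, the sketch below shows how a single API request can combine user-entered values with machine data. The endpoint, payload fields and header values are illustrative assumptions, not taken from any real system.

```python
# Illustrative only: the endpoint, payload fields and header values are assumptions.
import requests

response = requests.post(
    "https://test.example.com/api/orders",
    json={
        # Data a user might type into the UI, including a negative value.
        "quantity": -1,
        "currency": "GBP",
    },
    headers={
        # Machine data generated alongside the user's input.
        "Content-Type": "application/json",
        "User-Agent": "Mozilla/5.0",
        "Cookie": "session_id=8f3a2c1e",
        "Authorization": "Bearer <token>",
    },
)
print(response.status_code)
```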

Performance testing cannot therefore focus on a narrow range of data, repeated at high volumes. Such testing is unlikely to touch the unexpected or negative scenarios that might be exercised in production, leaving a system’s performance untested against real-world conditions.

Testing even a single API in turn involves vast complexity, and that’s just the data. The requests executed during QA must further reflect the full range of actions or calls exposed by any one API. Each possible combination of action and data value can therefore be a test.
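To make that combinatorial growth concrete, the hypothetical sketch below enumerates candidate tests for one endpoint. The methods, field names and values are assumptions for illustration only.

```python
# A minimal sketch of how method/data combinations multiply for a single API.
# The methods, fields and values below are illustrative assumptions.
from itertools import product

methods = ["GET", "POST", "PUT", "DELETE"]           # actions the API exposes
account_types = ["standard", "premium", "closed"]    # valid values
amounts = [0, 1, 9_999_999, -50, None]               # boundary and negative values

candidate_tests = [
    {"method": m, "account_type": a, "amount": v}
    for m, a, v in product(methods, account_types, amounts)
]

# 4 methods x 3 account types x 5 amounts = 60 candidate tests for one endpoint,
# before machine data or chained calls are even considered.
print(len(candidate_tests))  # 60
```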

Load tests must additionally be parameterized to reflect production conditions, specifying a range of concurrency, load time, ramp-up time, and more. This testing complexity is already massive, but it grows exponentially as APIs are joined together into a system. Now you have even more possible combinations of data values, each of which can be fired off in complex chains of API calls.
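As one hedged example, the sketch below uses the open-source Locust tool to define a simple Load scenario; the /accounts endpoint and think times are assumptions, while concurrency, ramp-up rate and duration are supplied when the test is launched.

```python
# A minimal Locust scenario; the /accounts endpoint and wait times are assumptions.
from locust import HttpUser, task, between

class AccountUser(HttpUser):
    # Simulated think time between requests for each virtual user.
    wait_time = between(1, 5)

    @task
    def list_accounts(self):
        # One representative action; a rigorous test would draw on the
        # full variety of data and actions discussed above.
        self.client.get("/accounts")
```

The same scenario can then be replayed under different workloads by varying the launch parameters, for example `locust --headless -u 100 -r 10 -t 10m --host https://test.example.com`, where `-u` sets concurrent users, `-r` the ramp-up (spawn) rate and `-t` the run time.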

To summarise, functional testing across UIs and APIs must account for:

  1. The full range of values that a user can input during production, both valid and negative.
  2. The full range of machine data that could be generated by users in production, via UIs or APIs. This includes content types, session IDs, authentication headers, user agents, and more.
  3. The full range of methods or actions that API calls exercise on the data.
  4. The combinations of all of the above, joined together into chains of API calls.

Load testing that same multi-tier architecture must additionally account for:

  1. Diverse parameters to simulate the range of workloads that a system might be subjected to in production.

The result is more combinations of data and action than could ever be exercised during QA, any of which could nonetheless occur in production. Realistic and rigorous Load testing must instead aim to test the full variety of logically distinct scenarios that might be exercised in production. This is where the principles of functional testing can help.

Read Part Two of this series to find out how a Model-Based approach can apply these principles in practice, generating the smallest set of test cases and data needed to cover the full range of scenarios involved across multi-tier architecture.

[Image: Pixabay]