The Curiosity Blog

Just how complex is mobile testing?

Written by Thomas Pryce | 01 March 2023 15:33:16 Z

Welcome to Part 2/5 in our “Scalable Mobile Test Automation” series.

Part 1 set out the seismic rise in mobile use, arguing that testing strategies must prioritise mobile test automation alongside testing for desktop and laptop users.

This article considers why prioritising mobile testing might be a daunting prospect for teams currently struggling to maintain automation for web and desktop. It considers the mixed success of automated testing so far, before evaluating the additional strain that mobile will place on testing. Part 3 will then consider why a new, automated and targeted approach to creating tests is needed.

Want to read all five parts of the series now? Download Curiosity’s latest eBook, How to Scale Mobile Test Automation.

A daunting prospect for test automation teams?

At a time when test automation needs to prioritise mobile, the World Quality Report found that only 15-20% of tests are automated, including just 15-20% of regression tests [1]. It’s generally agreed that logically repetitive, recurring regression tests should be automated. That only 15-20% have been automated is therefore not for want of trying; it instead indicates how teams are struggling to scale automated testing.

These same teams now face the additional, uphill battle of maintaining tests for mobile. This will likely require different test automation technologies, languages and libraries. Given that average automation rates have hit just 15-20% to date, how can testing teams be expected to double up their efforts, scaling automation for both native apps and web?

Mobile testing’s combinatorial explosion

If only it were as simple as doubling up.

In addition to using different technologies, testing rigorously on mobile means testing a wide range of additional permutations.

Imagine you have just 1 test case, in which the user journey is broadly similar for web and mobile. Testing that same logic on mobile must additionally “cover” a wide range of factors. These include:

  1. Which device is being used?
  2. Which mobile browser is being used?
  3. Which mobile operating system is being used?
  4. Do device features function properly on each phone and tablet? This might include gestures, cameras and file uploads, as well as mobile technologies like gyroscopes and accelerometers [2].
  5. Does the test pass under different localisation settings? [3]
  6. What happens under different network conditions and events, such as switching from data to WiFi or going offline?
  7. What happens during an interruption, such as a push notification, a system update, or running out of battery? [4]

In addition to all of this, native apps must further pass checks performed by different app stores, adding even more permutations to test.

These factors create a “combinatorial explosion” of possible mobile logic to test. Consider a simplified example, where you want to execute 1 test case on 200 devices and 100 browser/OS combinations. You further want to test 7 gestures, combined in different orders, in 7 locations, with 3 possible network events, the option of interruption or not, and against two app store guidelines.

Even if we leave out the gestures, that combines as follows:

1 × 200 × 100 × 7 × 3 × 2 × 2

This combinatorial explosion creates 1,680,000 permutations. Adding gestures risks exploding that number to over 1 trillion.

Now let’s say you have 100 existing test cases that you want to execute on mobile; your total number of permutations now exceeds 160 million, far more than you could ever expect to execute manually or using mobile test automation.
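The arithmetic above can be checked with a short script. Note that the gesture figure assumes one possible reading of “7 gestures, combined in different orders”: ordered sequences of 7 gestures where each step can be any gesture (7^7 orderings), which is what pushes the total past 1 trillion.

```python
from math import prod

# Factors from the simplified example: 200 devices, 100 browser/OS
# combinations, 7 locations, 3 network events, interruption or not,
# and two app store guidelines.
base_factors = [200, 100, 7, 3, 2, 2]

# Permutations for a single test case, gestures excluded.
base = prod(base_factors)
print(base)               # 1,680,000 permutations

# Assumed gesture model: ordered sequences of 7 gestures with
# repetition allowed, i.e. 7**7 possible orderings.
with_gestures = base * 7**7
print(with_gestures)      # over 1 trillion permutations

# Scaling the gesture-free total to 100 existing test cases.
print(base * 100)         # 168,000,000 permutations
```

Each factor multiplies the total rather than adding to it, which is why even modest per-factor counts compound into an intractable number of permutations.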

Scaling mobile test automation

Scalable mobile test automation must instead be targeted and optimised. This requires a new approach to automated test creation; otherwise, history suggests that automated testing will struggle to get beyond the 15-20% mark.

The next article in this series will identify five key lessons to learn from this history of test automation. These will then be inverted by Part 4 into four principles for scalable mobile test automation, before Part 5 sets out a model-based approach to generating rigorous mobile tests at scale.

Want to read all five parts of the series? Download How to Scale Mobile Test Automation.

References:

[1] Capgemini, Sogeti (2022), The World Quality Report 2021-22, P. 23. Retrieved from https://www.capgemini.com/insights/research-library/world-quality-report-wqr-2021-22/ on December 12th 2022.

[2] For a good discussion of some of the device features offered by mobile apps and websites, see William Craig (WebFX: 2022), “Native App vs. Mobile Web App: A Quick Comparison”. Retrieved from https://www.webfx.com/blog/web-design/native-app-vs-mobile-web-app-comparison on December 12th 2022; Joe Strangrone (mrc: 2012), “6 “native” features you can use with mobile web apps”. Retrieved from https://www.mrc-productivity.com/blog/2012/01/6-%E2%80%9Cnative%E2%80%9D-features-you-can-use-with-mobile-web-apps/ on December 12th 2022.

[3] For a good discussion of localization testing, see Thomas Hamilton (GURU99: 2022), “What is Localization Testing? Example Test Cases & Checklist”. Retrieved from https://www.guru99.com/localization-testing.html on December 12th 2022.

[4] For a good discussion of interrupt testing, see Thomas Hamilton (GURU99: 2022), “Interrupt Testing in Mobile Application”. Retrieved from https://www.guru99.com/interrupt-testing.html on December 12th 2022.