The Curiosity Blog

The broken promise of test automation

Written by Thomas Pryce | 27 October 2020

Remember when test automation was being peddled as a silver bullet for testing bugbears? Of course, those vendors really meant test execution automation. Automating test execution was going to increase coverage, minimise testing time, and reduce overall testing spend. It would even butter your toast in the morning.

Well, those days are long gone. Organisations have now reckoned with implementing test automation and have grown wise to its challenges. They’ve discovered that automating one process within testing leaves many others untouched, while introducing many challenges of its own. As an industry, we’ve been left with questions.

Below, I’ll set out some of these questions, before considering how the ‘intelligent’ solutions being proposed today might lead us down a similar path to the ‘magic’ of test automation. I’ll then outline the questions we need to ask ourselves before adopting the next ‘best’ solution, and indicate some answers. To see these answers (both technologies and techniques) in practice, watch The broken promise of test automation: why are we still hand-cranking tests?

Some questions to ask ourselves

Having automated a portion of test execution, organisations find themselves left with many questions to answer. I won’t bombard you with all of them – nor do I know them all – but the following seven should illustrate my point. Most are in fact old challenges, often exacerbated by the introduction of automated test execution:

1.    How can my team possibly create enough tests (cases or scripts) before the next release?

2.    Do I know what has been impacted by recent system changes, and what needs testing as a result?

3.    How can I check and update existing tests before the next release?

4.    Am I over-testing – have any of my tests become redundant?

5.    How can I prioritise testing to deliver meaningful results before the next release?

6.    How am I measuring testing and what’s my definition of ‘done’?

7.    Where on earth can I get all the data needed by my data-hungry tests?


These questions are further compounded by the fact that systems are more complex than ever and are changing faster than ever before. We have more logic to test in less time, and teams now face an impossibly large test creation and maintenance bill. Meanwhile, we lack the tools and artefacts needed to identify what has changed across vastly complex systems, so we struggle to prioritise our testing before each release.

A light at the end of the tunnel?

Of course, there’s a new kid on the block. AI, ML or simply ‘automation’ with ‘intelligent’ or ‘smart’ stamped in front. These autonomous processes are going to automate whatever’s been slowing you down, magically ‘knowing’ what to test before each release and executing those tests for you. They won’t just butter your toast – they’ll put jam on it and pour your tea too.

As with all things entering the testing domain, there are two core questions that we must ask ourselves:

1.    Are we building tests that truly matter for the release?

2.    Are we optimising our test suites in light of the changing application?

With the paradigm shift towards AI and ML, there are many further questions that must be addressed. Again, the following is just a sample:

1.    What is the model being tested? Many technologies identify things we could test and provide tools to run those tests – maintaining UI identifiers, for instance. However, how do they know what needs testing, and how are we measuring that testing?

2.    What data is informing these decisions? Are our toolchains integrated well enough to feed in data to prioritise and build effective tests? If using technologies that analyse patterns in existing data, how good is this data? Do we understand our systems well enough to judge this, or are we risking a high-speed “garbage in, garbage out” scenario built on black boxes?

3.    Are we delivering meaningful results to developers and the business? We can discover X thousand bugs, but can we convince developers that they pose a true risk to the system? Do we know their risk, or is it simply something that didn’t quite match our model? And are we equipping developers with the knowledge they need to fix these bugs quickly?

With the sudden proliferation of technologies in the ‘intelligent’ testing space, there is also the added uncertainty of knowing what’s really new and what’s been proven. There are old approaches that have been stamped ‘intelligent’, and there are new approaches that have not yet been rigorously field-tested.

There’s also the added uncertainty of knowing how to implement new approaches on top of the old. Many proposals for unlocking the value of AI/ML in testing focus on its promise, and far less on the hard thinking about how we actually get there. This was frequently overlooked with test automation, and we quickly found ourselves with teams lacking the skills or time to implement it.

So, what next?

These questions are far-reaching and answering them will take some soul-searching by the testing industry. I would love to present the solution in this article, but then it would not be a solution – it would be one new proposal among many. Instead, we need to have an open conversation, mapping honestly where we are today. We must understand the most pressing problems we face, and where we need to go next.

On November 10th, I will be joining the ever-insightful Daniel Howard, Senior Researcher at Bloor Research, to offer a contribution to this debate. The free webinar will consider the “broken promise of test automation”, discussing where we have arrived with test execution automation and where we might go next. Faced with the impossibility of testing everything before each release, we will return to the two questions referenced above:

1.    How can we build tests that truly matter for the release?

2.    How can we optimise our regression suites in light of changing applications?

The session will not be a pitch about how AI or ML will magically solve your problems. It will be an open and interactive discussion around techniques – new and existing – that can help us address the most pressing challenges in testing today. Our goal, through dialogue, will be to offer a plan for evolving more sustainable automation.

Together, we will map technologies equipped to make a real difference in how we test today. This will include:

1.    Automation that extends far either side of test execution, covering test creation, test data allocation, and test maintenance;

2.    Optimisation techniques for measuring and targeting testing before each release, avoiding over-testing while ensuring test coverage (a simple illustrative sketch follows this list);

3.    Methods for capitalising on the proliferation of data that will be created as we integrate DevOps toolchains.
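
To make “targeting testing before each release” a little more concrete, here is a minimal, hypothetical sketch of one such optimisation technique: a greedy “set cover” selection that picks a small regression subset still touching every component impacted by a change. It is purely illustrative – not Curiosity’s method or anything tied to the webinar – and the test names, components and the select_tests helper are invented for the example.

# Illustrative sketch only (hypothetical names): a greedy "set cover" heuristic
# for choosing a small regression subset that still covers every impacted component.

def select_tests(coverage, impacted):
    """Greedily pick tests until every impacted component is covered."""
    remaining = set(impacted)
    selected = []
    while remaining:
        # Pick the test covering the most still-uncovered components.
        best = max(coverage, key=lambda test: len(coverage[test] & remaining))
        gained = coverage[best] & remaining
        if not gained:
            break  # nothing left can cover the remainder: a coverage gap to flag
        selected.append(best)
        remaining -= gained
    return selected

# Hypothetical mapping of tests to the components they exercise,
# and the set of components impacted by the latest change.
coverage = {
    "checkout_happy_path": {"basket", "payment", "order"},
    "refund_flow": {"payment", "order"},
    "profile_update": {"account"},
}
impacted = {"payment", "order", "account"}
print(select_tests(coverage, impacted))   # ['checkout_happy_path', 'profile_update']

In practice, the coverage mapping would come from instrumentation or requirements traceability, and the impacted set from change analysis; the point is simply that targeting can be automated rather than hand-cranked.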

Some of these technologies might gravitate towards AI, ML or Expert Systems. However, all should be technologies that you can consider and start implementing tomorrow, building on current tools and techniques. Curiosity’s Director of Technology, James Walker, will also be on hand to give demos of the technologies identified during the interactive discussion. That way, you’ll know that they’re ‘real’ and not another silver bullet for testing.

The broken promise of test automation: why are we still hand-cranking tests? Now available on demand!