
Second decade of a new millennium: observations and predictions for QA


The past decade has seen me sell one start-up, join a multi-national, and co-found my latest venture: Curiosity Software Ireland. I am grateful for the opportunities that the 2010s presented for collaboration with old friends, while working with new organisations to solve challenges in software delivery.

As with any decade, the “teenies” brought numerous shifts in testing and development. These developments have enabled organisations to deliver ever-more powerful systems, at even faster speeds. We today produce software that just two decades ago was confined to sci-fi imagination.

Evolutions and revolutions in processes, tools, and teams have offered increasing flexibility and facilitated these new opportunities. However, the same changes have created new problems, each requiring new thinking. Below, I discuss six broad observations regarding some challenges that emerged last decade, commenting on how the QA community has moved to meet them.

The list is far from exhaustive. I’ve deliberately avoided some of the broadest trends like partial shifts away from Waterfall, and the rise of CI/CD, DevOps, BDD, and more. You can find plenty of solid research about these trends online.

I hope, instead, that the below provides some new food for thought and reason to pause as we enter a new decade, which will inevitably bring new challenges and opportunities for further change. Where relevant, I’ve also included some links for you to see some of the technologies that I’ve helped produce to meet the challenges discussed. I’ve furthermore concluded with some rather speculative predictions.

1.   Some testing got a bit more automated (some things a bit more manual)

Just four years ago, I was speaking with organisations about why they should automate test execution. Today, organisations either have test automation frameworks in place and want support increasing their adoption, or otherwise have automation on their immediate horizon.

Automating execution made a whole world of sense, and met an immediate need created by the shift to iterative development. Releasing in short iterations has enabled faster innovation, creating increasingly complex systems in shorter periods of time. However, this created the perfect storm: a greater number of test cases to execute, with less time in which to execute them. Manual test execution was simply too slow for this, whereas automation could perform the same tests faster and in parallel.
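To make the speed argument concrete, here is a minimal sketch (not tied to any particular framework) in which each check is simulated by a short sleep; the same suite is run sequentially, then fanned out across worker threads:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_check(case_id):
    """Stand-in for a single automated check (hypothetical; a real
    suite would drive the application under test here)."""
    time.sleep(0.1)  # simulate the time one check takes
    return (case_id, "pass")

cases = list(range(20))

# Sequential execution: total time grows linearly with suite size.
start = time.perf_counter()
sequential = [run_check(c) for c in cases]
sequential_secs = time.perf_counter() - start

# Parallel execution: the same checks, fanned out across workers.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=10) as pool:
    parallel = list(pool.map(run_check, cases))
parallel_secs = time.perf_counter() - start

assert sequential == parallel           # same results...
assert parallel_secs < sequential_secs  # ...in less wall-clock time
```

With 20 simulated checks of 0.1 seconds each, the sequential run takes around two seconds while the parallel run finishes in a fraction of that; real frameworks apply the same principle across browser sessions, devices, or machines.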

Yet, automating test execution introduced a raft of additional processes into the mix. Three tasks in particular prevent organisations today from achieving sufficient levels of automated test execution: manual test creation, test script maintenance, and the allocation of data for data-hungry automation frameworks.

Fresh tooling has arisen to meet these challenges. These tools have dethroned test automation, which itself was previously pitched as a silver bullet for QA bugbears. Organisations are turning increasingly to automated test creation solutions, and AI technologies promise to build on these in supporting reactive test script maintenance. These techniques are proving particularly valuable when they tie test data to test creation and maintenance.

Any automated test generation must, however, be capable of creating tests for bespoke systems, and should be able to optimise the tests for maximum coverage. This is why few organisations last decade stuck purely with record/playback techniques and “low code” test design. An increasing number are turning to model-based approaches, which can integrate with bespoke frameworks to test custom system logic.
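As an illustration of the model-based idea, the sketch below encodes a hypothetical login flow as a small directed graph and enumerates every path through it, each complete path being a candidate test case. The model and state names are invented for illustration; real tools layer coverage optimisation and data allocation on top:

```python
# A toy model of system logic: states mapped to their transitions
# (a hypothetical login flow; a real model-based tool would import
# this from a modelling front end).
model = {
    "start":         ["enter_details"],
    "enter_details": ["valid", "invalid"],
    "valid":         ["logged_in"],
    "invalid":       ["locked_out", "enter_details"],
    "logged_in":     [],
    "locked_out":    [],
}

def generate_paths(node, path=()):
    """Depth-first enumeration of every path through the model.
    Each complete path is a candidate test case."""
    path = path + (node,)
    targets = model.get(node, [])
    if not targets:
        yield path
        return
    for nxt in targets:
        if nxt in path:          # cap loops at one traversal
            yield path + (nxt,)
        else:
            yield from generate_paths(nxt, path)

tests = list(generate_paths("start"))
for t in tests:
    print(" -> ".join(t))
```

Even this six-state toy yields three distinct test paths; the value of the approach is that adding one transition to the model regenerates every affected test automatically.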

2.   The rise of the “TestDev” and Software Development Engineer in Test (SDET)

The move to test automation has wrought an associated change in testing teams. Most automation frameworks today are coded, requiring scripting to define tests. As Angie Jones comments, this requires individuals who “possess the skillset of a developer and the mindset of a tester.” However, existing test teams at many organisations do not possess these requisite engineering skills. There have been broadly two responses to this.

The rise of the SDET

Firstly, a new job title has emerged: Software Development Engineer in Test, or SDET. These new team members have development backgrounds and the coding skills needed to build automation frameworks.

Yet, recruiting SDETs is not a complete solution to test automation adoption. For starters, skilled automation engineers are in high demand, while most developers prefer working in development. Automation engineers are often therefore unavailable or are prohibitively expensive to hire.

Recruiting a small core of skilled engineers furthermore cannot achieve the levels of automated test execution that organisations desire. It ignores the majority of test teams, most of which possess valuable knowledge regarding the system under test. These same teams also possess the requisite “mindset of a tester”.

Testers become engineers?

Many organisations today therefore adopt a second approach, expecting testers to learn to work directly with automated test scripts. This itself is challenging, particularly as QA teams cannot simply down tools, dropping existing processes and learning to code their tests from scratch. They must instead be able to weave their budding automation journey into existing test cycles.

The past decade has therefore seen the rise of de-skilled approaches like scanners and recorders, as well as a proliferation in “teach yourself” automation education. The challenge is that the automation must be simple enough to be learned alongside a day job, but complex enough to test custom system logic.

This balance is hard to find, and organisations today are finding that “out-of-the-box” automation libraries can only get them so far. Often, custom system logic such as unsupported elements requires a return to manual test execution, with automation covering only a fraction of the system.

Enterprise-wide adoption in the 2020s?

Enterprise-wide adoption of test automation should instead combine both approaches, leveraging the expertise of existing test teams alongside new skills brought by SDETs.

This allows automated testing of complex systems. “De-skilled” test creation can test any automatable system, if it is capable of leveraging the custom code created by SDETs. This small core of engineers can in turn focus on creating new code to test custom system logic, constantly making this code re-usable by broader test teams.

Re-usability is therefore king in automation adoption, and furthermore reduces time spent on repetitious scripting and maintenance. Meanwhile, it builds on the skills and subject matter expertise that exists within organisations today.

3.   Open Source is more vibrant than ever

There was an influx of Open Source test tooling throughout the last decade. This might be partly a consequence of the influx of developers into QA. Dev has a long tradition of community-driven projects, applying engineering know-how to build free solutions to pressing problems. Some of these solutions lead innovation, becoming the most popular tooling and the basis for numerous commercial offerings. Meanwhile, community events and friendships built through collaboration keep the innovation and resources rolling continuously on.

Open Source Automation

It is perhaps no surprise then that many of the most popular automation frameworks are Open Source. For functional web testing, this includes Watir and Selenium, as well as the community-built bindings for various languages. Appium is likewise a leader in mobile automation, while JMeter and Taurus are popular for low-level performance testing. Testers working with desktop applications can similarly enjoy AutoIt and Winium. New frameworks continue to emerge, including the increasingly popular Cypress and Selenide.

DevOps Toolchains and New Types of System Under Test

The adoption of cutting-edge Open Source technologies has also affected tools auxiliary to testing, and has furthermore altered the nature of systems under test. Organisations large and small are today using Jenkins for CI/CD, while testers are checking automation code into Git repositories. Systems under test today might also be built, for example, on cutting-edge database and data processing tools, including Solr, Kafka, and Hadoop.

A Vibrant Testing Community

In parallel to the rise of community-built tooling, the dynamic of the QA community has also evolved in the past decade. It is in some ways similar to the development community. Community-organised events have grown, with local meetups and user groups organised worldwide. These events combine online and in-person opportunities to share skills and nurture our enthusiasm, and I take my hat off to hardworking organisations like Ministry of Testing and Vivit, as well as QA evangelists like Joe Colantonio and Jonathon Wright.

4.   Innovation and complexity are showing no signs of slowing down

These new tools are enabling faster innovation than ever before, while iterative, parallel development is adding new functionality at unprecedented speed. There has been a parallel shift from monolithic systems to microservices, containers, and APIs. This flexibility enables developers to glue together building blocks faster than ever, but testers are then faced with more and more inputs and outputs to test.

Testers today must therefore be capable of testing more, but are facing shorter timeframes in which to identify and execute tests. 10 years ago, I was helping test teams optimise test cases from hundreds or thousands down to several dozen. Today, QA teams face millions of possible paths through system logic, each of which could be a test requiring scripts and data. Meanwhile, sprawling technologies require a range of drivers and tools, in addition to wide-reaching expertise.
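The scale of the problem, and one classic mitigation, can be sketched with a toy example. Four hypothetical parameters with three values each already yield 81 combinations; a simple greedy pairwise selection (a stand-in for real combinatorial test design tools) covers every two-way interaction with far fewer tests:

```python
from itertools import combinations, product

# Hypothetical test parameters; the full cartesian product is the
# "every path" worst case described above.
params = {
    "browser": ["chrome", "firefox", "safari"],
    "os":      ["windows", "macos", "linux"],
    "locale":  ["en", "de", "fr"],
    "network": ["wifi", "4g", "offline"],
}

names = list(params)
full = list(product(*params.values()))           # 3^4 = 81 tests

# Every pair of (parameter, value) assignments that must co-occur.
required = {
    ((a, va), (b, vb))
    for a, b in combinations(names, 2)
    for va in params[a] for vb in params[b]
}

def pairs_of(row):
    vals = dict(zip(names, row))
    return {((a, vals[a]), (b, vals[b])) for a, b in combinations(names, 2)}

# Greedy pairwise selection: keep a candidate only if it covers a
# still-uncovered pair.
suite, covered = [], set()
for row in full:
    new = pairs_of(row) - covered
    if new:
        suite.append(row)
        covered |= new

assert covered == required  # all two-way interactions covered
print(len(full), "->", len(suite), "tests")
```

Add a fifth parameter and the full product more than triples, while the pairwise suite grows far more slowly; that asymmetry is why coverage optimisation matters at the scale the text describes.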

Coordinated chaos or flexibility with structure?

This vast complexity has fed three related trends.

Firstly, testers are as wary as ever of vendor lock-in. If developers adopt a new technology, testers want to be able to adopt the best-of-breed tooling with which to test that system.

Secondly, then, QA often today pick test tools based on their fit with the system under test, and this is particularly necessary for automation drivers.

Third, there is a desire to combine sprawling technologies in a unified approach, while still retaining the unique value of each tool. The past decade has therefore seen the rise of Robotic Process Automation to connect disparate DevOps technologies, as well as attempts to establish a “single pane of glass” approach. These approaches work best when one person can make a change in one tool, with that information rippling accurately across all associated technologies.

5.   Testing is increasingly decentralised (but is still a specialism!)

Another shift in team structure reflects a move away from central “Centres of Excellence” and a siloed test team managed by a dedicated test manager.

The adoption of agile approaches has emphasised cross-functional teams and testers often today sit alongside developers and BAs. “Shift left” approaches have furthermore demanded closer collaboration between those who design, develop and test systems. Meanwhile, test automation has increased overlap in testing and development skills.

So, have we moved beyond the old adage that those who create systems should not also test them? Or is “testing” as a discipline now lost in the primordial soup from which systems emerge, a loose collection of cross-functional teams?

Yes and no. The testing specialism might now be distributed across teams and organisations, but the specialist skillset remains distinct. The shift in team structure and delivery patterns reflect a drive to foster collaboration and communication, detecting bugs earlier and avoiding time-consuming rework. It is not meant to wholly collapse a distinction between system design, development and testing.

As ever-more complex systems continue to be delivered ever-faster, the need to test and facilitate testing is not going to go away. Just ask whoever performs the most testing activity in any cross-functional team. Often, that same individual was the primary tester in their previous role, and in a cross-functional team before that. They bring with them a distinct set of experience and skills, though not one confined today to a silo or centre.

6.    Compliance became everyone’s problem

No reflection on the past decade would be complete without mentioning shifts in data privacy.

Compliance was a concern for testing before the 2010s, with existing legislation like the UK Data Protection Act and the Health Insurance Portability and Accountability Act (HIPAA) in the US. However, new legislation increased the scope of data privacy, while also significantly increasing the risk of non-compliance.

Meanwhile, regulators are showing their willingness to impose ever-higher punishments. The UK’s Information Commissioner’s Office (ICO), for example, closed last decade by imposing a record fine of £183 million, smashing their previous record of £500,000.

Generally speaking, new legislation like the EU General Data Protection Regulation makes it riskier than ever to use production data in less secure test environments. It furthermore gives individuals more control over their data than ever before. The GDPR, for instance, allows EU data subjects to request the erasure of their data “without delay”, and they can also request a copy of their data in a format usable by them.

The increased rights of individuals present a logistical nightmare for existing Test Data Management techniques. Many organisations still store data across a sprawling, poorly understood estate, as well as on testers’ local machines. These organisations often do not know where data is stored, and will therefore struggle to identify, copy and delete it on demand.
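One common mitigation is to keep raw identifiers out of test environments altogether. The sketch below shows deterministic pseudonymisation of a hypothetical customer table; it is an illustration of the technique, not a compliance guarantee, and the field names and salt are invented:

```python
import hashlib

def mask(value, salt="per-project-secret"):
    """Deterministic pseudonymisation: the same production value
    always maps to the same synthetic value, so joins across
    tables still line up in the test environment."""
    digest = hashlib.sha256((salt + value).encode()).hexdigest()
    return "user_" + digest[:8]

production_rows = [
    {"email": "alice@example.com", "order": "A-1"},
    {"email": "bob@example.com",   "order": "B-7"},
    {"email": "alice@example.com", "order": "A-2"},
]

test_rows = [{**row, "email": mask(row["email"])} for row in production_rows]

# Referential integrity survives masking:
assert test_rows[0]["email"] == test_rows[2]["email"]
# ...but the raw identifier never reaches the test environment:
assert all("@" not in r["email"] for r in test_rows)
```

Because the masking is deterministic, an erasure request against production does not need to be replayed against every test copy: no test environment ever held the raw value in the first place.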

Concerns about data privacy are only going to increase as we rely ever-more on data-driven technologies. Regulation will therefore continue to move proactively and reactively to meet these concerns. Compliance, more than ever, will continue to be a key requirement in testing.

So, have we found the Millennium Bugs yet?

The last decade has therefore delivered significant achievements in testing efficiency and innovation. This will continue into the 2020s, and three broad trends will shape it:

1.    The complexity of the systems under test will continue to grow. Developers will deliver systems faster than ever, drawing on new technologies to build more complex systems than ever.

2.    Test process automation will increase. In the short term, the focus will be on automated test design and maintenance, while process automation will continue to focus on automating rule-based tasks and keeping DevOps tooling in alignment.

3.    The role of “intelligence” and smart systems will increase across testing and development. This will grow as organisations begin to adopt technologies that are only now emerging.

The 2020s: A decade of complexity, automation, and (maybe) intelligence?

These three trends are closely related. Firstly, the speed at which systems are developed and the introduction of intelligence will add rapidly to system complexity. Software systems have long been more complex than any one person can understand. However, the ability to process “big data” rapidly and apply decision-making to data sets will soon mean that individual decisions made by systems become beyond human comprehension. Put simply, there are going to be black boxes on an increasingly microscopic scale, feeding autonomous black boxes (systems) on the macro scale.

Secondly, this massive complexity will require automated test creation, in order to test the unprecedented number of decision gates sufficiently. This automation will, thirdly, require an increasing degree of intelligence.

Intelligence will be necessary to overcome the uncertainty created by autonomous and intelligent systems. Manually reverse-engineering and testing decisions made by smart systems will simply be too slow and unreliable.

Intelligent, automated test design, by contrast, promises to continuously create and execute tests based on the latest live data. Executing these tests continuously will in turn work to better understand the microscopic workings of intelligent systems. Continuous experimentation will therefore reduce the number of “knowable unknowns”, and the negative risk associated with them.

The growing combination of automation and intelligence will fight fire with fire. Autonomous test design will draw rapidly on vast quantities of available data, in order to build tests for systems that intelligently process vast quantities of complex data.

Put simply, a smart approach is needed to meet the intelligence of systems under test, but this will likewise mean that testers pass off many decision-making processes to computers.

Now is the time to fix your information flow

So, what can organisations do today to prepare for these shifts in the next decade? Firstly, you must fix the information flow across technologies and teams.

Automation and AI both rely on the quality of the information fed into them. Otherwise, you get a “garbage in, garbage out” scenario, producing fundamentally broken tests with unreliable results.

Organisations should therefore continue on the current path of connecting existing systems, joining up their DevOps tooling. They should focus on automating rule-based processes and increasing the accurate flow of information between tooling and teams. That way, they can collect more data and metadata, creating an “AI ready” data lake. This will in turn ensure the eventual value of intelligent technologies.
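The “rippling” of information across tooling can be sketched as a tiny publish/subscribe flow, where a change in one (hypothetical) source tool is pushed by rule to every subscribed tool. Tool names and topics here are illustrative:

```python
from collections import defaultdict

# A toy event bus: handlers register an interest in a topic, and
# every published change is pushed to each of them by rule.
subscribers = defaultdict(list)

def subscribe(topic, handler):
    subscribers[topic].append(handler)

def publish(topic, payload):
    for handler in subscribers[topic]:
        handler(payload)

# Two downstream "tools", each keeping its own copy of a requirement.
test_plan, defect_tracker = {}, {}
subscribe("requirement.updated",
          lambda p: test_plan.update({p["id"]: p["title"]}))
subscribe("requirement.updated",
          lambda p: defect_tracker.update({p["id"]: p["title"]}))

# One change in the source tool ripples across the others.
publish("requirement.updated",
        {"id": "REQ-42", "title": "Support SSO login"})
assert test_plan["REQ-42"] == defect_tracker["REQ-42"]
```

Every event that flows through a bus like this can also be logged, which is precisely the metadata that would feed the “AI ready” data lake described above.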

Thanks for reading! This list is far from complete, and I’d love to know your observations for the last decade and predictions for the next. Please feel free to drop me an email.


Here’s to an innovative and exciting decade to come!
