Second decade of a new millennium: observations and predictions for QA

The past decade has seen me sell one start-up, join a multi-national, and co-found my latest venture: Curiosity Software Ireland. I am grateful for the opportunities that the 2010s presented for collaboration with old friends, while working with new organisations to solve challenges in software delivery.

As with any decade, the “teenies” brought numerous shifts in testing and development. These developments have enabled organisations to deliver ever-more powerful systems, at ever faster speeds. Today we produce software that, just two decades ago, was confined to sci-fi imagination.

Evolutions and revolutions in processes, tools, and teams have offered increasing flexibility and facilitated these new opportunities. However, the same changes have created new problems, each requiring new thinking. Below, I discuss six broad observations regarding some challenges that emerged last decade, commenting on how the QA community has moved to meet them.

The list is far from exhaustive. I’ve deliberately avoided some of the broadest trends like partial shifts away from Waterfall, and the rise of CI/CD, DevOps, BDD, and more. You can find plenty of solid research about these trends online.

I hope, instead, that the below provides some new food for thought and reason to pause as we enter a new decade, which will inevitably bring new challenges and opportunities for further change. Where relevant, I’ve also included some links for you to see some of the technologies that I’ve helped produce to meet the challenges discussed. I’ve furthermore concluded with some rather speculative predictions.

1.   Some testing got a bit more automated (some things a bit more manual)

Just four years ago, I was speaking with organisations about why they should automate test execution. Today, organisations either have test automation frameworks in place and want support increasing their adoption, or otherwise have automation on their immediate horizon.

Automating execution made a whole world of sense, and met an immediate need created by the shift to iterative development. Releasing in short iterations has enabled faster innovation, creating increasingly complex systems in shorter periods of time. However, this created the perfect storm: a greater number of test cases that need executing, with a shorter time in which to execute them. Manual test execution was simply too slow for this, whereas automation could perform the same tests faster and in parallel.
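To make the speed argument concrete, here is a minimal Python sketch contrasting sequential execution with the same checks run across parallel worker threads. The health-check endpoint is a hypothetical placeholder; any deterministic check would do.

```python
# Minimal sketch: the same checks run sequentially and then in parallel.
# The endpoint URLs are hypothetical placeholders, not a real service.
import concurrent.futures
import urllib.request

ENDPOINTS = [f"https://app.example.com/health?check={i}" for i in range(20)]

def run_check(url: str) -> bool:
    with urllib.request.urlopen(url, timeout=5) as response:
        return response.status == 200

# Sequential execution: total time is roughly the sum of all checks.
sequential_results = [run_check(url) for url in ENDPOINTS]

# Parallel execution: total time is roughly that of the slowest single check.
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
    parallel_results = list(pool.map(run_check, ENDPOINTS))
```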

Yet, automating test execution introduced a raft of additional processes into the mix. Three tasks in particular prevent organisations today from achieving sufficient levels of automated test execution: manual test creation, test script maintenance, and the allocation of data for data-hungry automation frameworks.

Fresh tooling has arisen to meet these challenges. These tools have dethroned test automation, which itself was previously pitched as a silver bullet for QA bugbears. Organisations are turning increasingly to automated test creation solutions, and AI technologies promise to build on these in supporting reactive test script maintenance. These techniques are proving particularly valuable when they tie test data to test creation and maintenance.

Any automated test generation must, however, be capable of creating tests for bespoke systems, and should be able to optimise the tests for maximum coverage. This is why few organisations last decade stuck purely with record/playback techniques and “low code” test design. An increasing number are turning to model-based approaches, which can integrate with bespoke frameworks to test custom system logic.
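As a toy illustration of the model-based idea (a sketch, not Curiosity's Modeller itself), the snippet below treats a login flow as a directed graph and enumerates every start-to-end path as a candidate test case; real tools then optimise that set for maximum coverage.

```python
# Toy model-based test generation: a login flow as a directed graph, with
# every start-to-end path enumerated as a test case. Node names are invented.
FLOW = {
    "start":             ["enter_credentials"],
    "enter_credentials": ["valid", "invalid"],
    "valid":             ["two_factor"],
    "invalid":           ["error_shown"],
    "two_factor":        ["dashboard"],
    "error_shown":       ["end"],
    "dashboard":         ["end"],
    "end":               [],
}

def all_paths(node="start", path=None):
    path = (path or []) + [node]
    if not FLOW[node]:          # reached an end node
        yield path
        return
    for nxt in FLOW[node]:
        yield from all_paths(nxt, path)

for i, path in enumerate(all_paths(), start=1):
    print(f"Test case {i}: " + " -> ".join(path))
```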

2.   The rise of the “TestDev” and Software Development Engineer in Test (SDET)

The move to test automation has wrought an associated change in testing teams. Most automation frameworks today are coded, requiring scripting to define tests. As Angie Jones comments, this requires individuals who “possess the skillset of a developer and the mindset of a tester.” However, existing test teams at many organisations do not possess these requisite engineering skills. There have been broadly two responses to this.

The rise of the SDET

Firstly, a new job title has emerged: Software Development Engineer in Test, or SDET. These new team members have development backgrounds and the coding skills needed to build automation frameworks.

Yet, recruiting SDETs is not a complete solution to test automation adoption. For starters, skilled automation engineers are in high demand, while most developers prefer working in development. Automation engineers are often therefore unavailable or are prohibitively expensive to hire.

Recruiting a small core of skilled engineers is furthermore incapable of achieving the levels of automated test execution that organisations desire. It ignores the majority of test teams, most of which possess valuable knowledge regarding the system under test. These same teams furthermore possess the requisite “mindset of a tester”.

Testers become engineers?

Many organisations today therefore adopt a second approach, expecting testers to learn to work directly with automated test scripts. This itself is challenging, particularly as QA teams cannot simply down tools, dropping existing processes and learning to code their tests from scratch. They must instead be able to weave their budding automation journey into existing test cycles.

The past decade has therefore seen the rise of de-skilled approaches like scanners and recorders, as well as a proliferation in “teach yourself” automation education. The challenge is that the automation must be simple enough to be learned alongside a day job, but complex enough to test custom system logic.

This balance is hard to find, and organisations today are finding that “out-of-the-box” automation libraries can only get them so far. Often, they find that custom system logic and unsupported elements force a return to manual test execution, with automation covering only a fraction of their system.

Enterprise-wide adoption in the 2020s?

Enterprise-wide adoption of test automation should instead combine both approaches, leveraging the expertise of existing test teams alongside new skills brought by SDETs.

This allows automated testing of complex systems. “De-skilled” test creation can test any automatable system, if it is capable of leveraging the custom code created by SDETs. This small core of engineers can in turn focus on creating new code to test custom system logic, constantly making this code re-usable by broader test teams.
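A minimal sketch of that layering, using Selenium with hypothetical URLs and element IDs: the SDET-maintained keywords wrap the automation calls once, and a tester composes a readable test from them without touching WebDriver directly.

```python
# Hypothetical example of SDET-maintained "keywords" re-used by the wider team.
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginKeywords:
    """Written and maintained by the SDET core."""
    def __init__(self, driver):
        self.driver = driver

    def open_login_page(self):
        self.driver.get("https://app.example.com/login")  # hypothetical URL

    def log_in(self, username, password):
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()

    def error_message(self) -> str:
        return self.driver.find_element(By.CSS_SELECTOR, ".error").text

# Written by a tester, using only the keywords above.
def test_invalid_password_shows_error():
    driver = webdriver.Chrome()
    try:
        login = LoginKeywords(driver)
        login.open_login_page()
        login.log_in("standard_user", "wrong_password")
        assert "incorrect" in login.error_message().lower()
    finally:
        driver.quit()
```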

Re-usability is therefore king in automation adoption, and furthermore reduces time spent on repetitious scripting and maintenance. Meanwhile, it builds on the skills and subject matter expertise that exists within organisations today.

3.   Open Source is more vibrant than ever

There was an influx of Open Source test tooling throughout the last decade. This might be, in part, a consequence of the influx of developers into QA. Dev has a long tradition of community-driven projects, applying engineering know-how to build free solutions to pressing problems. Some of these solutions lead innovation, becoming the most popular tooling and the basis for numerous commercial offerings. Meanwhile, community events and friendships built through collaboration keep the innovation and resources rolling continuously on.

Open Source Automation

It is perhaps no surprise then that many of the most popular automation frameworks are Open Source. For functional web testing, this includes Watir and Selenium, as well as the community-built bindings for various languages. Appium is likewise a leader in mobile automation, while JMeter and Taurus are popular for low-level performance testing. Testers working with desktop applications can similarly enjoy AutoIt and Winium. New frameworks continue to emerge, including the increasingly popular Cypress and Selenide.

DevOps Toolchains and New Types of System Under Test

The adoption of cutting-edge Open Source technologies has also affected tools auxiliary to testing, and has furthermore altered the nature of systems under test. Organisations large and small are today using Jenkins for CI/CD, while testers are checking automation code into Git repositories. Systems under test today might also be built, for example, on cutting-edge database and data processing tools, including Solr, Kafka, and Hadoop.

A Vibrant Testing Community

In parallel to the rise of community-built tooling, the dynamic of the QA community has also evolved in the past decade. It is in some ways similar to the development community. Community-organised events have grown, with local meetups and user groups organised worldwide. These events combine online and in-person opportunities to share skills and nurture our enthusiasm, and I take my hat off to hardworking organisations like Ministry of Testing and Vivit, as well as QA evangelists like Joe Colantonio and Jonathon Wright.

4.   Innovation and complexity are showing no signs of slowing down

These new tools are enabling faster innovation than ever before, while iterative, parallel development is adding new functionality faster than ever. There has been a parallel shift from monolithic systems to microservices, containers, and APIs. This flexibility enables developers to glue together building blocks faster than ever, but testers are then faced with more and more inputs and outputs to test.

Testers today must therefore be capable of testing more, but are facing shorter timeframes in which to identify and execute tests. 10 years ago, I was helping test teams optimise test cases from hundreds or thousands down to several dozen. Today, QA teams face millions of possible paths through system logic, each of which could be a test requiring scripts and data. Meanwhile, sprawling technologies require a range of drivers and tools, in addition to wide-reaching expertise.

Coordinated chaos or flexibility with structure?

This vast complexity has fed three related trends.

Firstly, testers are as wary as ever of vendor lock-in. If developers adopt a new technology, testers want to be able to adopt the best-of-breed technology with which to test that system.

Secondly, QA today often pick test tools based on their fit with the system under test; this is particularly necessary for automation drivers.

Third, there is a desire to combine sprawling technologies in a unified approach, while still retaining the unique value of each tool. The past decade has therefore seen the rise of Robotic Process Automation to connect disparate DevOps technologies, as well as attempts to establish a “single pane of glass” approach. These approaches work best when one person can make a change in one tool, with that information rippling accurately across all associated technologies.
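A hedged sketch of that “ripple” in Python, with hypothetical endpoints and payloads (not a real vendor API): when a requirement model changes, the change is pushed to each downstream tool's sync endpoint so the information stays aligned.

```python
# Sketch only: URLs, payload shape and endpoints are invented placeholders.
import requests

DOWNSTREAM_TOOLS = {
    "test_management": "https://testtool.example.com/api/sync",
    "ticketing":       "https://tickets.example.com/api/sync",
}

def propagate_model_change(change: dict) -> None:
    """Push a single requirement change to every connected tool."""
    for name, url in DOWNSTREAM_TOOLS.items():
        response = requests.post(url, json=change, timeout=10)
        response.raise_for_status()
        print(f"Synced change {change['id']} to {name}")

propagate_model_change({
    "id": "REQ-101",
    "summary": "Add two-factor step to login flow",
    "updated_by": "modelling-tool",
})
```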

5.   Testing is increasingly decentralised (but is still a specialism!)

Another shift in team structure reflects a move away from central “Centres of Excellence” and a siloed test team managed by a dedicated test manager.

The adoption of agile approaches has emphasised cross-functional teams, and testers today often sit alongside developers and BAs. “Shift left” approaches have furthermore demanded closer collaboration between those who design, develop and test systems. Meanwhile, test automation has increased the overlap between testing and development skills.

So, have we moved beyond the old adage that those who create systems should not also test them? Or is “testing” as a discipline now lost in the primordial soup of loose, cross-functional teams from which systems emerge?

Yes and no. The testing specialism might now be distributed across teams and organisations, but the specialist skillset remains distinct. The shift in team structure and delivery patterns reflects a drive to foster collaboration and communication, detecting bugs earlier and avoiding time-consuming rework. It is not meant to wholly collapse the distinction between system design, development and testing.

As ever-more complex systems continue to be delivered ever-faster, the need to test and facilitate testing is not going to go away. Just ask whoever performs the most testing activities in any cross-functional team. Often, that same individual was the primary tester in their previous role, and in the cross-functional team before that. They bring with them a distinct set of experience and skills, though not one confined today to a silo or centre.

6.    Compliance became everyone’s problem

No reflection on the past decade would be complete without mentioning shifts in data privacy.

Compliance was a concern for testing before the 2010s, with existing legislation like the UK Data Protection Act and the Health Insurance Portability and Accountability Act (HIPAA) in the US. However, new legislation increased the scope of data privacy, while also significantly increasing the risk of non-compliance.

Meanwhile, regulators are showing their willingness to impose ever-higher punishments. The UK’s Information Commissioner’s Office (ICO), for example, closed last decade by imposing a record fine of £183 million, smashing their previous record of £500,000.

Generally speaking, new legislation like the EU General Data Protection Regulation makes it riskier than ever to use production data in less secure test environments. It furthermore gives individuals more control over their data than ever before. The GDPR, for instance, allows EU data subjects to request the erasure of their data “without delay”, and they can also request a copy of their data in a format usable by them.

The increased rights of individuals present a logistical nightmare for existing Test Data Management techniques. Many organisations still store data across a sprawling, poorly understood estate, as well as on testers’ local machines. These organisations often do not know where data is stored, and will therefore struggle to identify, copy and delete it on demand.
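An illustrative sketch of the “right to erasure” problem (not a full TDM solution): scan the known test databases for a data subject's email and report where it appears, so it can be masked or deleted. Database files, table and column names are hypothetical; a real estate would also cover files and local copies.

```python
# Sketch only: database files and column names below are invented placeholders.
import sqlite3

TEST_DATABASES = ["orders_test.db", "crm_test.db"]
SEARCH_COLUMNS = [("customers", "email"), ("orders", "contact_email")]

def find_subject_data(subject_email: str) -> list[tuple[str, str, int]]:
    """Return (database, table.column, row count) wherever the subject appears."""
    hits = []
    for db_path in TEST_DATABASES:
        conn = sqlite3.connect(db_path)
        try:
            for table, column in SEARCH_COLUMNS:
                query = f"SELECT COUNT(*) FROM {table} WHERE {column} = ?"
                try:
                    count = conn.execute(query, (subject_email,)).fetchone()[0]
                except sqlite3.OperationalError:
                    continue  # table does not exist in this database
                if count:
                    hits.append((db_path, f"{table}.{column}", count))
        finally:
            conn.close()
    return hits

for location in find_subject_data("jane.doe@example.com"):
    print("Subject data found:", location)
```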

Concerns about data privacy are only going to increase as we rely ever-more on data-driven technologies. Regulation will therefore continue to move proactively and reactively to meet these concerns. Compliance, more than ever, will continue to be a key requirement in testing.

So, have we found the Millennium Bugs yet?

The last decade has therefore led to significant achievements in testing efficiency and innovation. This will continue into the 2020s, and the three broad trends indicated by this article will continue:

1.    The complexity of the systems under test will continue to grow. Developers will deliver systems faster than ever, drawing on new technologies to build more complex systems than ever.

2.    Test process automation will increase. In the short term, this means a focus on automated test design and maintenance, while process automation will continue to focus on automating rule-based tasks and keeping DevOps tooling in alignment.

3.    The role of “intelligence” and smart systems will increase across testing and development. This will grow as organisations begin to adopt currently emerging technologies.

The 2020s: A decade of complexity, automation, and (maybe) intelligence?

These three trends are closely related. Firstly, the speed at which systems are developed and the introduction of intelligence will add rapidly to system complexity. Software systems have long been more complex than any one person can understand. However, the ability to process “big data” rapidly and apply decision-making to data sets will soon mean that individual decisions made by systems move beyond human comprehension. Put simply, there are going to be black boxes on an increasingly microscopic scale, feeding autonomous black boxes (systems) on the macro scale.

Secondly, this massive complexity will require automated test creation, in order to test the unprecedented number of decision gates sufficiently. This automation will, thirdly, require an increasing degree of intelligence.

Intelligence will be necessary to overcome the uncertainty created by autonomous and intelligent systems. Manually reverse-engineering and testing decisions made by smart systems will simply be too slow and unreliable.

Intelligent, automated test design, by contrast, promises to continuously create and execute tests based on the latest live data. Executing these tests continuously will in turn build a better understanding of the microscopic workings of intelligent systems. Continuous experimentation will therefore reduce the number of “knowable unknowns”, and the negative risk associated with them.

The growing combination of automation and intelligence will fight fire with fire. Autonomous test design will draw rapidly on vast quantities of available data, in order to build tests for systems that intelligently process vast quantities of complex data.

Put simply, a smart approach is needed to meet the intelligence of systems under test, but this will likewise mean that testers pass off many decision-making processes to computers.

Now is the time to fix your information flow

So, what can organisations do today to prepare for these shifts in the next decade? Firstly, you must fix the information flow across technologies and teams.

Automation and AI both rely on the quality of the information fed in. Otherwise, you get a “garbage in, garbage out” scenario, producing fundamentally broken tests with unreliable results.

Organisations should therefore continue on the current path of connecting existing systems, drawing arrows between DevOps tooling. They should focus on automating rule-based processes and increasing the accurate flow of information between tooling and teams. That way, they can collect more data and metadata, creating an “AI ready” data lake. This will in turn ensure the eventual value of intelligent technologies.
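A minimal sketch of that “collect more data and metadata” step, with a JSON-lines file standing in for a real data lake: every test run appends a structured record, so later analytics or machine learning has clean, consistent input.

```python
# Sketch only: the file path and record fields are illustrative choices.
import json
import datetime
from pathlib import Path

LAKE = Path("test_runs.jsonl")  # placeholder for a real data lake location

def record_test_run(test_id: str, passed: bool, duration_s: float, build: str) -> None:
    """Append one structured test-run record to the shared store."""
    record = {
        "test_id": test_id,
        "passed": passed,
        "duration_s": duration_s,
        "build": build,
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    with LAKE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

record_test_run("login_invalid_password", passed=True, duration_s=3.4, build="2020.1.17")
```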

Thanks for reading! This list is far from complete, and I’d love to know your observations for the last decade and predictions for the next. Please feel free to drop me an email.


Here’s to an innovative and exciting decade to come!

[Image: © Copyright Chris Downer and licensed for reuse under this Creative Commons Licence.]

 
