Thomas Pryce 07 November 2019 12:15:55 GMT
Last week, we published a blog making the case for the next generation in TDM “best practice”. We considered why the logistical approach of “mask, subset, clone” provisioning cannot provide the data parallel test teams need, when they need it.
This week’s blog considers the benefit of “Test Data Automation” from the perspective of one of the core TDM requirements: test data compliance. In particular, this blog sets out the repercussions of the EU General Data Protection Regulation (GDPR) for testing, and how a new TDM paradigm can ensure compliance while also maximising testing speed and quality.
The proposal for the GDPR was made as long ago as 2012, and the Regulation was adopted in 2016. Throughout this time, two broad responses to the tightening legislation have been common among testers: scepticism that the Regulation would ever be strictly enforced, and a gamble that any enforcement would not reach one's own organisation.
Fast forward five years and the implementation period is now over. The GDPR is now in force, and eye-watering fines cast doubt on the responses of both the sceptic and gambler. The steep punishments levied recently are a reminder of the real threat of data breaches, but also a serious statement of intent regarding the enforcement of the GDPR.
In July, for example, the UK's Information Commissioner's Office (ICO) announced its intention to fine British Airways a record £183 million, relating to the harvesting of 500,000 customers' details by attackers. That sum reflects roughly 1.5% of BA's annual worldwide turnover for the previous year, smashing the ICO's previous record fine of £500,000. National enforcement agencies appear willing to impose the full force of the GDPR's deterrents.
The announcement of an intended £99.2 million fine for Marriott International came a day later, relating to the exposure of 339 million guests' records. Around 30 million of those records belong to Europeans, yet Marriott is a US company. This dispels further scepticism regarding the ability of national agencies to enforce the GDPR's global scope.
Authorities in each instance point to a lack of sufficient security measures, and also to the responsibility organisations of every size have for the data they process. So, how does this relate to testing practices?
From a QA perspective, one glaring practice screams security risk: the use of production data in test and development environments. This has long been warned against from a data privacy perspective, yet 65% of organisations still use potentially sensitive production data in testing.[i]
Production data does appear an obvious place to source production-like data for testing. The issue is that test and development environments are necessarily less secure than production, so that any sensitive data stored in them increases the risk of a data breach.
Then there are the rights of EU data subjects, which have been strengthened by the GDPR. These rights apply regardless of whether a data breach has occurred, and present further challenges for current QA practices.
The Rights to Data Erasure and Data Portability are good examples. An EU data subject can request that all their data is erased “without delay,” and can also ask for a complete copy of their data stored by an organisation.
This presents a logistical nightmare for current Test Data Management (TDM) practices. Many organisations store data across test environments, in unmanaged formats like spreadsheets on testers’ local machines. Such organisations struggle to know where certain data is kept, and will therefore struggle to identify, copy and delete it on demand.
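To make the logistical challenge concrete, consider what answering a single erasure or portability request involves when test data is scattered across files. The following is a minimal sketch, not a product feature: it assumes a hypothetical directory of CSV exports and a subject identified by email address, and simply reports every location where that subject's data appears so it can then be copied or deleted.

```python
import csv
from pathlib import Path

def find_subject_records(root, subject_email):
    """Scan every CSV file under `root` for rows mentioning a data subject.

    Returns a list of (file path, row number, row) hits, so the records
    can be copied (portability) or deleted (erasure) on demand.
    """
    hits = []
    for path in Path(root).rglob("*.csv"):
        with open(path, newline="", encoding="utf-8") as f:
            for line_no, row in enumerate(csv.reader(f), start=1):
                # Case-insensitive match on any cell in the row
                if any(subject_email == cell.strip().lower() for cell in row):
                    hits.append((str(path), line_no, row))
    return hits
```

Even this toy version only works because the data sits in a known directory in a known format; spreadsheets on testers' local machines offer no such starting point, which is precisely the problem.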
The good news is that using production data in test environments is frequently avoidable. Synthetic test data generation is today capable of generating realistic test data for even complex systems, rapidly mirroring the data dependencies found in production.
Quality synthetic test data is built from a model of the metadata found in production. It reflects even complex patterns in data like temporal trends, all while remaining wholly fictitious. It therefore supports accurate and stable test execution, without the risk of exposing sensitive information.
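The idea of generating data from a model of production metadata can be sketched in a few lines. The model below is hypothetical: it captures only column names, types, and observed ranges, the kind of profile that can be taken from production without copying a single real value, and every generated row is wholly fictitious.

```python
import random
from datetime import date, timedelta

# Hypothetical metadata model, as might be profiled from a production
# customers table: column names, types and observed value ranges only.
CUSTOMER_MODEL = {
    "customer_id": ("sequence", 1),
    "country":     ("choice", ["DE", "FR", "GB", "IE"]),
    "signup_date": ("date_range", date(2018, 1, 1), date(2019, 10, 31)),
    "balance":     ("float_range", 0.0, 5000.0),
}

def generate_rows(model, n, seed=42):
    """Generate n wholly fictitious rows matching the model's shape."""
    rng = random.Random(seed)  # seeded for repeatable test runs
    rows = []
    for i in range(n):
        row = {}
        for col, spec in model.items():
            kind = spec[0]
            if kind == "sequence":
                row[col] = spec[1] + i
            elif kind == "choice":
                row[col] = rng.choice(spec[1])
            elif kind == "date_range":
                span = (spec[2] - spec[1]).days
                row[col] = spec[1] + timedelta(days=rng.randrange(span + 1))
            elif kind == "float_range":
                row[col] = round(rng.uniform(spec[1], spec[2]), 2)
        rows.append(row)
    return rows
```

A production-grade generator layers far more onto this skeleton, such as referential integrity across tables and the temporal trends mentioned above, but the principle is the same: the data is derived from metadata, never from the sensitive values themselves.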
The benefit of increased security is furthermore coupled with a significant quality gain for QA. Synthetic data can be generated for the numerous data combinations not found in existing production data, including the negative scenarios and outliers needed for complete test coverage.
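As a rough illustration of that coverage gain, the sketch below enumerates every combination of some hypothetical input domains, deliberately including the boundary and invalid values that production data rarely contains. This is a brute-force example for clarity; real test design tools prune such combinations to the smallest covering set.

```python
from itertools import product

# Hypothetical input domains, each seeded with the negative and outlier
# values rarely found in production data.
DOMAINS = {
    "age":     [17, 18, 65, 120, -1],   # boundaries plus an invalid age
    "country": ["GB", "DE", "ZZ"],      # "ZZ" is an invalid country code
    "consent": [True, False],
}

def all_combinations(domains):
    """Enumerate every combination of the domain values as test data rows."""
    keys = list(domains)
    return [dict(zip(keys, values))
            for values in product(*(domains[k] for k in keys))]
```

Even these three small domains yield 30 distinct test rows, many of them negative scenarios that would never be sampled from a copy of production.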
Improving data security in testing is therefore not just a logistical issue: it can drive up test coverage, improving software quality and reducing defect remediation effort.
Organisations will not be able to switch to using wholly synthetic test data overnight. Nonetheless, an effective TDM strategy should aim to replace production data gradually with fictitious test data. This “hybrid approach” continues working with production data where needed, in time replacing all test data sources with fictitious, coverage-enhanced equivalents. Testers and data protection officers (DPOs) can then enjoy peace of mind, all while improving application quality.
Thanks for reading! Please feel free to share your thoughts using my email address below, and look out for next week’s blog on creating high-coverage test data sets. To learn more about how test data compliance can also maximise testing rigour, please join Curiosity’s Huw Price in the DevOps Bunker. On the webinar, Huw will consider why “The Time for a New Test Data Paradigm is Now”.
[i] Redgate (2019), State of Database DevOps, 23. Retrieved from http://assets.red-gate.com/solutions/database-devops/state-of-database-devops-2019.pdf on 19 June 2019.