The previous article in this series set out how a successful data migration hinges on a range of criteria:
Unfortunately, this is rarely the case at organisations today. This exacerbates the migration risk factors set out in part two of this series, and underpins the unacceptably high migration failure rates set out in part one.
Fortunately, a unified solution can mitigate these risks, establishing the upfront understanding, requirements and data needed for a successful migration.
Want to read every article in this series? Download this entire article series in Curiosity's latest eBook, How to avoid costly migration failures.
This unified approach to data migration is made up of three overlapping stages:
A unified approach for data migration success.
Let’s now look at particular tools that can support these three overlapping stages in a unified approach to data migration.
Let’s look first at tools for understanding the legacy system data.
Organisations typically rely primarily on subject matter experts (SMEs) for knowledge of legacy system data. However, for poorly documented legacy systems, there’s a fair chance that SMEs will have left the organisation, taking knowledge with them. Given the sheer complexity of legacy system data, there’s also the risk that people’s understanding will be inaccurate or incomplete.
Relying on human knowledge alone is simply not enough to mitigate the risk of migration failures caused by poor system understanding.
Human assumptions should instead be verified and validated using technology, feeding this uncovered knowledge into “living documentation” of the system under migration. This averts risk, while “future proofing” the system by maintaining up-to-date understanding for future development:
Automated data analysis verifies and helps complete human understanding regarding legacy data, feeding up-to-date “living documentation” to avoid growing technical debt.
During a migration, automated data analysis and modelling help understand what legacy data exists, along with the relationships that must be respected in the migrated system. This includes relationships within and across databases, such as primary and foreign keys.
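As an illustration of the kind of relationship discovery involved, the sketch below infers candidate foreign keys by checking value containment between columns. The table and column names are hypothetical examples, and a real analysis tool would apply far richer heuristics than this minimal subset check:

```python
# Sketch: inferring candidate foreign-key relationships by value containment.
# Table and column names here are hypothetical examples.

def candidate_foreign_keys(tables):
    """Flag column pairs where every value in one column exists in another.

    `tables` maps table name -> {column name -> list of values}.
    Returns (child_table.column, parent_table.column) candidate pairs.
    """
    candidates = []
    columns = [
        (table, col, set(values))
        for table, cols in tables.items()
        for col, values in cols.items()
    ]
    for child_t, child_c, child_vals in columns:
        for parent_t, parent_c, parent_vals in columns:
            if (child_t, child_c) == (parent_t, parent_c):
                continue
            # A foreign key's values must be a subset of the referenced key.
            if child_vals and child_vals <= parent_vals:
                candidates.append((f"{child_t}.{child_c}", f"{parent_t}.{parent_c}"))
    return candidates

legacy = {
    "orders":    {"order_id": [1, 2, 3], "customer_id": [10, 11, 10]},
    "customers": {"customer_id": [10, 11, 12]},
}
print(candidate_foreign_keys(legacy))
```

A containment check like this surfaces undocumented relationships across databases, which human reviewers can then confirm or reject.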
Automated analysis can additionally compute averages, counts and other aggregates, and can measure the skewness of data. It can further identify minimum and maximum values, while measures like “kurtosis” help to flag rare, extreme values:
Automated data analysis provides understanding of legacy system data and identifies gaps in test data.
In addition to aiding understanding of the legacy system data, this analysis also helps to identify missing data needed for testing.
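A minimal sketch of the column statistics described above, using population formulas for skewness and excess kurtosis; the sample values are invented for illustration:

```python
import math

def profile_column(values):
    """Summary statistics for a numeric column, as a plain dict.

    Skewness highlights asymmetric distributions; high kurtosis flags
    heavy tails, i.e. rare, extreme values.
    """
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = math.sqrt(var)
    return {
        "count": n,
        "min": min(values),
        "max": max(values),
        "mean": mean,
        # Population skewness and excess kurtosis (kurtosis minus 3).
        "skewness": sum((v - mean) ** 3 for v in values) / (n * std ** 3),
        "kurtosis": sum((v - mean) ** 4 for v in values) / (n * var ** 2) - 3,
    }

stats = profile_column([1, 2, 2, 3, 3, 3, 50])  # 50 is a rare outlier
print(stats)
```

Run over every column, statistics like these give a quick, objective picture of legacy data — and a strongly positive skew like the one above is exactly the kind of signal that a column contains rare values needing dedicated test coverage.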
Automated data comparisons between environments provide an additional tool for understanding data, which can again be used to identify data needed for rigorous testing. For example, you might compare data density in production and development environments to identify gaps, or might compare data before and after you’ve transacted against it.
Running “fuzzy comparisons” of the legacy and migrated system data further provides a quick approach to testing, looking for key differences between data before and after a migration:
Automated data comparisons aid with understanding of the legacy system data and offer a rapid technique for testing data migrations.
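The environment comparisons above might be sketched as follows, using Python’s standard-library `difflib` for the “fuzzy” part; the status values and the 0.9 similarity threshold are illustrative assumptions:

```python
# Sketch: comparing data between environments to find coverage gaps,
# plus a "fuzzy" before/after comparison for migration testing.
from difflib import SequenceMatcher

def coverage_gaps(production, development):
    """Distinct values present in production but missing in development."""
    return sorted(set(production) - set(development))

def fuzzy_equal(before, after, threshold=0.9):
    """Loose before/after comparison of a migrated text value."""
    return SequenceMatcher(None, before, after).ratio() >= threshold

prod = ["GOLD", "SILVER", "BRONZE", "PLATINUM"]
dev  = ["GOLD", "SILVER"]
print(coverage_gaps(prod, dev))              # values untested in development
print(fuzzy_equal("ACME Ltd.", "ACME Ltd"))  # tolerates cosmetic differences
```

The same gap check applies before and after transacting against a data set, revealing which values a test run actually exercised.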
Data profiling offers specialised analysis that identifies potentially sensitive information. It searches for personally and commercially sensitive information in data, for instance searching column names, using regular expressions, and matching data against seed lists.
Data profiling is valuable for mitigating compliance risks during a migration. It might, for example, inform test data masking and generation. This reduces the spread of sensitive information to less-secure non-production environments, while supporting the principles of data minimisation and purpose limitation.
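A toy sketch of the three profiling techniques named above — column-name matching, regular expressions, and seed lists. The patterns, seed list and sample data are deliberately tiny, hypothetical stand-ins for the much larger rule sets a real profiler would ship with:

```python
import re

# Hypothetical rule sets; a real profiler would use far larger ones.
SENSITIVE_COLUMN_NAMES = {"email", "ssn", "phone", "surname"}
SEED_SURNAMES = {"smith", "jones", "murphy"}
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def flag_sensitive_columns(columns):
    """`columns` maps column name -> sample values; returns flagged columns
    together with the reason(s) each one was flagged."""
    flagged = {}
    for name, samples in columns.items():
        reasons = []
        if name.lower() in SENSITIVE_COLUMN_NAMES:
            reasons.append("column name")
        if any(EMAIL_PATTERN.fullmatch(str(v)) for v in samples):
            reasons.append("regex match")
        if any(str(v).lower() in SEED_SURNAMES for v in samples):
            reasons.append("seed list")
        if reasons:
            flagged[name] = reasons
    return flagged

sample = {
    "contact": ["ann@example.com", "bob@example.com"],
    "surname": ["Smith", "Nguyen"],
    "order_total": [19.99, 5.00],
}
print(flag_sensitive_columns(sample))
```

The flagged columns then become the input to masking and synthetic generation, keeping sensitive values out of non-production environments.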
Understanding of legacy and migrated systems must be stored and updated iteratively. Otherwise, you risk growing technical debt, and quality issues stemming from incomplete or ambiguous system understanding.
Formal modelling offers a requirements gathering technique that lends itself well to complex data structures, while often also being easily maintainable as “living documentation”.
Visual models are well-suited to map out complex data journeys, as they mirror data equivalence classes in a concise system picture. Flowcharts, for instance, map out a system’s cause and effect logic into a series of “if this, then this” statements. This shows complex system structures clearly and concisely, allowing you to document and understand overlapping data journeys at a glance:
Formal flowcharts map complex data structures clearly and concisely, providing an understanding of the “data journeys” under migration.
Logical modelling is overall a better fit for data-driven systems than inherently ambiguous written requirements.
Flowcharts offer the additional benefit of being familiar to many business analysts and product owners, enabling close collaboration between system testers, developers and designers. The formal nature and logical precision of flowcharts further enables automated test and data generation, enabling iterative testing as the models change throughout a migration project.
As discussed in part three of this series, testing late and with incomplete test data are key causes of data migration failures. Auto-generating test cases, scripts and data from completed requirements models mitigates this risk, iteratively generating complete and compliant data for migration testing.
Flowchart modelling of the data under migration enables the application of automated test and data generation algorithms. The logically-precise flowcharts act as a directed graph, to which automated graph analysis can be applied. These algorithms act like a car GPS, identifying possible routes through a city map:
Formal flowchart models enable automated test generation, identifying positive and negative combinations for rigorous data migration testing.
Each path through the modelled data is equivalent to a data journey and a test case. The automated test generation algorithms can generate an “exhaustive” test suite that covers every path through the model. In most cases, however, optimisation algorithms create the smallest set of paths required to cover each distinct logical combination, or to satisfy a given risk profile. This reduces the total test volume, while generating a rigorous set of data for migration testing.
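The “GPS” analogy above can be sketched as a depth-first enumeration of paths through a directed graph. The flowchart below is a hypothetical validate/transform/reject flow; the enumeration shown is the “exhaustive” suite, which optimisation algorithms would then prune to a minimal covering set:

```python
# Sketch: a flowchart treated as a directed graph, with every route through
# it enumerated. Each route is one data journey, i.e. one test case.
# Node names model a hypothetical migration flow.

def all_paths(graph, start, end, path=None):
    """Depth-first enumeration of every path from start to end."""
    path = (path or []) + [start]
    if start == end:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        if nxt not in path:  # flowcharts may loop; visit each node once
            paths.extend(all_paths(graph, nxt, end, path))
    return paths

flowchart = {
    "start":     ["validate"],
    "validate":  ["transform", "reject"],  # valid vs invalid record
    "transform": ["load"],
    "reject":    ["load"],                 # rejected rows are logged, then done
    "load":      [],
}
for route in all_paths(flowchart, "start", "load"):
    print(" -> ".join(route))
```

Here the two routes correspond to one positive and one negative data journey; on a realistic model, coverage optimisers select the smallest subset of such routes that still hits every distinct logical combination.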
The automated test generation creates the positive and negative data scenarios needed to avoid costly bugs and production outages post-migration. Using Curiosity’s Test Modeller, the creation of consistent data journeys can be driven by data generation functions defined at the model level.
Alternatively, the flowcharts can embed reusable Test Data Automation jobs. These jobs find, make and mask data as tests are auto-generated, ensuring that every test scenario is fulfilled by complete and compliant test data:
Integrating reusable Test Data Automation jobs with automated test generation ensures that every migration test comes equipped with complete and compliant data.
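The idea of resolving each generated path into concrete data might look like the following sketch. The generator functions and field names are hypothetical stand-ins to show the pattern, not Test Modeller’s or Test Data Automation’s actual API:

```python
import random

# Sketch: hypothetical data-generation functions attached to flowchart
# steps, so every generated path resolves into a concrete test-data row.
GENERATORS = {
    "valid_customer":   lambda: {"customer_id": random.randint(1000, 9999)},
    "invalid_customer": lambda: {"customer_id": None},
    "order":            lambda: {"amount": round(random.uniform(1, 500), 2)},
}

def fulfil_path(path):
    """Merge generated fields from each step into one test-data row."""
    row = {}
    for step in path:
        row.update(GENERATORS[step]())
    return row

positive = fulfil_path(["valid_customer", "order"])
negative = fulfil_path(["invalid_customer", "order"])
print(positive, negative)
```

Because the generators run as paths are produced, every scenario — positive and negative alike — arrives with data already attached, rather than waiting on a manual data-provisioning step.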
Rigorous testing with this approach can start far earlier during migration projects, because test and data generation is rapid and rooted in the requirements. “Shift left” testing can therefore occur early, replacing the late-stage testing that leaves no time to find and fix quality issues during a migration:
“Shift left” migration testing is rooted directly in requirements and can begin far earlier during a migration project.
This unified approach to data migration avoids the four common causes of migration failures identified in part three of this series. To conclude this series, let’s summarise how the proposed solution avoids these common data migration pitfalls:
| Data migration challenge | Solution |
| --- | --- |
| You don’t understand the legacy system data | Automated data analysis provides understanding of data contents and structure. This understanding is fed into flowcharts that provide up-to-date “living documentation”. |
| You don’t understand the new system functionality | Intuitive flowcharts clearly map the new system requirements, providing concise documentation of complex data transforms. |
| You don’t have varied data to test the new system | The requirements models drive automated test and data generation, creating the positive and negative scenarios needed for rigorous migration testing. The data is compliant with data privacy regulations, as it has either been generated from scratch, or masked as it is sourced on-the-fly by Test Data Automation. |
| You tested too late | Test generation is rooted in requirements models, and can occur early and iteratively throughout the migration project. |
The value of this approach does not, however, start and end with the migration project. By implementing this approach for a migration project, you are further equipping your teams with the requirements, tests and data needed for future development. This helps “future proof” the migrated system. It provides the tools needed for ongoing innovation, as well as the requirements and data needed to eventually migrate away from the system:
The models used to drive rigorous migration testing are easy-to-maintain, enabling in-sprint test generation for the migrated system.
The unified approach set out in this article series accordingly aims to ensure a successful migration, while developing the tools needed for ongoing development post-migration. To learn more about these techniques for migration success and rapid development, book a meeting with a Curiosity expert today.
Read every article in this series in Curiosity’s latest eBook!