
9 min read

Putting Test Data Coverage to The Test: Model-Based Data vs. ChatGPT


We've been getting an interesting question lately: “Why not use ChatGPT to create test data?” On the surface, it sounds like a cool idea. Just type in what kind of data you want and hit “Enter”! Magically, you've got a response packed with your generated data.

The linguistic capabilities of ChatGPT, developed by OpenAI, are undeniably impressive. The model can generate rich, diverse, and naturalistic language data, serving as a plentiful source of test inputs. In this blog, we look at the effectiveness and drawbacks of using ChatGPT for data generation.

What is ChatGPT, and How does it work?

ChatGPT by OpenAI, along with other large language models (LLMs), has taken the world by storm. If you haven’t heard of it by now, you’ve probably been living under a rock for the past 9 months. LLMs and generative AI will drive efficiency in every industry. The New York Times outlined several fantastic ways that people are already leveraging these technologies.

ChatGPT is so good at formulating responses that it’s sometimes hard to remember it’s not a real human you are chatting with. Few people using ChatGPT know its inner workings; put simply, it uses neural networks which simulate the neurons in our brains.

Trained on huge datasets (at a rumoured cost of over $100 million), the ChatGPT generation method relies on a principle of randomness. While it provides varied content, this also injects a degree of unpredictability into ChatGPT’s output. Essentially, it “guesses” the next word in a sequence, based on a combination of learned patterns and probabilistic calculations.

ChatGPT is trained to “guess” the next word in a sequence, introducing randomness and unpredictability.

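As a toy illustration of that sampling step (our own sketch with made-up probabilities, not OpenAI's actual implementation), picking each next word by weighted chance looks roughly like this:

```python
import random

# Hypothetical probability distribution over candidate next words.
# A real LLM computes these probabilities with a neural network; here they are hard-coded.
PROBS = {"approved": 0.6, "rejected": 0.3, "pending": 0.1}

def next_token(rng: random.Random) -> str:
    """Pick the next word at random, weighted by its probability."""
    r = rng.random()
    cumulative = 0.0
    for token, p in PROBS.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # guard against floating-point rounding at the tail

# Different seeds can yield different continuations for the same prompt,
# which is exactly the unpredictability discussed above.
samples = [next_token(random.Random(seed)) for seed in range(10)]
```

Run the sampler twice with different seeds and you may get "approved" one time and "rejected" the next; nothing in the mechanism guarantees that any particular outcome ever appears.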

If you are considering possible applications of ChatGPT, there is a critical aspect to understand: While ChatGPT is highly efficient at producing human-like text, it does not truly “understand” the content it generates in a way that human authors do.

ChatGPT does not possess a world model, knowledge about specifics, or any form of consciousness. It operates solely based on patterns it has recognized in the data it was trained on, which is largely internet text.

Although this vast range of information contributes to the model's ability to generate diverse text, it also means the model may reproduce biases present in the data it was trained on, presenting challenges for its use in certain scenarios.

The model also lacks the knowledge of your organisational context and domain that is required to test effectively. Fed primarily on internet text, it has no way of “knowing” private information about your organisation’s systems, processes, rules, standards and implementations:

Foundational Large Language Models (LLMs) can reproduce biases in their training data, and have not been fed data relevant to “understanding” your organisation.


In the next sections, we will dive deeper into the randomness of ChatGPT's generation method and its implications, particularly regarding the concept of test data coverage.

The Allure of ChatGPT for Synthetic Test Data Generation

Let’s look at some example prompts and responses for generating data from ChatGPT. We’ll be using DataGPT, our wrapper around GPT-4 for generating test data in a spreadsheet-style format.

 

Prompt 1

"Given the business rules for credit card applications where to be approved a person must be over 18, have a credit score of at least 800, and have a good credit history, along with earning over $50,000 generate scenarios covering all possible outcomes including approval and rejection scenarios. Output the following columns, age, credit score, credit history, salary, application state.”

Response:

| credit_score | credit_history | application_state | salary | age |
|---|---|---|---|---|
| 850 | Good | Approval | 60000 | 20 |
| 780 | Fair | Rejection | 40000 | 25 |
| 820 | Good | Approval | 70000 | 35 |
| 750 | Fair | Rejection | 45000 | 19 |
| 810 | Good | Approval | 55000 | 28 |
| 790 | Fair | Rejection | 65000 | 30 |
| 830 | Good | Approval | 52000 | 23 |
| 770 | Fair | Rejection | 48000 | 27 |
| 840 | Good | Approval | 60000 | 32 |
| 760 | Fair | Rejection | 42000 | 22 |
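As a quick sanity check (a throwaway script we wrote for this comparison, not a DataGPT feature), we can replay the prompt's stated rules against the returned rows. Every row is individually consistent with the rules; the problem, as the coverage analysis later in this article shows, is what the rows never exercise:

```python
def expected_state(age, score, history, salary):
    # The prompt's rules: over 18, score of at least 800,
    # good credit history, and earning over $50,000.
    approved = age > 18 and score >= 800 and history == "Good" and salary > 50000
    return "Approval" if approved else "Rejection"

# (age, credit_score, credit_history, salary, application_state) from the response
rows = [
    (20, 850, "Good", 60000, "Approval"),
    (25, 780, "Fair", 40000, "Rejection"),
    (35, 820, "Good", 70000, "Approval"),
    (19, 750, "Fair", 45000, "Rejection"),
    (28, 810, "Good", 55000, "Approval"),
    (30, 790, "Fair", 65000, "Rejection"),
    (23, 830, "Good", 52000, "Approval"),
    (27, 770, "Fair", 48000, "Rejection"),
    (32, 840, "Good", 60000, "Approval"),
    (22, 760, "Fair", 42000, "Rejection"),
]
violations = [r for r in rows if expected_state(*r[:4]) != r[4]]
```

Here `violations` comes back empty, yet no row tests, say, an applicant with a high score and good history who is rejected purely for a low salary.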

 

Prompt 2

"In the context of an online banking system where a user can only transfer an amount up to their current balance, generate scenarios that cover both successful transactions within the limit and unsuccessful transactions that exceed the balance. Output the following columns: sender balance, receiver balance, amount to send, transaction approved or rejected, new sender balance, new receiver balance."

Response:

| new_receiver_balance | receiver_balance | new_sender_balance | amount_to_send | sender_balance | transaction_approved |
|---|---|---|---|---|---|
| 700 | 500 | 800 | 200 | 1000 | approved |
| 4000 | 1000 | 2000 | 3000 | 5000 | approved |
| 600 | 600 | 250 | 300 | 250 | rejected |
| 900 | 400 | 1500 | 500 | 2000 | approved |
| 2000 | 2000 | 800 | 1200 | 800 | rejected |
| 28000 | 20000 | 2000 | 8000 | 10000 | approved |
| 1500 | 1500 | 4000 | 500 | 4000 | rejected |
| 110000 | 50000 | 90000 | 60000 | 150000 | approved |
| 3000 | 3000 | 1200 | 1500 | 1200 | rejected |
| 800 | 800 | 500 | 600 | 500 | rejected |

 

Prompt 3

"Given the business rules for an e-commerce platform's product return policy, where a customer can return a product within 30 days of purchase provided it's in the original packaging, is not damaged, and comes with a receipt, and the customer is then issued a refund within 14 days of the return approval. However, if the product falls into the category of 'non-returnable goods' (like perishable items, personalized items, digital downloads, etc.), no return or refund is possible. Furthermore, if the customer misses the 30-day return window but is within 60 days from purchase, they can still return the product, but they will only receive store credit instead of a refund. Generate scenarios that cover all possible outcomes, including successful returns and refunds, rejected returns, issuance of store credit, and non-returnable goods. Output the following columns, Purchase Date, Product Category, Product Condition, Receipt Included, Return Approval Status, Refund Issuance Date, Store Credit Issuance Date"

Response:

| Refund Issuance Date | Purchase Date | Product Condition | Return Approval Status | Product Category | Receipt Included | Store Credit Issuance Date |
|---|---|---|---|---|---|---|
| 2022-01-15 | 2022-01-01 | New | Approved | Electronics | Yes | |
| | 2022-02-01 | Used | Rejected | Clothing | No | |
| | 2022-03-01 | New | Rejected | Perishable items | Yes | |
| | 2022-04-01 | New | Rejected | Digital Downloads | Yes | |
| 2022-05-15 | 2022-05-01 | Used | Approved | Electronics | Yes | |
| 2022-06-15 | 2022-06-01 | New | Approved | Clothing | Yes | |
| | 2022-07-01 | Used | Rejected | Perishable items | Yes | |
| | 2022-08-01 | Used | Rejected | Digital Downloads | No | |
| 2022-09-15 | 2022-09-01 | New | Approved | Electronics | Yes | |
| | 2022-10-01 | Used | Approved | Clothing | Yes | 2022-10-15 |

How can you ensure test data coverage?

On the surface, the ability to generate such diverse and varied data scenarios with just a few prompts appears incredibly empowering. It opens a realm of possibilities where large volumes of test data can be created quickly and without much manual effort. However, one crucial question remains: How do we ensure that the data generated covers all the rules we've identified?

This is where the concept of data coverage comes into play. Data coverage, in a testing context, refers to the variety of data covered by your test scenarios. It measures how comprehensively your testing includes different data scenarios: edge cases, typical cases, and exceptions.

Ideally, you want high data coverage, meaning your tests include a wide variety of data scenarios. This ensures that your system can handle a broad range of inputs. Take, for example, a coverage map: a visualisation technique, similar to a heatmap, for viewing the distribution of available data.

In this example, for Prompt 3 above, we have mapped the Approved/Rejected status of different product categories. You’ll see a missing data scenario for Rejected Electronics:


Likewise, if we look at the credit history vs. application state for Prompt 1 above, we’ll see more data gaps. The data outputted by ChatGPT has a low level of coverage. This is evidently lower as we look across more dimensions:

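This kind of gap-finding can be sketched in a few lines (the "Excellent" and "Poor" categories here are our assumption about the domain, not values ChatGPT was told about):

```python
from itertools import product

# Dimensions of the coverage map for Prompt 1
histories = ["Excellent", "Good", "Fair", "Poor"]
states = ["Approval", "Rejection"]

# (credit_history, application_state) pairs actually present in the ChatGPT output:
# every row was either Good/Approval or Fair/Rejection
generated = {("Good", "Approval"), ("Fair", "Rejection")}

# Every combination the data never touches
missing = [combo for combo in product(histories, states) if combo not in generated]
```

Six of the eight combinations are missing, including the two the rules make most interesting: a "Good" history that is nonetheless rejected, and a "Fair" history that is approved.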

Here's the hitch: While models like ChatGPT are adept at generating data, they generate data randomly. The core method of operation for models like these is predicting the next word in a sequence based on probabilistic calculations. As discussed, this introduces a degree of unpredictability and randomness into the generated data.

While you might get varied data, there’s therefore no guarantee that the data will cover all possible scenarios defined by your business rules. High data diversity doesn't necessarily mean high data coverage.

Poor data coverage undermines our ability to test our systems rigorously. After all, we can only test as well as the data available to us. If we require a specific scenario (e.g. approving an applicant with an excellent credit history), we can’t test it without first having the data created. This is often why production data is not a good data source for testing, and why most organisations turn to synthetic data generation.

So, while using AI models like ChatGPT can be an attractive proposition for test data generation, it's crucial to consider the implications of its randomness.

Model-Based Testing: An Alternative Approach

In contrast to the randomness of test data generation with LLM-based AI models (like ChatGPT), there's a more systematic approach that can ensure comprehensive data coverage: Model-Based Testing (MBT) for synthetic data generation.

Model-Based Testing is an approach to software testing in which test cases are derived from a model that describes the functional aspects of the system under test. In the context of test data generation, the “model” is a formal representation of business rules.

So, how does it work? You start by building a model of your system's behaviour, which includes the various business rules and the different scenarios that can occur based on these rules. This model serves as a blueprint for generating test cases. As the test cases are directly derived from the model, they inherently cover all the scenarios and business rules represented in the model.

The major advantage of MBT is that it allows for systematic coverage of your business rules. Since the model is a comprehensive representation of the system's behaviour, generating test cases from this model ensures that all possible scenarios defined by your business rules are included in your testing.

MBT also supports automatic test generation. Using appropriate tools, you can automate the process of generating test cases from the model, which can save significant time and effort compared to manual test case design:

A flowchart provides a clear representation of the “LBW” rule in cricket, auto-generating paths (“tests”) through the logic.

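The core idea can be sketched in a few lines (a simplified credit-decision model of our own invention, not Modeller's actual path-generation algorithm): represent the flowchart as a graph and enumerate every route from start to end.

```python
# A simplified decision model as an adjacency list (our illustration)
MODEL = {
    "start": ["age >= 18", "age < 18"],
    "age >= 18": ["score >= 800", "score < 800"],
    "age < 18": ["reject"],
    "score >= 800": ["approve"],
    "score < 800": ["reject"],
    "approve": [],
    "reject": [],
}

def all_paths(node="start", path=()):
    """Exhaustively yield every path from `node` to an end node."""
    path = path + (node,)
    if not MODEL[node]:  # no outgoing edges: this path is complete
        yield path
    for child in MODEL[node]:
        yield from all_paths(child, path)

paths = list(all_paths())
```

Each yielded path is one scenario, and because the paths are derived mechanically from the model, every branch of the business logic appears in at least one of them.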

Let’s look at the same business rules contained in our prompts earlier, representing them as models to generate data.

1. Credit Card Application

Let's look at a model for the credit card application process (Prompt 1 above). It’s much simpler to comprehend a visual model than to parse a text-based description of the same business logic:


This is one of the key benefits of visualisation: it offers a more immediate, understandable depiction of the process. This has immense collaborative value, as subject matter experts can quickly assess the model for completeness, as opposed to the opaque nature of large language models (LLMs).

Within our model, we’ve recognized various equivalence classes and linked them to relevant decision points. Take, for example, the age criterion: we’ve identified several categories like over 18, exactly 18, exactly 17, and under 18. This modelling process results in the generation of a rich dataset that can be used for testing.
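A data generator over those classes might look like this sketch (the class names and value ranges are our illustration of the idea, not Modeller's internal representation):

```python
import random

# One value generator per equivalence class of the age criterion
AGE_CLASSES = {
    "over_18": lambda rng: rng.randint(19, 99),
    "exactly_18": lambda rng: 18,
    "exactly_17": lambda rng: 17,
    "under_18": lambda rng: rng.randint(0, 16),
}

def sample_age(cls: str, seed: int = 0) -> int:
    """Draw a representative age from the named equivalence class."""
    return AGE_CLASSES[cls](random.Random(seed))
```

The boundary classes (exactly 18, exactly 17) always produce the boundary value, while the open-ended classes vary within their range, so every generated dataset still probes the edges of the rule.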

The model acts as a tangible representation of our business logic. Depending on the path taken through the model, a credit card application ends up either accepted or rejected. This methodical, visual approach provides us with a greater level of control and predictability than the unpredictable nature of LLMs.

Once we’ve established a model, we can use our path generation algorithm to create potential scenarios from it:


This is where the concept of coverage comes into play. Depending on your needs, you can choose from various levels of coverage – from exhaustive, which creates every possible route through the model, to simpler options like node coverage, which ensures that every decision point in the model is visited.
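To see why node coverage needs far fewer paths than exhaustive coverage, here is a greedy reduction sketch (one common way to approximate a minimal covering set; not necessarily the algorithm Modeller uses):

```python
def node_cover(paths):
    """Greedily pick paths until every node appears in at least one chosen path."""
    remaining = set().union(*(set(p) for p in paths))
    chosen = []
    while remaining:
        # Pick the path covering the most not-yet-visited nodes
        best = max(paths, key=lambda p: len(remaining & set(p)))
        chosen.append(best)
        remaining -= set(best)
    return chosen

# Four exhaustive paths through a hypothetical two-decision model
exhaustive = [
    ("start", "A", "X", "end"),
    ("start", "A", "Y", "end"),
    ("start", "B", "X", "end"),
    ("start", "B", "Y", "end"),
]
subset = node_cover(exhaustive)
```

Exhaustive coverage here needs all four paths, while node coverage is satisfied by just two, and the gap widens rapidly as models grow.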

One of the unique aspects of our algorithm is that it gives you control over how these paths are generated. For instance, with tagging, you can specify certain scenarios to be expanded upon or focused on. This flexibility allows you to be highly specific about the scenarios you want your test data to cover. It provides a level of precision and control that simply can’t be achieved with random data generation methods:

In the example above, we’ve opted for a node coverage level. This choice has resulted in 11 unique paths, each representing a different data scenario that aligns with the embedded business rules.


These generated paths can easily be exported into various formats such as Excel, message formats, or even directly into an external database. This flexibility not only supports a wide range of testing environments, but also simplifies the process of integrating the generated test data into your existing workflow.
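For instance, writing generated rows out as a CSV takes only a few lines (a minimal sketch using Python's standard library; Modeller's actual exporters target many more formats):

```python
import csv
import io

# Two generated data scenarios (illustrative values)
rows = [
    {"age": 20, "credit_score": 850, "credit_history": "Good",
     "salary": 60000, "application_state": "Approval"},
    {"age": 17, "credit_score": 810, "credit_history": "Good",
     "salary": 60000, "application_state": "Rejection"},
]

buffer = io.StringIO()  # swap for open("data.csv", "w", newline="") to write a file
writer = csv.DictWriter(buffer, fieldnames=list(rows[0]))
writer.writeheader()
writer.writerows(rows)
csv_text = buffer.getvalue()
```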

2. Money Transfer

Let’s explore another illustrative example: the money transfer model (Prompt 2 above):


This model represents the various scenarios involved in money transfers, splitting them into two main pathways: successful and unsuccessful transactions. It’s crucial to note that a successful transfer depends on the sender having sufficient funds to cover the transfer.

This model doesn't merely outline potential outcomes; it also specifies the conditions that lead to those outcomes. It's not just about success or failure of transfers, but the specific situations that lead to those results. This leads to a wealth of test data, reflective of real-world scenarios, that we can use to ensure our system behaves as expected under different conditions.
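The core rule can be expressed as a small function (our sketch of the business rule, which also reproduces the consistent rows in the ChatGPT table above):

```python
def transfer(sender_balance, receiver_balance, amount):
    """Approve only when the sender can cover the amount; otherwise leave balances unchanged."""
    if amount <= sender_balance:
        return "approved", sender_balance - amount, receiver_balance + amount
    return "rejected", sender_balance, receiver_balance
```

For example, `transfer(1000, 500, 200)` yields an approved transaction with new balances of 800 and 700, matching the first row of the earlier table, while `transfer(250, 600, 300)` is rejected and leaves both balances untouched.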

3. E-Commerce Returns

Let's dig into a more intricate example: the e-commerce returns model (Prompt 3 above):


This model governs the nuanced set of conditions necessary for product returns in an e-commerce environment. The return process depends on several conditions: the item's returnability, the return request being made within 30 days of purchase, the item being in its original packaging, the product's undamaged state, and the inclusion of the purchase receipt.

This model, with its multi-layered business rules, provides a rich array of test data that encapsulates various return scenarios. By simulating all possible outcomes - from successful returns to failed attempts - we can rigorously test the systems handling these processes, ensuring their readiness for any potential real-world scenario.
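The layered rules can be captured in a sketch like this (our reading of the policy in Prompt 3, with the outcome labels as our own naming):

```python
from datetime import date

NON_RETURNABLE = {"Perishable items", "Personalized items", "Digital Downloads"}

def return_outcome(purchase, request, category, original_packaging, undamaged, receipt):
    """Apply the e-commerce return policy: refund within 30 days,
    store credit within 60 days, no return for non-returnable goods."""
    if category in NON_RETURNABLE:
        return "no return"
    days = (request - purchase).days
    if days <= 30 and original_packaging and undamaged and receipt:
        return "refund"
    if days <= 60 and original_packaging and undamaged and receipt:
        return "store credit"
    return "rejected"
```

A request 10 days after purchase earns a refund; the same request at 45 days drops to store credit; and a perishable item is turned away regardless of timing or condition.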

Building an enterprise test data strategy

When it comes to devising an effective test data strategy, it's important to understand that using tools like ChatGPT for test data generation is not a strategy in itself. While these tools can offer a diverse set of data, they fall short in terms of data coverage and consistency with business rules.

Moreover, tools like ChatGPT do not address other crucial aspects of test data management, such as data privacy, infrastructure costs, referential integrity, and the speed of delivering data into the right data targets.

Model-Based Testing (MBT), as demonstrated in this article, offers a more systematic approach to test data generation, ensuring comprehensive coverage of business rules. However, MBT is just one piece of the puzzle. An effective test data strategy requires more than just the generation of test data.

This is where comprehensive test data management tools, like those provided by Curiosity, come into play. We offer an enterprise-level solution that addresses all aspects of test data management, giving you the control you need to ensure high-quality testing:


Our suite of tools includes capabilities for data masking, ensuring that sensitive information is protected while still preserving the usability of the data for testing. We offer data subsetting capabilities, allowing you to create smaller, more manageable datasets that are still representative of your larger database.

Our synthetic data generation tool ensures that you have access to high-quality test data that adheres to your business rules. In addition, we provide data profiling capabilities, which help you understand the characteristics and quality of your data. We also provide a data dictionary, ensuring that there's a clear understanding of what each data element represents.

In conclusion, while AI models like ChatGPT can offer diverse data generation, they cannot replace a comprehensive test data strategy that ensures the coverage of business rules and addresses all aspects of test data management.

At Curiosity, we're committed to providing tools that not only meet these needs but also facilitate more efficient and effective testing, driving quality across your software development lifecycle. To learn more, book a meeting with one of our test data experts:

Speak with an expert
