Chat to Your Requirements: Our Journey Applying Generative AI

In the digital age, large enterprises are plagued by a lack of understanding of their legacy systems and processes. Knowledge becomes isolated in silos, scattered among various teams and subject matter experts. This fragmentation contributes significantly to the growth of technical debt, a silent killer that gradually hinders the organization's agility and productivity.

At Curiosity Software, we have spent the past five years creating structured requirements (through visual models) and connecting into hundreds of DevOps tools. We believe this puts us in an incredibly advantageous position when implementing Generative AI. The models and DevOps artifacts can act as the central hub of access to data flowing through an organisation’s software development landscape.

As part of our mission to combat the prevalent challenges of technical debt and missing knowledge, we embarked on a journey to apply Large Language Models (LLM) to software requirements and business logic. Our goal was to create a knowledge hub for an organisation’s software requirements, which can be queried to uncover knowledge and pay off technical debt.

Want to learn more about how I’ve been applying Generative AI to pay off technical debt? Join me live on July 13th for “AI in Testing: A Panel Discussion”.

Watch Now

Understanding the Terrain of Large Language Models (LLMs)

Before discussing our experiences and insights, let's set the stage by providing a brief overview of Large Language Models. LLMs, like OpenAI's GPT-4, are capable of understanding and generating human language. They learn from a vast corpus of data, encompassing a wide range of topics and language styles.

LLMs are built on a deep learning architecture known as the Transformer, which uses layers of self-attention mechanisms (neural networks) to analyse and understand context within data. During training, they read countless sentences and paragraphs, predicting what comes next in a sentence based on what they have read so far:

[Image: training large language models]
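The next-token objective described above can be illustrated with a toy bigram model, a deliberately minimal sketch that counts word pairs rather than using self-attention:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, the words that follow it in the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most likely next word, mirroring next-token prediction."""
    following = counts.get(word)
    return following.most_common(1)[0][0] if following else None

corpus = ("the student opens an account "
          "the student opens a card "
          "the student verifies age")
model = train_bigram(corpus)
print(predict_next(model, "student"))  # "opens" follows "student" most often
```

A real LLM replaces these raw counts with billions of learned weights, but the prediction task is the same.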

The amazing thing about an LLM is its ability to generate human-like text. This ability emerges when the model is trained on a large corpus of data and has a high number of parameters, or weights (GPT-4 has hundreds of billions):

[Image: neural network, how LLMs work]

This uncanny ability to respond like a human is attributable not only to the intelligence inherent in the algorithm, but also to the discovery that human language may not be as intricate as we first believed.

By exposing a neural network to sufficient instances of human language, an LLM can discern patterns and respond aptly to inquiries. As the size and quality of the data an LLM is trained on increase, its power grows dramatically with it.

Structured Data is Essential to LLM Success

For software organisations today, the ability to leverage LLMs has never seemed closer. There are incredibly compact, specialised models which can run on a local PC, and which rival the performance level of GPT-3.5.

Yet, the most important piece of an LLM is not the available tooling: it’s the data that an LLM has been trained on. This is therefore also the main obstacle to the successful use of LLMs, as organisations today lack structured data in the software domain.

Software requirements are typically stored in an unstructured textual format and, worse still, are often incomplete, ambiguous and missing information. The same is true for test cases, which are often stored as lists of text-based test steps.

Overcoming this structured data issue requires innovation and careful consideration. Techniques like information extraction, natural language understanding, and machine learning can be employed to transform unstructured data into structured data. This process often involves manual human intervention.
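As a toy illustration of such an information-extraction step, the sketch below pulls an age threshold out of free-text requirement prose. The pattern and function name are illustrative, not a real pipeline:

```python
import re

def extract_age_rule(requirement):
    """Pull a numeric age threshold out of free-text requirement prose."""
    match = re.search(r"(\d+)\s+years?\s+(?:or\s+)?(?:above|over|older)",
                      requirement, re.IGNORECASE)
    return int(match.group(1)) if match else None

text = "Applicants must be 18 years or over to open an account."
print(extract_age_rule(text))  # 18
```

Real extraction pipelines combine many such rules with natural language understanding models, which is why human review remains part of the process.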

An alternative approach avoids training an LLM on unstructured textual data altogether. Instead, it creates structured requirements from the get-go, or converts unstructured data into structured requirements as an intermediary step.

This is where modelled requirements can empower the creation of AI-ready requirements for training language models. We can use models to structure and improve existing data, integrating the existing SDLC artifacts with models in a central knowledge hub.

What is Modelling for Software Requirements?

Model-Based Testing uses a model of the feature under consideration to generate test cases. These models are usually represented as visual flowcharts, which clearly define the software requirements by depicting the system's behaviour, functions, or operations.

By using such models, ambiguity can be reduced, making it easier for both developers and testers to understand the requirements. Moreover, modelling facilitates automatic generation of test cases, data, and automation. Any changes in the software requirements can be reflected by altering the models, leading to dynamic and updated test assets.

Flowcharts offer a visual method of presenting complex processes in a simple and understandable manner. They show each step as a box or symbol, with arrows between them showing the flow of users and data through the process. This gives a clear, easy-to-follow representation of the process, highlighting the sequence of operations and the decision-making points:

[Image: a model which shows the results from a prompt]
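A flowchart like the one described above can be captured as a directed graph, with decision points falling out naturally as nodes with more than one labelled outgoing edge. The node names here are illustrative, not Test Modeller's internal format:

```python
# A flowchart as a directed graph: each node maps to labelled outgoing edges.
flowchart = {
    "Start": [("", "Input Details")],
    "Input Details": [("", "Check Age")],
    "Check Age": [("18 or over", "Approve"), ("under 18", "Reject")],
    "Approve": [],
    "Reject": [],
}

def decision_points(graph):
    """A decision point is any node with more than one outgoing edge."""
    return [node for node, edges in graph.items() if len(edges) > 1]

print(decision_points(flowchart))  # ['Check Age']
```

This structure makes the sequence of operations and decision-making points explicit, which is exactly what an LLM needs to reason over a process.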

Typically, an organisation leveraging model-based testing will create a library of reusable models, which can then be connected to build much richer end-to-end scenarios at speed. This componentisation allows teams to move much faster when creating their test assets, and also to react to change in a streamlined way:

[Image: a model containing sub-flows]

Flowcharts are Structured Data for LLMs

Flowcharts are inherently structured. When it comes to training Large Language Models, structured data in the form of flowcharts can produce powerful results.

Moreover, flowcharts are ideal for querying because they are intuitive and simple to interpret. The flow of control and decision-making process are clearly depicted, making it easy for the model to predict the next steps in the process.

The ability to infer relationships between the various steps makes flowcharts particularly valuable for both querying and training of LLMs. Here is an example of using an LLM to query a simple model. The model below depicts a student checking account application process:

[Image: student checking account application process model]

Prompt: How old does a student have to be to get approved for a student checking account?

Response: 18 years or above.

Prompt: How can a student get approved for a student checking account?

Response: To be approved for a student checking account, a student would first need to input their personal details. This would be followed by verifying their age to ensure they are 18 years or older. Once their age is confirmed to be 18 or over, the student must then input proof of enrollment to prove their student status. If all these criteria are met and the application is successful, the student will be approved for a student checking account.
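One plausible way to hand such a model to an LLM is to serialise the flowchart into plain text and include it in the prompt. This is a sketch under assumed node names; Test Modeller's actual representation may differ:

```python
# An illustrative flowchart for the student checking account process.
checking_flow = {
    "Input Personal Details": [("", "Verify Age")],
    "Verify Age": [("18 or over", "Input Proof of Enrollment"),
                   ("under 18", "Reject")],
    "Input Proof of Enrollment": [("valid", "Approve"), ("invalid", "Reject")],
}

def serialise(graph):
    """Flatten the flowchart into plain text an LLM can reason over."""
    lines = []
    for node, edges in graph.items():
        for label, target in edges:
            arrow = f"--[{label}]-->" if label else "-->"
            lines.append(f"{node} {arrow} {target}")
    return "\n".join(lines)

print(serialise(checking_flow))
```

Each line preserves a step and the condition that leads to the next, so the sequence and decision logic survive the flattening.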

Conversing with Software Requirements

Given the proven ability of LLMs to query and reason on flowcharts (models), and the range of connectors Curiosity have built into DevOps tools, we sought to apply LLMs to flowcharts and DevOps artifacts. We sought to combine models with data from across disparate DevOps artifacts, creating a central knowledge hub for LLMs.

We have applied and trained LLMs on an array of software requirements (captured from JIRA), and on models stored in an organisation's Test Modeller workspace. For this project, we have a JIRA project with a series of tasks for a banking application, along with models created for the text-based issues. These models overlay additional structure and complete the requirements, creating the data needed by an LLM.

[Image: Jira tickets generated by Test Modeller]

Here is an example ticket:

[Image: an example ticket]

If you compare the “wall of text” user story above to the visual requirement below, you’ll see how much easier the flowchart makes it to understand the logic of applying for a credit card. This is due to the inherent structure and visual nature of a flowchart.

[Image: a model in Test Modeller]

The collection of flowcharts and Jira tickets synchronised in the Test Modeller workspace provides a knowledge hub for the LLM. When queried, the LLM can therefore leverage and reason on information stored across otherwise-disparate SDLC artifacts, for instance data stored in multiple Jira tickets.
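A sketch of how such a hub might be queried, using naive keyword retrieval to gather context for the LLM. The ticket text and retrieval logic here are hypothetical (real systems typically use embedding-based search), though CB-13 is the ticket queried below:

```python
# A hypothetical mini knowledge hub: Jira tickets and model summaries in one store.
hub = [
    {"source": "jira", "id": "CB-13",
     "text": "As a customer I want to apply for a credit card"},
    {"source": "model", "id": "credit-card-flow",
     "text": "Check credit score and approve if 700 or above"},
]

def retrieve(store, query):
    """Naive keyword retrieval: return artifacts sharing a word with the query."""
    terms = set(query.lower().split())
    return [doc for doc in store if terms & set(doc["text"].lower().split())]

matches = retrieve(hub, "credit card application")
print([doc["id"] for doc in matches])  # ['CB-13', 'credit-card-flow']
```

Because tickets and models sit in one store, a single query pulls context from otherwise-disparate artifacts before the LLM ever sees the question.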

Let’s look at some examples querying this knowledge hub, using an LLM trained on the example banking software requirements and models.

Example 1 – Simple Question

Let’s start with a simple question which we would expect it to perform well on: give me the details of a JIRA ticket, given its identifier.

Prompt: What is JIRA Ticket CB-13?

We’ll see this comes back with a summary of the ticket:

[Image: summary of the ticket]

Example 2 – Implied Reasoning with a model

In this example, we’ll go a little deeper and ask a question which expects the LLM to understand a flowchart and then deduce an answer from it, specifically about the credit card application process.

Prompt: What credit score is required to complete a credit card application?

[Image: the LLM’s response]

The LLM has captured the flowchart for a credit card application process and interpreted the model. It has then used this interpretation to calculate the required credit score of 700 or above.

Example 3 – Implied Reasoning with models and requirements

This prompt requires the LLM to interpret multiple sources of information to answer the question. It looks up a model and also the corresponding requirement.

Prompt: When can a customer apply for a credit card?

[Image: the LLM’s response]

Example 4 – Multi-requirement Reasoning

This prompt requires multiple requirements to be understood and reasoned over to answer a cross-requirement query. We’ll see in the response that three user stories are referenced to answer that the products are only available if an individual has a good credit score.

Prompt: What products can I apply for if I have good credit?

[Image: the LLM’s response]

Demo: Using LLMs to Query Structured Flowchart Data

Watch me synchronise information from Jira user stories into a central knowledge hub and run the queries used when writing this article:

Adopt LLMs for Better Software Delivery

Curiosity Software is leveraging Large Language Models (LLMs) such as OpenAI's GPT-4 to better understand and manage software requirements and business logic, with a particular focus on combating technical debt.

Given that LLMs thrive on structured data, model-based testing is the perfect tool for completing and removing ambiguity in unstructured data. The models provide a source of structured business flows, using visual flowcharts to represent software requirements, which in turn provide clarity. At the same time, we can synchronise information from DevOps tools and artifacts in a central knowledge hub.

This approach also enables the automatic generation of test cases, data, and automation. Using these methods, Curiosity Software is actively working on training LLMs on a broad spectrum of software requirements captured from various DevOps tools, which are modelled in an organization's Test Modeller workspace. This creates a co-pilot and dashboards which provide explanations of the whole SDLC when queried, while informing decisions around risk, releases, test coverage, compliance and more:

[Image: overview of how the LLM works within Test Modeller]

We can even generate models using Generative AI, as described in my last article. This closes the feedback loop. A human can work with a Generative AI to create and iteratively improve models based on data in flowcharts and broader SDLC artifacts. The flows in turn provide accurate specifications for developers, while generating tests to verify the code they create.

The resultant data from this AI-assisted software design, testing and development is fed into our central knowledge hub. This updates the LLM, informs future iterations, and avoids technical debt.

This application of AI to software requirements can help improve the efficiency and effectiveness of software development processes, act as a knowledge hub for an organisation’s business process, and finally combat technical debt.

Want to learn more about how I’m using LLMs and Generative AI to accelerate software delivery? Join me live on July 13th for “AI in Testing: A Panel Discussion”.

Watch Now
