Hosted by Curiosity, Dragonfly and The BCS SIGiST

AI in Testing: A Panel Discussion

How will AI shape quality engineering? What role will engineers play as testing becomes increasingly “autonomous”? And what measures must organisations take to avoid legislative risks, negative bias, and barriers to diversity and inclusion? Register to find out!



How will AI Shape Testing?

In the two months since Curiosity's last live discussion of AI and quality, AI-augmented testing has evolved by leaps and bounds. There are new tools and approaches, fresh organisational worries, and renewed concerns for diversity and inclusion.

This live panel discussion will help you identify the unprecedented benefits of AI for your software testing, while flagging key risks for software quality, human equality, and legislative compliance.

The panel gathers four testing experts with 80+ years' combined software delivery experience, spanning AI regulation, diversity in AI, model-based testing and generative AI. They will explore how you can use AI to accelerate your testing and pay off technical debt, while implementing AI in a fair, legal, and equitable manner.


Your Questions on AI and Testing Answered


Rich Jordan, a DevOps transformation specialist, will first discuss how you can provide AI with sufficient domain understanding. He will consider how you can uncover and structure enough data to teach an AI organisational context, processes and system rules, avoiding a "garbage in, garbage out" situation when applying AI.

James Walker, a model-based testing pioneer, will then showcase his work in integrating generative AI with model-based testing. He will discuss how AI-augmented models help retain the coverage, observability and governance needed to assure quality when testing with AI.
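For readers new to model-based testing, the core idea can be sketched in a few lines: represent the system under test as a directed flow model, then derive test cases as paths through it. The snippet below is a minimal, hypothetical illustration (not Curiosity's Test Modeller or any specific tool), using a toy acyclic login flow where enumerating every start-to-end path also covers every edge of the model:

```python
# Hypothetical sketch of model-based test generation: a flow model is a
# directed acyclic graph, and each start-to-end path becomes one test case.
def all_paths(model, start):
    """Enumerate every start-to-end path in an acyclic flow model."""
    if not model.get(start):          # no outgoing steps: the path ends here
        return [[start]]
    paths = []
    for nxt in model[start]:
        for tail in all_paths(model, nxt):
            paths.append([start] + tail)
    return paths

# A toy login flow, modelled as adjacency lists of user actions and outcomes.
login_model = {
    "Open login page": ["Enter valid credentials", "Enter invalid credentials"],
    "Enter valid credentials": ["See dashboard"],
    "Enter invalid credentials": ["See error message"],
}

for path in all_paths(login_model, "Open login page"):
    print(" -> ".join(path))
```

Because the model is acyclic and every edge is reachable from the start node, the enumerated paths achieve all-edges coverage, one of the simpler coverage criteria that model-based tools let testers dial up or down.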

Nicola L. Martin, a Head of Quality Engineering and leading thinker in AI diversity, will next consider the risks posed by AI for inclusion. This will cover the key role that testing should play on ethics committees, and how QA must test far earlier to ensure that software respects diversity and inclusion.

Adam Leon Smith, an expert in AI regulation and BCS Chair, will finish by highlighting legislation that you must consider when setting up a Large Language Model (LLM). He will flag regulations that your AI strategy should prepare for today, to ensure that you can reap the future benefits of AI for testing.

The live session will finish with an extended Q&A, so please come with your questions for this panel of AI and testing experts!


Meet The Speakers


Nicola Martin has been involved in the tech industry for just over 20 years. She is passionate about increasing diversity and inclusion in software engineering.

Nicola is a Council and Committee Member for the British Computer Society and WomenTech Global Network. She is also a mentor, speaker and panellist on subjects such as AI and quality, software testing, and diversity in tech.

She was included in Computer Weekly's Most Influential Women in UK Tech list in 2022 and nominated for the UKtech50 longlist of the most influential people in UK tech.


Adam Leon Smith is the CTO of Dragonfly, Chair of the BCS Fellows Technical Advisory Group, and a member of the BCS Special Interest Group in Software Testing. He is the lead author of AI and Software Testing - Building Systems You Can Trust, and is highly active in research around AI quality, testing and risk management. He has led ISO/IEC projects on AI bias, and AI testing/quality.


James Walker holds a PhD in visual analytics, a field combining data visualisation and machine learning. He has given talks worldwide on the application of visual analytics and has published several articles in high-impact journals. He has since applied these approaches to testing, inventing several model-based testing and test data management solutions. He is the co-founder and CTO of Curiosity Software, where he works with a range of organisations to identify and resolve their QA needs.


Rich Jordan is an Enterprise Solutions Architect at Curiosity Software. He has spent the past 20 years in the testing industry, mostly in financial services, leading teams whose test capabilities have won multiple awards in testing and DevOps categories. Rich has been an advocate of test modelling and test data for over a decade, and joined Curiosity in November 2022.


Speak with an expert

Want to learn more about the future role of AI in testing? Speak with a Curiosity expert today.

Schedule a Demo