Part of the Quality Horizon 2024
Testing AI, AI-Driven Systems, and ML Pipelines
Hosted by Curiosity, Katalon, OctoPerf, WireMock and XRay
In this talk, we explore the tools used in AI Research to understand and test AIs themselves, as well as systems that integrate AI, and Learning Pipelines - and how we can leverage them!
REGISTER FOR INSTANT ACCESS!
Artificial Intelligence Systems Need Testing!
Artificial Intelligence has become an important tool and topic for accelerating testing and quality efforts. However, as more of the systems and applications we are responsible for integrate AI tools, how do we ensure the quality of the AI infused into them? How do we expand our testing and quality practices to cover AIs and the applications that embed them?
Integrating smart tools we don’t fully control is a challenge. How can we build our applications to be as resilient as possible in the face of it?
The classic problem with AI is that we don’t necessarily have full knowledge of the expected results; often they are simply our best answer, so evaluating them can be challenging. AI systems are also prone to hallucinations and other problems like glitch tokens. Even more urgently, integrating external LLMs brings consistency challenges of its own.
There are things we can do though!
In this talk, we will explore the tools used in AI Research to understand and test AIs themselves, systems that integrate AI, and Learning Pipelines - and how we can leverage them, alongside traditional if niche techniques such as property-based testing.
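To make the property-based testing idea concrete, here is a minimal, hand-rolled sketch using only the Python standard library (dedicated libraries such as Hypothesis automate this far better). The `normalize_whitespace` function and its properties are illustrative assumptions, not from the talk: instead of asserting exact expected outputs, we generate random inputs and check invariants that must hold for every input.

```python
import random
import string

def normalize_whitespace(text: str) -> str:
    """Example function under test: collapse runs of whitespace
    into single spaces and trim both ends."""
    return " ".join(text.split())

def random_text(rng: random.Random, max_len: int = 50) -> str:
    """Generate a random string mixing letters and whitespace."""
    alphabet = string.ascii_letters + " \t\n"
    return "".join(rng.choice(alphabet) for _ in range(rng.randrange(max_len)))

def check_properties(trials: int = 500, seed: int = 0) -> int:
    """Check invariants over many random inputs instead of fixed cases."""
    rng = random.Random(seed)
    for _ in range(trials):
        text = random_text(rng)
        once = normalize_whitespace(text)
        # Property 1: idempotence - normalizing twice changes nothing.
        assert normalize_whitespace(once) == once, repr(text)
        # Property 2: the result never contains consecutive spaces.
        assert "  " not in once, repr(text)
    return trials

if __name__ == "__main__":
    print(f"{check_properties()} random cases passed")
```

The same pattern applies to AI-infused systems: even when we cannot predict the exact output, we can often state properties (format, length, absence of forbidden content) that every output must satisfy.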
Fuzzing, adversarial testing, GANs, simulated data and statistical tests are all techniques we will consider. We will also discuss how to maximize consistency when we ultimately don’t control the quality and availability of the LLMs we depend on.
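One simple statistical approach to the consistency problem is to call the nondeterministic component repeatedly and measure how often its answers agree. The sketch below is an assumption for illustration: `fake_llm_classify` stands in for a real external LLM call, and the 80% agreement threshold is an arbitrary example policy, not a recommendation from the talk.

```python
import random
from collections import Counter

def fake_llm_classify(text: str, rng: random.Random) -> str:
    """Stand-in for an external LLM call we don't control:
    returns 'positive' most of the time, with occasional flips."""
    return "positive" if rng.random() < 0.9 else "negative"

def agreement_rate(prompt: str, n: int = 100, seed: int = 42):
    """Call the model n times and report the modal answer and
    the fraction of runs that produced it."""
    rng = random.Random(seed)
    answers = Counter(fake_llm_classify(prompt, rng) for _ in range(n))
    top, count = answers.most_common(1)[0]
    return top, count / n

if __name__ == "__main__":
    label, rate = agreement_rate("Great product, would buy again.")
    # Example policy: flag the integration as unstable if the
    # modal answer wins fewer than 80% of runs.
    assert rate >= 0.8, f"inconsistent output: {label} only {rate:.0%}"
    print(f"modal answer {label!r} with {rate:.0%} agreement")
```

A check like this can run in CI against a recorded prompt set, turning "the model feels flaky" into a measurable, thresholded signal.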
The way we build applications is changing; it’s time to be ready to ensure their quality, too!
Meet Curiosity's Speaker
Ben Johnson-Ward, Lead Solutions Engineer at Curiosity, has spent the past 12 years pioneering testing tools and techniques for global banks, retailers, insurance companies, telcos and beyond. He has occupied many of the roles associated with “quality”, including developer, product owner, product manager, automation engineer and tester. Ben has often gravitated towards model-based testing and test data. He has worked as a product manager and consultant for tools used to create and optimize tests across many different technologies and projects. Ben has focused on the use of Generative AI for testing, serving as a product manager and services engineer for multiple tools. He has explored the fringe possibilities and disruptive capabilities of AI, alongside techniques which are emerging as enterprise-ready.