What is AI assurance in practice?

Author: ICAEW Insights

Published: 02 Jul 2025

At ICAEW’s first-ever AI Assurance Conference, leading authorities on the nascent discipline shed much-needed light on its foundations and fundamentals.

In the assessment of Professor Lukasz Szpruch, there is currently no fixed definition of artificial intelligence (AI) assurance. However, there is a useful starting point for thinking about its purpose and focus.

Broadly, he describes AI assurance as an evidence-based process of evaluating AI against declared objectives and legal, technical and ethical requirements. The aim is to build justifiable confidence in an AI tool’s trustworthiness, safety and responsible operation throughout its lifecycle.

Szpruch, the Programme Director for Finance and Economics at the Alan Turing Institute, was speaking at ICAEW’s first-ever AI Assurance Conference in an early panel discussion on a critical question: What is AI assurance?

Chaired by ICAEW Head of Tech Policy Esther Mallowah, the talk gave Szpruch and three other experts an opportunity to delve deeper into the fundamentals of this emerging discipline and how accountants and auditors will need to engage with them.

Context is key

Mallowah pointed out that in its dictionary definition, assurance has one meaning linked to promise, and another to confidence. Similarly, Mark Cankett, Deloitte's Partner for Algorithm and AI Assurance, focused on how the word relates to comfort. “Across the AI ecosystem, there will be a variety of stakeholders who’ll have need for that comfort,” he said. “That includes those who develop AI, those who deploy AI, those who provide datasets for AI systems and those who are subject to the outputs of AI.”

Compared with traditional assurance topics, AI is much less settled, Cankett noted: startups are creating AI solutions and building on top of foundation models, while companies sit on large, private datasets. Innovators are also experimenting with AI to boost productivity and explore commercial opportunities. “In finance, there’s been over 200 years of developing an architecture for delivering consistent, compatible approaches to assurance. As such, there’s far more clarity around stakeholder needs.”

At a practical level, Cankett believes that AI assurance will support professionals’ efforts to answer a series of critical questions. For example, is the system at hand operating as intended, in line with the principles of its design? Is it making use of datasets that are fully representative of the AI’s target stakeholder population? Are the controls that sit around the system appropriately designed to mitigate risks associated with the AI in certain contexts? And is the technical documentation created to support the system’s scalability and performance sufficient, accurate and auditable?

“I think of AI assurance as a flexible tool that can address a variety of concerns or outcomes,” said Cankett.

For Szpruch, another keyword looming over the adoption and use of AI tools is auditability. “If a vendor offers you an AI system and says it works correctly and is fair and unbiased, you must have reasons to trust that,” he said. 

Importantly, Szpruch stressed, context is key. “You cannot evaluate your system without having absolute clarity about the exact use case in which its algorithm will be deployed,” he said. “That informs what you test and how you test it. If you look at OpenAI’s system cards, the performance evaluations they recommend are very generic. Useful first steps, yes – but you must be crystal clear on specifics. You must be able to evaluate multiple metrics in your environment to assess the benefits versus risks of using the system for its chosen purpose.”
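Szpruch's point about context-specific evaluation can be illustrated with a minimal sketch. The metrics, data and thresholds below are entirely hypothetical assumptions for illustration: the idea is simply that a deployer measures several metrics (here, accuracy and a demographic-parity gap) against acceptance criteria chosen for their own use case, rather than relying on a vendor's generic benchmarks.

```python
# Hypothetical sketch: evaluating one model against multiple use-case-specific
# metrics, rather than generic vendor benchmarks. All data, metrics and
# thresholds here are illustrative assumptions, not a real assurance standard.

def accuracy(preds, labels):
    """Fraction of predictions that match the true labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def selection_rate(preds, groups, group):
    """Rate of positive outcomes (1 = approve) within one group."""
    outcomes = [p for p, g in zip(preds, groups) if g == group]
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(preds, groups):
    """Absolute gap in positive-outcome rates between groups A and B."""
    return abs(selection_rate(preds, groups, "A")
               - selection_rate(preds, groups, "B"))

# Toy predictions, true labels and group membership for eight cases.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

report = {
    "accuracy": accuracy(preds, labels),        # 6 of 8 correct
    "parity_gap": demographic_parity_gap(preds, groups),
}

# Deployment-specific policy: what counts as acceptable depends on the
# use case, which is exactly why generic evaluations are only a first step.
acceptable = report["accuracy"] >= 0.7 and report["parity_gap"] <= 0.25
```

In this toy run the model is accurate enough but approves group A far more often than group B, so it fails the deployment's own fairness threshold despite looking fine on a headline accuracy number.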

The British Standards Institution (BSI) is one body keeping a close eye on this nascent field. Tim McGarr, AI Market Development Lead in BSI’s Regulatory Services division, pointed out that instead of assurance, the body typically uses the phrase ‘conformity assessment’.

“That breaks down into two main categories,” he explained. “First, there’s testing, which covers products – for example, medical devices – and second, certification, which covers the organisation. On the latter point, most notably for this audience, we’re already certifying organisations against ISO 42001: a management system standard that governs an entity’s entire approach to AI. It’s the most widely used AI standard and takes a similar approach to other management system standards, such as ISO 27001 for cyber security.”

McGarr added that another activity relevant to understanding assurance is accreditation. “For us, that means a system for checking the quality of testing and certification provided by bodies such as BSI. In other words, assessors such as the UK Accreditation Service check that organisations like us are doing the right thing.”

Diagnostic tool

As Co-founder and CEO of Holistic AI – a market-leading AI governance platform for enterprise – Dr Emre Kazim hatched early forms of AI assurance in his academic career at University College London. Essentially, Kazim and his colleagues sought ways to audit algorithms with the same seriousness with which accountants audit financial statements. Along the way, they explored and fleshed out key concepts such as explainability, robustness, privacy and bias, before arriving at assurance as an umbrella term.

That work fed into a landmark December 2021 paper from the Centre for Data Ethics and Innovation, titled The Roadmap to an Effective AI Assurance Ecosystem. “To us, assurance worked like a medical diagnostic,” said Kazim. “You want to be able to say, ‘There were some risks – but we took remedial actions, and now we’re confident the patient is trustworthy.’”

Kazim explained that Holistic AI breaks down the assurance process into three layers. “Our First Order assessment is highly technical,” he explained. “Second Order is more about controls, documentation and building a taxonomy of responsibility, accountability, reporting and escalation. Then Third Order is where the company says, ‘What’s our risk posture? Do we really want to do this?’ And we think of those layers as interacting with each other.”

In its Second Order assessment, the platform typically evaluates business risks around areas such as compliance and standards, plus potential reputational and financial harms.

It fell to Cankett to provide perhaps the most encouraging message about this complex process: that accountants don’t need to reinvent the wheel to get started.

“As professionals, we’ve developed a view of what assurance is,” he said. “That ties in with a range of global standards, incorporating other considerations such as quality control, ethics, independence and conflicts of interest. Right now, there’s room for growth to meet the wide range of stakeholder needs for AI assurance. But I’d say that a huge amount of what has already been built can be leveraged and developed further, enabling the industry to move forward.”
