
How assurance helps to support AI adoption

Author: ICAEW Insights

Published: 22 Oct 2025

Ongoing evaluation of systems and transparent collaboration between users and vendors can help to create confidence for driving AI uptake, according to an expert panel.

Developing a market for AI assurance is a major focus for the UK’s Department for Science, Innovation and Technology. Its Trusted Third-Party AI Assurance Roadmap sets out a path for developing that market.

The roadmap forms part of the government’s wider plans to encourage greater innovation and adoption in the AI space – and reliable AI assurance could underpin that adoption.

The phrase ‘the better the brakes, the faster you can go’ can be applied to the key role that high-quality assurance can play in stimulating uptake of AI technology. By ensuring that scrupulous checks are in place across the product management lifecycle, assurance can help stakeholders feel more comfortable with – and enthusiastic about – onboarding AI solutions.

The phrase was quoted by Giles Pavey, Unilever Global Director – Data Science, when he outlined at ICAEW’s AI Assurance Conference how his company vets the quality of AI in a multinational context. While not directly analogous to the type of AI assurance work that auditors would undertake, Unilever’s efforts nonetheless chime with basic features of the challenges that auditors are facing.

Whole-system approach

Pavey sketched out the rapid pace of Unilever’s AI adoption. When the company began its monitoring and quality assurance work around the technology two years ago, it had around 50 AI systems. Now, it has hundreds, and anticipates that in the next year or two, every piece of its IT infrastructure will have AI in it somewhere.

“In the same way that we think about cyber across our IT estate, we need to think about assurance across our AI estate,” he said.

Pavey explained that Unilever takes a whole-system approach to AI assurance. To that end, it evaluates its tools on three risk levels. First comes reputational risk. For example, if Unilever has an AI tool that recommends a skin-care regime to a customer, the outcome must enhance, not harm, the company’s reputation. Then it turns to safety risk. If an AI tool is, say, monitoring employees’ access to a factory, is it effectively protecting them?

Thirdly, it looks at material financial risk. “There’s an AI system that dictates what we’ll make in our factories, and tomorrow, we’ll make 200m things,” Pavey said. “If they’re 200m wrong things, that would have a material financial impact.”

For an impartial view on the efficacy and fairness of its systems, Unilever also works with Holistic AI. Based on its assessments, the business communicates the risk status of its tools via a simple traffic-light ranking. Pavey noted: “Any tool that has potential to trigger any of our three risk areas will be classified as red, until our AI assurance process, whether through mitigation or further analysis, can demonstrate that it’s amber or green.”

While that ranking plays a major role in generating confidence, Pavey added: “It’s important to understand that, in most cases, you can’t mitigate away all of the risks associated with AI. It’s a constant business risk. If you implement a solution for, say, recommending skin cream, you must have strong layers of responsibilities for managing that tool’s risks.”
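Purely as an illustration of the red-by-default logic Pavey describes – the statuses, names and function below are hypothetical, not Unilever’s actual tooling – such a rule might be sketched as follows:

    from enum import Enum
    from typing import Optional

    class RiskStatus(Enum):
        RED = "red"
        AMBER = "amber"
        GREEN = "green"

    def classify_tool(reputational: bool, safety: bool, financial: bool,
                      assurance_outcome: Optional[RiskStatus] = None) -> RiskStatus:
        """Red by default: any potential exposure in the three risk areas
        keeps the tool red until an assurance review downgrades it."""
        if reputational or safety or financial:
            return assurance_outcome if assurance_outcome is not None else RiskStatus.RED
        return RiskStatus.GREEN

    # A factory-planning tool with material financial exposure stays red until
    # mitigation or further analysis demonstrates it is amber or green.
    print(classify_tool(False, False, True))                    # RiskStatus.RED
    print(classify_tool(False, False, True, RiskStatus.AMBER))  # RiskStatus.AMBER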

Full transparency

In addition to Pavey’s case study, two senior professionals explained how their work has taken them deeper into the AI world. They considered where in the AI value chain the ‘brakes’ – the responsibility for asking critical questions – should sit.

Both suggested that the best advertisements for AI adoption are thorough monitoring, plus effective upstream and downstream project management between end users and vendors.

EY Partner Dr Frank De Jonghe, leader of the firm’s Trusted AI initiative, urged end users to evaluate the effect that their chosen AI tool is having on key business decisions. “In terms of outputs, what you should care about most are false positives and negatives,” he said. “So, collecting data on those is a point we can all start from.”
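As a minimal sketch of that starting point – the record format below is an assumption for illustration, not a prescribed schema – collecting data on false positives and negatives means comparing each logged AI decision with the outcome the business later confirms:

    # Each record pairs the tool's decision with the outcome later confirmed
    # by the business. True/False here stand for "flagged" / "not flagged".
    decisions = [
        {"predicted": True,  "actual": False},   # false positive
        {"predicted": False, "actual": True},    # false negative
        {"predicted": True,  "actual": True},    # true positive
        {"predicted": False, "actual": False},   # true negative
    ]

    false_positives = sum(1 for d in decisions if d["predicted"] and not d["actual"])
    false_negatives = sum(1 for d in decisions if not d["predicted"] and d["actual"])

    # Report the errors as a share of all logged decisions.
    total = len(decisions)
    print(f"False positives: {false_positives} ({false_positives / total:.0%} of decisions)")
    print(f"False negatives: {false_negatives} ({false_negatives / total:.0%} of decisions)")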

Following that, he noted, users can ask such questions as: is our use case clearly defined? How well is the tool performing against it? And how effective are its data privacy provisions?

“Next comes third-party risk,” De Jonghe said. “Gone are the days when you had full control. So, knowing the handover points in your value chain – who takes care of which risks, and at what stage – is essential. When we talk about audit, it’s about understanding, at the end-user level, your responsibility, and what sorts of guarantees you’ve received from your vendor upstream. Have those guarantees been articulated clearly enough so you can build your downstream case and understand those points of accountability?”

Rachel Kirkham, MindBridge Chief Technology Officer, said that AI providers must do more to build trust. “We get our algorithms audited on an annual cycle,” she explained. “We provide AI to a highly regulated market, and we found that some clients in audit didn’t have the experience to understand what our tools were doing.”

In the interests of full transparency, specialist governance platform Holistic AI assesses MindBridge’s algorithms for privacy, explainability, robustness, resilience and bias. “We now go through that process when we’re building new models, too,” Kirkham said. “So, we’re always resetting as we develop new capabilities. Questions we explore include: how is this model going to fit within our system? How are our users going to deploy it? How could it be applied to different use cases? And do we need to get additional assurance over those?”

De Jonghe stressed that technical tools for monitoring risks in AI solutions and establishing relevant guardrails already exist in the underlying systems, and should be considered and implemented at the outset of any development process.

“There are numerous log files on which you can build statistics and audit trails,” he said. “That enables you to bake in assurance by design, so you’re already identifying metrics you’ll need to gather during runtime, and methods for collecting and monitoring that data. From that perspective, monitoring dashboards are vital.”
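A hedged sketch of what ‘assurance by design’ can mean in practice is shown below; the metric names, log format and identifiers are assumptions for illustration, not a specific vendor’s API. Each runtime decision is written as a structured record on which statistics, audit trails and dashboards can later be built:

    import json
    import logging
    import time

    # Structured, append-only audit log: one JSON record per model decision.
    logging.basicConfig(filename="ai_audit.log", level=logging.INFO, format="%(message)s")

    def log_decision(model_id: str, input_ref: str, output: str, confidence: float) -> None:
        """Record one decision for later aggregation into runtime metrics."""
        record = {
            "timestamp": time.time(),
            "model_id": model_id,
            "input_ref": input_ref,   # a reference to the inputs, not the raw data
            "output": output,
            "confidence": confidence,
        }
        logging.info(json.dumps(record))

    # Example runtime call: decision volumes, confidence distributions and
    # drift statistics can all be computed from records like this one.
    log_decision("recommender-v2", "case-001", "routine_B", 0.82)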

Importantly, De Jonghe noted, developers can take advantage of publicly available risk taxonomies. “There’s an MIT database that contains at least 700 risks. So, there’s no need to reinvent the wheel. The tools to put metrics against those risks are there. So, let’s look at our use case, map out the risks and identify those that are highest priority, because they are where we’ll lose or destroy value if we don’t mitigate.”
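Purely to illustrate that mapping-and-prioritising step – the risks and scores below are invented for this example, not taken from the MIT database – a simple risk register can be ranked so that the highest-priority risks are mitigated first:

    # Hypothetical risks mapped from a public taxonomy onto one use case.
    # Scores run from 1 (low) to 5 (high); priority = likelihood * impact.
    risks = [
        {"risk": "Biased recommendations", "likelihood": 3, "impact": 4},
        {"risk": "Privacy leakage",        "likelihood": 2, "impact": 5},
        {"risk": "Hallucinated output",    "likelihood": 4, "impact": 2},
    ]

    for r in sorted(risks, key=lambda x: x["likelihood"] * x["impact"], reverse=True):
        print(f"{r['risk']}: priority {r['likelihood'] * r['impact']}")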
