As Insights reported in March, senior accountants who attended an ICAEW round table on issues linked to artificial intelligence (AI) were concerned that the profession is currently wrestling with a skills gap around the technology. But what are the challenges when we turn to the far more specialist – and relatively nascent – discipline of AI assurance?
Those hurdles, and how to overcome them, were the subjects of a fascinating panel at ICAEW’s first-ever AI Assurance Conference, held at Chartered Accountants’ Hall on 19 May.
Pooling the expertise of senior figures from the computing and governance worlds, as well as the profession itself, the discussion tackled critical questions of what an AI assurance skillset should look like, and where accountants and auditors can find its building blocks.
Three pillars
Addressing the first question of which skills are required, Dr Cari Miller – Founder and Head of AI Governance and Research at the US-based Center for Inclusive Change – said that, broadly, the skillset is rooted in three main pillars of assessment:
- Oversight: A general examination of the processes and policies an organisation has established around its AI tools and services, and how it is implementing them.
- Performance: A contextual and statistical study of the AI system’s robustness and accuracy, which would tend to involve extensive mathematical analysis (a minimal sketch of this kind of check follows below).
- Design: Understanding data flow and traceability, and where gaps may exist in the system – particularly important for assessing agentic AI.
“Those areas guide where your skills will need to develop,” Miller said. “For example, if you’re in the performance arena, you’ll need to be a data scientist – but if you’re more governance based, you should be, say, an industrial-organisational psychologist.”
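To ground the “performance” pillar, the sketch below shows the kind of statistical check it implies: measuring a model’s accuracy on clean inputs, then again on perturbed inputs, as a crude probe of robustness. Everything in it – the synthetic data, the threshold classifier, the noise levels – is a hypothetical stand-in of our own, not anything described by the panel.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Synthetic binary-classification data: one feature whose value depends
# on the true class, so a simple threshold model can do reasonably well.
n = 10_000
labels = rng.integers(0, 2, size=n)               # ground truth: 0 or 1
features = labels + rng.normal(0.0, 0.5, size=n)  # class-dependent signal

def predict(x):
    """Hypothetical stand-in model: flag as class 1 above a threshold."""
    return (x > 0.5).astype(int)

def accuracy(preds, truth):
    return float(np.mean(preds == truth))

# Accuracy on clean inputs.
clean_acc = accuracy(predict(features), labels)

# Accuracy after adding extra noise to the inputs - a crude robustness probe.
perturbed = features + rng.normal(0.0, 0.3, size=n)
robust_acc = accuracy(predict(perturbed), labels)

print(f"clean accuracy:     {clean_acc:.3f}")
print(f"perturbed accuracy: {robust_acc:.3f}")
print(f"degradation:        {clean_acc - robust_acc:.3f}")
```

A real assessment would run checks like these against the production system under review, with perturbations chosen to reflect the system’s use case.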
However, there are two further layers for professionals to consider – the most crucial one being the use case of the technology they are called on to assess. “How we evaluate autonomous vehicles will be very different to how we might examine HR systems or cancer-detection tools,” Miller noted.
The second layer, meanwhile, is foundational versus specialist skills. “In a foundational assessment, you would typically know how to develop a targeted evaluation and understand that you should not have conflicts of interest,” Miller said. “But going a step further, there’s a need for deeper technical insights on different types of AI and how they work.”
Setting the tone
So, that’s the essential framework that will determine the shape of AI assurance skillsets. But from which sources can professionals obtain the relevant knowledge?
Miller drew on her own upskilling journey. “There’s lots of education around, providing foundational information about AI governance,” she said. “For example, there are standards and frameworks such as ISO 42001 and the NIST AI Risk Management Framework. Plus, there are lots of individual experts who speak on AI governance and assurance matters.” Miller is also tracking the thought-leadership work of software providers in the governance, risk and compliance (GRC) field. “For me, GRC is a new space,” she said. “So, it’s about following the platforms on LinkedIn, going to their product demos and listening to how they are positioning and differentiating themselves.”
Acknowledging the disparate nature of the various learning sources, Miller highlighted the value of a proactive approach. “There’s no university course for this,” she said. “It’s all self-driven, self-paced learning.”
Dan Howl – Head of Policy and Public Affairs at BCS, the Chartered Institute for IT – suggested that professional bodies and global skills initiatives have valuable roles to play. “For us, anyone working in IT, especially senior leaders, must be able to demonstrate high levels of professionalism, competence and accountability,” he said. “That must hold true in AI assurance. We look at this through the knowledge that underpins skills and competence – for example, a basic grasp of statistics and Bayesian probability. It’s about ensuring that professionals understand what AI models can and can’t do.”
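Howl’s reference to Bayesian probability is worth making concrete. The base-rate calculation below – a minimal sketch using illustrative figures of our own choosing, not numbers cited by the panel – shows one thing a seemingly accurate model “can’t do”: when the condition it flags is rare, most of its flags are false alarms.

```python
# Illustrative (assumed) figures: a model that is right 95% of the time
# in both directions, applied to a condition with a 1% base rate.
sensitivity = 0.95   # P(model flags | condition present)
specificity = 0.95   # P(model clears | condition absent)
prevalence = 0.01    # P(condition present) - the base rate

# Bayes' theorem: P(present | flagged)
#   = P(flagged | present) * P(present) / P(flagged)
p_flagged = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_present_given_flag = sensitivity * prevalence / p_flagged

print(f"P(flagged at all):      {p_flagged:.4f}")             # ~0.059
print(f"P(condition | flagged): {p_present_given_flag:.4f}")  # ~0.161
```

With these numbers, only around 16% of the model’s flags are genuine – precisely the kind of sanity check that separates understanding what a model can actually do from what its headline accuracy suggests.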
On the global initiatives front, Howl cited the Skills Framework for the Information Age (SFIA – pronounced ‘Sophia’) as particularly useful, as it is not only regularly updated but also industry-led. Indeed, BCS runs a comprehensive skills track based on the framework, called SFIAplus. “It’s important for frameworks such as SFIA to be consistent and international,” Howl said. “In this space, global interoperability is vital.”
In Miller’s assessment, the EU AI Act will set the “highest bar” for professionalism in AI assurance, which will have a strong influence on skills development. In the meantime, though, standards are already setting the tone for credibility. “ISO 42001 is being pervasively adopted,” she said. “So, certifying bodies such as the American National Standards Institute are now able to ensure that someone like me can be certified as a 42001 lead implementer and auditor. That tells organisations that I understand the importance of avoiding conflicts of interest, can develop a targeted evaluation and know which controls to test.”
Hands-on experience
From an accounting industry perspective, EY Data, Technology and Innovation Partner and Assurance Specialist Gareth James explained that his firm has established a scheme called Client Zero to build internal capability in key aspects of AI. Based on a programme of self-disruption, Client Zero enables the firm to experiment with different AI concepts on its own internal systems before rolling them out to clients.
“For someone to assure an AI system, it’s important that, at a minimum, they’ve been hands-on as a system user – but also as someone who’s been involved with building systems, too,” he said. “In Client Zero, that means that people who aren’t necessarily at the coal face of AI development still gain experience of what’s involved in assembling an AI solution, plus an understanding of the governance frameworks around it.”
James noted that in terms of scale, EY coordinates everything from large-scale AI tools rolled out globally to discrete concept solutions hatched by three-person innovation teams. “Whether it’s huge or tiny applications, it’s all about assessing risks in the system you’re working on and ensuring the relevant teams have the correct skills across the required specialisms,” he said. “Our global EY Badges programme offers a path of Bronze-to-Platinum credentials that enable our teams to self-serve and train themselves. Plus, many of our people invest in their own education at institutes and universities.”
Miller was keen to stress that AI assurance will be a “team sport”. She and James agreed that the discipline will ultimately be shaped by a blend of data scientists, statisticians, technology gurus, lawyers, ethicists and experts on key sectors. Even so, Howl suggested that all those groups could professionalise under one roof – a move that may support skills development. “There are questions around how you become accountable to your employer, government, regulators and the public,” he said. “Membership of a special ‘Chartered Institute of AI Assurance’ may not be the only answer – but it could certainly contribute.”
For Miller, given its intense focus on risk, one industry in particular could be “a real game changer” for skills: insurance. “Amid rapid AI adoption, the sector is well placed to gain a greater understanding of gaps in governance and assurance,” she said. “So it could tell organisations: ‘We’re going to put some exclusions in your policy if you don’t do X, Y and Z.’ Indeed, it may demand types of assurance that could push the market in unexpected ways. That would certainly expedite our need for skills development.”
Real-world AI Insights
ICAEW’s Annual Conference 2025 includes sessions covering how AI is already being used and how to address the challenges of implementation.