
How IA should approach providing assurance over AI

Published: 10 Feb 2026

Artificial Intelligence is no longer a future consideration for internal audit. Across the UK, AI is already embedded in everyday processes, from data analysis and forecasting to customer interaction and decision support.

As adoption accelerates, boards and audit committees are increasingly asking a simple but challenging question: how should internal audit provide assurance over AI?

The instinctive reaction is often to focus on the technology itself: algorithms, models or specialist tools. In practice, most AI risk does not sit in the code. It sits in governance, decision-making, accountability and human oversight. Internal audit’s role is not to become a technical AI function, but to provide confidence that AI is being used responsibly, ethically and in line with organisational objectives.

Start with governance, not technology

The most effective audits of AI begin with governance. Without clarity at this level, technical controls offer limited assurance.

Key questions for internal audit include:

  • Is there a clear AI strategy aligned to organisational goals?
  • Who owns AI risk at executive and board level?
  • Are roles and responsibilities for AI development, use and oversight clearly defined?
  • Has the organisation articulated its risk appetite for AI use?

Where organisations struggle with AI risk, it is rarely because the technology is too advanced. It is usually because ownership, accountability and decision rights have not kept pace with adoption.

Understand how AI is actually being used

A common pitfall is auditing AI in theory rather than in practice. Many organisations have formal AI initiatives, but also widespread informal or “shadow” use of generative tools by staff.

Internal audit should focus on:

  • Where AI is being used today, including unofficial or unsanctioned tools
  • What decisions AI informs or influences
  • Whether AI outputs are advisory or determinative
  • How outputs are reviewed, challenged and approved

This approach helps distinguish low-risk productivity use cases, such as drafting or summarisation, from higher-risk applications that influence pricing, eligibility, prioritisation or compliance decisions.
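The triage described above can be sketched as a simple inventory exercise. The field names, risk rules and example use cases below are illustrative assumptions, not a prescribed methodology; the point is that the distinguishing criteria (determinative use, decision domain, sanctioned status) can be captured systematically.

```python
# Illustrative sketch: triaging an AI use-case inventory by audit attention.
# Field names and risk rules are assumptions for illustration only.
from dataclasses import dataclass

HIGH_IMPACT_DOMAINS = {"pricing", "eligibility", "prioritisation", "compliance"}

@dataclass
class AIUseCase:
    name: str
    domain: str            # business area the output influences
    determinative: bool    # True if outputs drive decisions without human review
    sanctioned: bool       # False for "shadow" use of unofficial tools

def risk_tier(uc: AIUseCase) -> str:
    """Assign a coarse audit-attention tier to a use case."""
    if uc.determinative or uc.domain in HIGH_IMPACT_DOMAINS:
        return "higher-risk"
    if not uc.sanctioned:
        return "review"  # shadow use warrants follow-up even if low impact
    return "low-risk"

inventory = [
    AIUseCase("Email drafting", "communications", False, True),
    AIUseCase("Loan eligibility scoring", "eligibility", True, True),
    AIUseCase("Meeting summariser", "productivity", False, False),
]
for uc in inventory:
    print(f"{uc.name}: {risk_tier(uc)}")
```

Even a rough classification of this kind gives internal audit a defensible basis for where to direct scarce assurance effort.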

Focus on data, judgement and accountability

Across sectors, three risk areas consistently matter most.

Data quality and integrity

AI outputs are only as reliable as the data they are trained on and fed with. Internal audit should consider data sources, completeness, bias, access controls and ongoing maintenance.
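One basic check an auditor might expect to see is evidence that input data completeness is measured before data feeds a model. A minimal sketch, with an assumed record format and no particular library:

```python
# Illustrative sketch of a completeness check over input records.
# Record structure and required fields are assumptions for illustration.
def completeness(records: list[dict], required: list[str]) -> float:
    """Share of records in which every required field is populated."""
    if not records:
        return 0.0
    complete = sum(
        all(r.get(f) not in (None, "") for f in required) for r in records
    )
    return complete / len(records)

records = [
    {"customer_id": "C1", "income": 42000},
    {"customer_id": "C2", "income": None},
]
score = completeness(records, ["customer_id", "income"])
print(f"completeness: {score:.0%}")  # 50%
```

Similar simple metrics can be tracked over time, so that degradation in the data feeding a model is visible before it shows up in the model's outputs.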

Human judgement and oversight

AI should support decision-making, not replace it. Audits should assess whether there is meaningful human review, appropriate challenge and clear escalation routes when outputs appear incorrect or inappropriate.

Accountability

Accountability must sit with people, not systems. Internal audit should be able to trace AI-influenced decisions back to named roles with authority and responsibility.
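Traceability of this kind implies some form of decision log linking each AI-influenced decision to a named accountable role. The schema below is a hypothetical sketch of what such a record might capture, not a reference implementation:

```python
# Illustrative sketch: logging accountability for AI-influenced decisions
# so each one traces back to a named role. Schema is an assumption.
import datetime as dt

decision_log: list[dict] = []

def record_decision(decision_id: str, model: str,
                    accountable_role: str, human_reviewed: bool) -> None:
    """Append one AI-influenced decision to the audit trail."""
    decision_log.append({
        "decision_id": decision_id,
        "model": model,
        "accountable_role": accountable_role,
        "human_reviewed": human_reviewed,
        "timestamp": dt.datetime.now(dt.timezone.utc).isoformat(),
    })

record_decision("D-1001", "credit-scoring-v3", "Head of Lending", True)

# An auditor can then query for gaps, e.g. decisions with no human review:
unreviewed = [d for d in decision_log if not d["human_reviewed"]]
print(len(unreviewed))
```

The value is less in the code than in the discipline: if no such trail exists, the accountability gap described above is usually already present.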

Over 60 percent of organisations using AI report that accountability for AI-driven decisions is not clearly documented. This represents a governance gap rather than a technical one.

Policies matter, but behaviour matters more

Many organisations are developing AI policies and ethical principles. These are important, but insufficient on their own.

Internal audit should test:

  • Whether staff understand acceptable and unacceptable use
  • Whether training is practical, role-specific and kept up to date
  • Whether behaviours align with stated principles

AI policies are a starting point, not a control in themselves. Internal audit adds real value when it tests how AI is actually used day to day, not just how it is described on paper.

A well-written policy that is ignored in practice does not reduce risk.

Aligning AI assurance with the Global Internal Audit Standards

The Global Internal Audit Standards do not require internal audit to provide technical validation of AI models. They do require internal audit to assess whether significant risks are identified, managed and reported appropriately, including technology risk.

Auditing AI therefore fits squarely within existing assurance over:

  • Governance
  • Risk management
  • Internal control
  • Culture and ethics

AI should be treated as another source of risk, albeit a fast-evolving one, rather than as a separate or exceptional discipline.

A proportionate, risk-based approach

Strong AI assurance is proportionate, risk-based and iterative. It evolves as AI use matures and focuses on what matters most: confidence that AI strengthens decision-making rather than undermining it.

Internal audit adds the greatest value when it helps boards understand not just whether AI is being used, but how safely, with what oversight and with what consequences if things go wrong.

As AI becomes embedded in everyday operations, the question for organisations is no longer whether they use AI, but whether they govern it well. Internal audit is uniquely placed to provide that assurance by focusing on ownership, oversight and behaviour, rather than chasing technical complexity.