Key takeaways:
- The FRC has issued a second piece of guidance on AI adoption for auditors – this time on the risks of adoption.
- It breaks down the risks into three categories.
- Auditors should start with the illustrative examples, as the guidance takes time to digest.
In a world first for audit regulators, the Financial Reporting Council (FRC) has published specific guidance on generative artificial intelligence (Gen AI) and agentic AI.
Issued on 30 March, Generative and Agentic AI Guidance: Risks, Mitigations and Illustrative Examples is a follow-up to the regulator’s AI in Audit publication of June 2025. While that paper provided a foundational overview of how auditors can implement AI, the new guidance takes a more detailed look at audit risk and AI. It is primarily aimed at system managers at audit firms.
According to an FRC statement, the guidance serves a dual purpose: advising firms on how to mitigate audit quality risks that could stem from Gen AI and agentic AI, while supporting them to capture the tools’ benefits – including improvements to the quality and efficiency of audit work.
That’s the FRC’s perspective, but does it hold up? Experts share their thoughts.
A good place to start for audit firms
“The guidance examines – quite helpfully – three categories of risk involved with using these types of AI in audit,” says Daniel Clark, Chair of ICAEW’s Technology Faculty Board. Those risk categories, he explains, are:
- AI tools producing outputs that are either deficient or incorrect;
- outputs being correct, but somehow misinterpreted or misused; and
- tools being used in ways that fail to comply with either the firm’s audit methodology or relevant regulations.
“The guidance provides some detail around each of those risks and then sets out the sorts of mitigations that firms may want to put in place around them,” Clark says. “That’s a useful way to frame the guidance, because it gives firms a place to start.”
Misinterpretation or misuse will take the most work for audit firms
For Clark, the second risk category is particularly tricky. “The FRC stresses that firms must address misinterpretation or misuse through staff guidance and training,” he says. “That has layers to it. First, there are the practicalities of how to use the software. But crucially, staff also need training on how to exercise appropriate levels of professional judgement and scepticism over AI outputs.”
On that theme, one of the guide’s illustrative examples looks at how an auditor might use AI to check contracts and ensure that revenue is recognised correctly. “In that case, the AI has provided useful assistance,” Clark says. “But as an auditor, you can’t then assume that your contract checking is done. You still need to apply your judgement and take a view.”
As such, Clark welcomes the guide’s reference to the psychological term ‘automation bias’: the human tendency to believe that machine-made outputs are correct. However, that leads to his first caveat. “I was disappointed to find this material buried in just a single paragraph, on page 35,” he notes. “Automation bias poses risks around what will happen to professional judgement in the long term.”
AI tools are very confident in how they present their outputs – even when they’re wrong, he explains. That means auditors must be equally confident in challenging them. He says: “Compared to the wealth of detail on process and compliance, the content on this topic seems tiny.”
Clark would also have liked to see some ideas on how firms should work with vendors to mitigate risk. “Most AI tools currently being used in audit are not standalone, but features added to existing products,” he points out. “That raises lots of issues around which questions to ask vendors to prevent things going wrong.”
Smaller firms may have specific considerations
ICAEW’s Head of Data Analytics and Tech Ian Pay shares some of those views. “There’s a lot in the guidance that’s really useful,” he says. “It breaks down the risks and mitigations very clearly, and the illustrative examples are great.”
However, considerations for smaller businesses could be a bit more prominent, according to Pay. He says: “The guidance talks a lot about firms having teams of technology and methodology specialists. However, smaller firms aren’t going to have those.
“They may have identified responsible individuals in house, but they will also be leaning heavily on third-party technology and methodology providers. That’s where the guidance doesn’t go into enough detail.”
Use the illustrative examples
Konrad Bukowski-Kruszyna, Audit Data Analytics Director at RSM UK, was part of the FRC Technology Working Group that fed into the guidance. Based on anecdotal feedback he has picked up, there is a healthy appetite among professionals for the guidance, although some have also found it quite dense.
Even though they are not the intended audience, he has advised frontline auditors who were considering reading the guidance to focus on the two illustrative examples, as they couch the risks within practical scenarios.
He says: “What we’re getting at is that you’ll never be able to 100% guarantee that any tool you build will give you the exact output you need 100% of the time, because of the sheer number of permutations of potential inputs.”
When people analyse data, they tend to apply a traditional, deterministic mindset – ‘I want to get the right answer for this context, and I’m really concerned about accuracy and repeatability’. But, as Bukowski-Kruszyna explains, that’s not how Gen AI works. He says: “It’s probabilistic by nature. So, it can ingest information, but if you want that data analysed, you’ll need to build a more rules-based tool alongside your AI.”
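To make that distinction concrete, the idea of a rules-based tool sitting alongside a probabilistic AI might look something like the sketch below: deterministic checks applied to an AI extraction before anyone relies on it. This is purely illustrative – the field names, confidence threshold and rules are assumptions for the example, not anything specified in the FRC guidance.

```python
# Illustrative sketch (not from the FRC guidance): a deterministic,
# rules-based validation layer applied to the output of a probabilistic
# AI extraction step. All field names and thresholds are assumed.

def validate_extraction(extraction: dict) -> list:
    """Return a list of rule violations; an empty list means all checks pass."""
    issues = []
    # Rule 1: low-confidence extractions are routed to manual review
    # rather than accepted, whatever the AI's output says.
    if extraction.get("confidence", 0.0) < 0.9:
        issues.append("low confidence: route to manual review")
    # Rule 2: recognised revenue can never exceed the total contract value.
    if extraction.get("recognised_revenue", 0) > extraction.get("contract_value", 0):
        issues.append("recognised revenue exceeds contract value")
    # Rule 3: every figure must cite the clause it was drawn from, so an
    # auditor can verify it against the source contract.
    if not extraction.get("source_clause"):
        issues.append("no source clause cited: cannot be verified")
    return issues

# A record that passes the deterministic checks.
ok = {"contract_value": 120_000, "recognised_revenue": 40_000,
      "confidence": 0.95, "source_clause": "4.2"}
# A record the rules catch, even though the AI sounded confident.
bad = {"contract_value": 120_000, "recognised_revenue": 150_000,
       "confidence": 0.97, "source_clause": ""}

print(validate_extraction(ok))   # []
print(validate_extraction(bad))
```

The point of the sketch is the division of labour: the AI handles the unbounded, probabilistic task of reading contracts, while repeatability and accuracy checks live in ordinary, auditable code.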
Encouraging audit and tech teams to collaborate
For Bukowski-Kruszyna, the guide’s aim of supporting system managers will help to foster discussion and engagement between firms’ technology specialists and frontline auditors. He says: “You need to be conscious of how people are going to use your end product.
“Audit is inherently risk averse, and people may be reluctant to change methods that have worked for them in the long term. So, as a system manager or designer, you need to secure that buy-in.”
Looking ahead, Bukowski-Kruszyna says that there is nothing to stop the FRC from issuing further AI guidance on different themes, tailored to different segments of the audit market, as time goes on.
Hear more from Ian Pay
ICAEW's Head of Data Analytics and Tech, Ian Pay, talks about the FRC's guidance in May's Accounting Insights podcast published on 6 May 2026.
Audit and Assurance Conference
This in-person conference will reflect on where the profession stands given the context of audit reform, as well as professional judgement, continuing uncertainty and the impact of AI and technology.