AI governance: what boards need to consider

Author: ICAEW Insights

Published: 23 Mar 2026

Amid the rapid pace of technological advancement, a panel talk reminds ICAEW members that AI tools are not infallible – and require an attentive governance approach from boards

According to Peter Lee, Partner at Simmons & Simmons, artificial intelligence (AI) dramatically increases an organisation’s operational tempo. Speaking at ICAEW’s Corporate Governance conference on 6 March, Lee – head of his firm’s AI Governance Advisory Practice – sketched out a bracing illustration of where that speed is taking us.

“People are supercharging themselves with this technology,” he said, “to the extent that I’m hearing talk of when the first one-person unicorn will emerge. I recently met with a founder who said that a year ago, their business may have needed 50 staff. Now, thanks to vibe coding, they can develop in a day with just four people what once would have taken them two months.”

That’s just one example of AI’s immense scope to revolutionise how organisations and the people within them work. But that vast potential requires good AI governance.

The connection between AI use and company purpose

For Lee, there is a direct link between a company’s purpose and how it uses AI. That makes it particularly important for boards to set the strategic tone. “Whether you’re compliant with the EU AI Act is a matter of law,” he said. “But whether you’re operating in a way that your employees and stakeholders are ethically comfortable with is another matter.”

Even amid the rapid pace of advancement, Lee said, users must bear in mind that AI tools are still prone to bias and hallucinations. Some use cases are high risk. Tools are not always explainable. AI may change the nature of decision making within organisations, especially if they are using agentic AI – where the idea is not to have a human in the loop. And concerningly, AI’s behaviour can change over time. “There’s the concept of drift,” Lee said, “where a tool’s rationale can shift away from how it was first designed, in ways that can accentuate any bias in the underlying Large Language Model.”

In addition, Lee noted, AI could blunt the critical thinking of knowledge workers and increase the available surface area for cyber attacks, through methods such as prompt injection. As such, he said: “Ethical and reputational risks are heightened by the use of AI.”

Board members need to start using AI

Pauline Norstrom, CEO of AI strategy and risk advisory firm Anekanta® AI, has worked extensively to blend different business cultures during M&A projects and integrate systems from multiple suppliers in transformation efforts. In her view, while boards are responsible for a strategic focus on risk, and audit committees for spotting internal red flags, their reaction to AI has largely been: “Isn’t that with IT? What’s that got to do with us?”

“The adaptation of those roles isn’t there yet,” Norstrom noted. She warned that a culture of fear is impeding progress. “I’ve seen companies where the CIO has been instructed to lock everything down,” she said. “So, employees who could benefit substantially by experimenting with widely available AI products aren’t allowed to use them at work. The way to overcome that fear is to gain a better understanding, not only of how the tools work, but of what they can help you achieve and how they can be responsibly adopted.”

Tuomas Syrjänen, Co-founder and Chair of digital transformation specialists Futurice, echoed those points: “Board members should start not merely by asking people’s opinions on AI tools, but by actually using them. In other words, walk the talk. Whenever I ask senior figures, ‘What do you want to do with these tools?’ and only the IT director replies, then I know we’re in trouble. But if the other leaders say, ‘I want it to help me maintain the business in this way,’ that’s a much better situation.”

AI literacy is key

AI literacy training in the workforce was cited at the conference more than once as a foundation for sound governance. Lee pointed out that, from a legal perspective, AI literacy is mandated in the EU AI Act. Norstrom, meanwhile, took the practical view that literacy around datasets would be a major asset for accountants when evaluating AI outputs. “For example,” she said, “it’s fairly well known that OpenAI was trained on Reddit. If you have that level of awareness to know what’s under the hood, you can use your professional scepticism and say, ‘What this model is telling me doesn’t align with my expectations.’ As professionals, you should have a feel for what the outcomes ought to look like.”

Ethical issues: sustainability and EDI

The discussion then moved to key ethical topics that organisations must consider. On the jobs question, Syrjänen said that we are seeing “a redefinition of professional identities.” For instance, if customer service agents are replaced with AI, they should ideally become system builders, continuously improving the software. “That’s a big change,” he stressed.

Turning to sustainability risks and climate impacts, Lee urged organisations to set rules that distinguish tasks requiring the input of deep-reasoning models from those that could be addressed with quick queries on, say, Google’s search engine. “That also ties into training and awareness,” he said. “Most people don’t think about AI’s energy consumption, which can be massive.”

On equality, diversity and inclusion, Norstrom highlighted the importance of bringing ideas from as many backgrounds as possible into board decisions on AI governance. “Right now,” she said, “if you look at board composition, it’s still skewed towards one demographic. That means their data is going to be skewed – and that their insights and ability to spot potential issues will also be skewed.”

Picking up on Lee’s nods to the EU AI Act, ICAEW Head of Tech Policy and panel chair Esther Mallowah pointed out that many standards and pieces of guidance exist to steer boards on key components of AI governance – from AI policies and inventories to best-practice risk management. “For example, if you look at ISO 42001, that’s really helpful, as is the text of the EU AI Act – even if you don’t have to comply with it, it can be a helpful guide.”

Accounting Intelligence

ICAEW has created a suite of resources to support members in building their understanding of AI, including opportunities and challenges.
