Harnessing artificial intelligence (AI) to transform a business all stems from a project management mindset, according to Nina Barakzai, AI ethics and data privacy expert, and qualified accountant. “It’s all about the steps and checks you put in place to help you navigate the journey,” she says. “As a finance person, I like to ask, ‘Where’s the value? Which problem am I trying to solve? How am I going to allocate money to this and be able to gauge that it was spent wisely?’”
Barakzai, who is speaking at ICAEW’s Annual Conference next month on driving AI adoption with integrity, says: “You have to put those value judgments in place. And accountants are quite rigorous when they do that, particularly when they are responsible for highlighting the triple constraint of scope, time and cost.”
Technical debt
At the outset, forming a clear business case is critical, says Peter Beard ACA, Director of GenFinance.AI and a partner in the creation of ICAEW’s GenAI Accelerator. “When you’re adopting generative AI (GenAI), you first need to think about your end goal,” he notes. “Generally, you’ve got three choices: do you want to make something faster, better or cheaper? It’s not always feasible to get the ‘holy trinity’. But applying one or two of those criteria will help you clarify your approach.”
Beard, who will be sharing real-world examples of AI early adopters at the Annual Conference, explains that when identifying use cases, it is worth remembering that team skills in a typical finance function grow at a linear rate, while AI’s capabilities are growing exponentially.
That widening gap creates areas of compounding ‘technical debt’ in the business which, if left unaddressed, will only deepen. Assessing where your most critical technical debts are mounting, and where the costs of inaction could be most severe, will show the business where to apply AI and relevant upskilling at the earliest opportunity.
When it comes to pinning down the right tool for the job, Beard says that the solution will typically come from one of the ‘Big Six’ foundation models: ChatGPT, Perplexity, Claude.ai, Grok, Microsoft Copilot or Google’s Gemini.
In Barakzai’s assessment, every model on the market – whether major or emerging – has its own quirks and foibles. So adopters must be discerning, she says: “The main questions you should ask are: has the model got the right dataset to address your problem? And, looking at how it’s being trained, does it have the right ingest?”
She warns that if the tool doesn’t have the type of data suited to your questions, you will not get consistent, robust, resilient and relevant answers. “It’s just going to surface the most popular responses. There is also the challenge of the money and effort required for a company to keep its IT systems up to date and capable of meeting business needs.”
Learn by doing
Before and during the onboarding process, senior managers must ensure that they are bringing employees with them on the journey. In tandem, they must be open with external stakeholders – for example, customers, suppliers and investors – about the aims of the AI adoption path.
On the first point, Barakzai says: “Coming from a compliance and regulatory background, it’s clear to me that understanding what’s going on and explaining it effectively are essential.”
She notes the key questions that employees will want answers to: what’s the tool going to do? How will it help me, personally? And how will it help us as an organisation? “In some ways, the journey is more important than the destination and people tend to learn by doing. So get staff involved with using the tools collaboratively, and help them see that everyone’s getting there together.”
Barakzai stresses that a collaborative approach will also support external messaging. In particular, it could provide a foundation for some business relationships to reboot and move forward on a whole new set of principles.
“AI provides opportunities for customisation,” she says. “Some of your relationships will date back years and your stakeholders themselves will be iterating to create new products and services. So bring them in, explain what you’re doing and ask, ‘How can the ways in which we’re innovating with AI tools help you to achieve some of your own goals?’”
With investors, Beard says, it is important to be as transparent as possible to secure backing for AI projects. “It’s about expressing how fundamentally and systemically AI changes the game,” he says. “Thanks to its sheer versatility, the scale and scope of this wave of transformation is unlike anything we’ve seen before.”
Higher stakes
Inevitably, AI tools will throw up occasional curveballs or surprising results that stem from unknowns about how they work. Beard explains that many AI models now come with a ‘reasoning mode’, enabling users to see not just an answer, but how the tool arrived at it.
In terms of guaranteeing ethical hygiene around your chosen solution, Barakzai urges users to consider implementation and use in the context of five widely accepted responsible AI principles. In other words, is the tool:
- human focused,
- privacy and security enabled,
- inclusive,
- robust, and
- resilient.
Safeguarding against errors and their knock-on effects, however, requires a comprehensive risk-management approach. For Beard, the three biggest factors that can cause things to go wrong are an improper platform, improper training and improper use. So businesses must be particularly sensitive to those risks. In parallel, he says, they must strike a balance between the triad of effort, impact and risk. That will involve segmenting out low- and high-risk tasks, considering the potential impact of errors in those categories and apportioning risk-management efforts accordingly.
“For example, a low-risk prompt may be: ‘Create a summary of our oldest customers who are behind on their bills.’ That wouldn’t necessarily have to be accurate down to the pound and penny – just correct enough to give you solid data for internal use. Conversely, ‘Produce and submit the business’s VAT computation’ has much higher stakes attached to it, because the cost of error would be severe.”
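To make the effort/impact/risk triage concrete, here is a minimal, purely illustrative sketch in Python. The task names, one-to-five scores and the simple priority formula are assumptions invented for this example rather than anything prescribed by Beard; in practice a finance team would define its own scales, weightings and risk categories.

```python
# Illustrative only: a hypothetical way to rank candidate AI tasks by
# effort, impact and risk. Scores and tasks are invented for demonstration.
from dataclasses import dataclass


@dataclass
class CandidateTask:
    name: str
    effort: int  # 1 (low) to 5 (high): effort to implement and supervise
    impact: int  # 1 (low) to 5 (high): business benefit if it works
    risk: int    # 1 (low) to 5 (high): cost of an error slipping through

    def priority(self) -> float:
        """Higher score = better early candidate: high impact, low effort, low risk."""
        return self.impact / (self.effort + self.risk)


tasks = [
    CandidateTask("Summarise overdue customer accounts for internal review", 1, 3, 1),
    CandidateTask("Draft first-pass management commentary", 2, 4, 2),
    CandidateTask("Produce and submit the VAT computation", 3, 4, 5),
]

# Rank candidates so risk-management effort can be apportioned accordingly.
for task in sorted(tasks, key=lambda t: t.priority(), reverse=True):
    print(f"{task.priority():.2f}  {task.name}")
```

On these made-up numbers, the internal summary ranks well ahead of the VAT submission, mirroring Beard’s point that high-stakes outputs demand far more risk-management effort per unit of benefit.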
So is there a point where the business can say that an AI transformation project is “finished,” as such? “It’s constantly iterative,” Barakzai says. “You need to re-evaluate your solution quite regularly. But you must also take care not to automatically assume that a choice you made a while ago serves as a guideline for today and tomorrow. Businesses have developed agile approaches to address changing regulatory landscapes. So sometimes it may be quicker and more effective to simply start over with a new model. Do you want to recalibrate by getting a better pen, paper and envelope – or by investing in telepathy?”
Real-world AI insights
ICAEW's Annual Conference 2025 includes sessions covering how AI is already being used and how to address the challenges of implementation.