For any Chartered Accountant, balancing the rapid evolution of new AI capabilities against the need to run a business or deliver client work can feel overwhelming. This article gives practical suggestions for how you can bring the best of people and AI to your analytics approach, informed by hands-on experience managing AI quality in audit.
Ethics
As of 2025, there’s a requirement in the Ethical Code to be technologically competent – to understand any technology you’re using well enough to be confident in the quality of the work you’re doing. Before embarking on using AI in analytics, you should reflect for a moment on how you’re going to gain that competence and confidence in your ability, and on how you would assure someone relying on the outputs that you’re competent to build the analytic which produces them.
That shouldn’t put you off from experimenting. Instead, stay reflective during the process of using AI, challenge your own thinking and confidence, and make sure that, by the time you finish, you’re fully confident you understand every step in your analytic and how it can be used safely.
Regulation
Auditors especially need to be aware of the evolving regulatory context applying to AI. The FRC has released guidance on the use of AI in audits, and on Generative and Agentic AI. This guidance is detailed, includes useful worked examples of how AI should be used in audit analytics, and is worth a read even for those outside audit as an example of best practice. If you work in a European context, be aware of the EU AI Act.
Planning
Good planning for using AI in analytics looks a lot like good planning for any analytics, and that starts with thinking about the purpose. What question(s) are you trying to answer, for whom, with what data, and why? What action do you want people to take as a result of your analytic, and who should take it? Do you have the right data collected in the right way to make that happen? What are the associated risks? And is AI the right way to go about it? A lot of traditional analytical techniques may be just as good or better at answering some of our questions. Generative AI is not deterministic in nature – it may not always give the same output for the same input – which in an audit context should encourage reflection before adoption. But if you can articulate answers to the above questions, then you are already well on the way to framing the problem in a way that AI may be able to support.
Where AI is part of the solution, think about (and write down!) what risks you take by using AI and how you’re confident you’ve mitigated them. For example, in audit we need to be confident that any evidence we rely on in an analytic hasn’t been hallucinated by an AI, so we build our AI analytics with guardrails like retrieval augmented generation (RAG) and sourcing links to help avoid this risk.
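To make the RAG-and-sourcing guardrail concrete, here is a deliberately simplified sketch of the idea: only material retrieved from a known evidence set reaches the model, and every answer carries references back to its sources so a reviewer can verify nothing was hallucinated. The evidence references, data and keyword-overlap scoring are all hypothetical stand-ins; a real system would use a proper vector search and an actual LLM call.

```python
# Illustrative RAG-style guardrail: answers are grounded in retrievable
# sources, each carrying a reference a reviewer can trace back to the file.
# All names, data and the scoring method are hypothetical simplifications.

from dataclasses import dataclass

@dataclass
class Source:
    ref: str   # e.g. an audit evidence reference
    text: str

EVIDENCE = [
    Source("INV-2024-001", "Invoice 1001 total is 5,000 GBP dated 2024-03-01."),
    Source("BANK-2024-Q1", "Bank statement shows a 5,000 GBP receipt on 2024-03-05."),
]

def retrieve(question: str, sources: list[Source], top_k: int = 2) -> list[Source]:
    """Naive keyword-overlap retrieval standing in for a vector search."""
    q_terms = set(question.lower().split())
    scored = sorted(
        sources,
        key=lambda s: len(q_terms & set(s.text.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def answer_with_citations(question: str) -> dict:
    """Only retrieved material reaches the model's context, and each
    source reference is returned alongside the answer."""
    hits = retrieve(question, EVIDENCE)
    context = "\n".join(f"[{s.ref}] {s.text}" for s in hits)
    # In a real analytic, the context plus question would now be sent to
    # an LLM; here we simply return the grounded context and citations.
    return {"context": context, "citations": [s.ref for s in hits]}

result = answer_with_citations("What is the invoice total?")
print(result["citations"])
```

The point of the pattern is not the retrieval mechanics but the audit trail: every claim the analytic makes can be tied back to a specific, reviewable piece of evidence.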
Data
Any analytic is only as good as the data. With AI, you need to consider additional concerns. For example, when you collect data from third parties, do you have permission to re-use it for this kind of AI analysis? If not, you could end up in legal difficulty. Can you actually answer your questions with your data? AI is more likely than traditional analytical techniques to give you a plausible-sounding but wrong answer using inappropriate data, and you need to be alive to the risk that a model will mislead you because it’s trying to please you. Finally, is the AI set up to protect the confidentiality and integrity of the data you’re using? Does all the data stay within your perimeter, or is some fed back to the model or being posted to public versions of tools?
Design and Build
If you’ve done your planning right, this stage should be straightforward. Keep in mind the purpose of your analytic and the ethical and regulatory boundaries you need to stay within, and you’ll be on the right track.
The really exciting bit is how you might use AI to enhance your skills. Generative AI tools are now very effective at building analytics with the right prompting. Try sharing your planning and some example data with a tool like Claude or ChatGPT (anonymised and with any necessary permissions!) and ask it to suggest how to build an analytic, or to build you a demo or example. If your data is properly structured, Copilot in Excel or Power BI can also help you design and build analysis or dashboards. If you’re doing this with one of the more free-form tools, asking it what it’s doing, what decisions it made and why will help you grow your own understanding both of the analytic and of the technologies it’s built with, so you learn as you build. Similarly, you’ll want the tools to be transparent with you, so ask for any code that’s been used to perform the analytic. Typically, AI tools write and execute Python for data analysis, so get them to provide annotated versions of the scripts that explain what is happening at each stage.
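As an illustration of what an annotated script of this kind might look like, here is a hypothetical example in the style an AI assistant could produce: a simple analytic flagging unusually large journal entries. The data, column names and cut-off are illustrative only, not a recommended methodology.

```python
# Hypothetical example of an annotated AI-generated analysis script:
# flagging unusually large journal entries. Data and thresholds are
# illustrative stand-ins only.

import statistics

# Step 1: the input data. In practice this would come from a CSV or
# database extract; a small in-memory sample keeps the sketch runnable.
journal_entries = [
    {"id": "JE-001", "amount": 1200.0},
    {"id": "JE-002", "amount": 980.0},
    {"id": "JE-003", "amount": 15500.0},  # deliberately large
    {"id": "JE-004", "amount": 1100.0},
]

# Step 2: establish what "normal" looks like using the median and the
# median absolute deviation, which are less distorted by the very
# outliers we are trying to find than a mean would be.
amounts = [e["amount"] for e in journal_entries]
median = statistics.median(amounts)
spread = statistics.median(abs(a - median) for a in amounts)

# Step 3: flag entries far from the median (a hypothetical 5x cut-off).
flagged = [e for e in journal_entries if abs(e["amount"] - median) > 5 * spread]

# Step 4: report each flagged item so it can be traced and reviewed.
for entry in flagged:
    print(f"{entry['id']}: {entry['amount']:.2f} is unusually large")
```

Annotation at each step is exactly what you should be asking the tool for: it turns the script from a black box into something you can review, challenge and explain to others.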
Quality Assurance
As auditors, we say, “If it’s not on the file, it didn’t happen,” and the same is true for analytics. Your analytics need documentation.
Best practice is to have someone not involved in the build test your analytic against its expected behaviour and objectives. Independent review against your documentation should help ensure you’ve not missed anything or made mistakes: we all struggle to see where we might have drifted or made errors when we get too close to something. Document your tests and outcomes, so you have something you can provide to assure the users of your analytic that it’s appropriate and high quality.
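One lightweight way to document tests and outcomes is to write each case as an input paired with the behaviour agreed at planning, run them, and record pass or fail against a reference. The sketch below shows the shape of this; the analytic itself is a hypothetical stand-in and the case references are invented.

```python
# Minimal sketch of documented QA tests for an analytic: each case pairs
# an input with its agreed expected behaviour, and outcomes are recorded
# so they can go on the file. The analytic is a hypothetical stand-in.

def flag_large(amount: float, threshold: float = 10000.0) -> bool:
    """Stand-in analytic: flag any amount over a materiality threshold."""
    return amount > threshold

# Cases ideally written by someone not involved in the build, each with
# a reference back to the documented expected behaviour.
test_cases = [
    {"ref": "TC-01", "input": 15000.0, "expected": True},
    {"ref": "TC-02", "input": 9999.0, "expected": False},
    {"ref": "TC-03", "input": 10000.0, "expected": False},  # boundary case
]

results = []
for case in test_cases:
    actual = flag_large(case["input"])
    results.append({**case, "actual": actual, "passed": actual == case["expected"]})

for r in results:
    print(f"{r['ref']}: {'PASS' if r['passed'] else 'FAIL'}")
```

Note the deliberate boundary case: independent testers are much better than builders at probing the edges of an analytic’s agreed behaviour.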
One key aspect of quality assurance is whether an analytic works for the people and purposes it was designed for, so asking these people to test it will be key, ideally from an early stage to make sure you have time to build in feedback.
Monitor and evaluate
Building an analytic and then simply running with it is never a good plan, but with AI it’s even riskier. Model drift (where a model’s behaviour gradually moves away from what it was when you first assured the analytic, for example as the underlying LLM is updated), shifts in the underlying data, and changing user behaviour all mean that AI-driven analytics are more at risk than traditional analytics of losing accuracy, quality or usefulness over time. It’s good practice to build in regular review points: every six months, or whenever you become aware of a change which might have an impact, is a useful standard, though with new model capabilities coming out all the time you may want to review more frequently in some cases. It may also be advisable to re-establish a baseline at the start of each audit cycle, as it can be tricky to spot evolutions in a model’s outputs when the data being provided as input also keeps changing.
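In practice, re-baselining can be as simple as re-running the analytic over a fixed reference dataset at each review point and comparing key output metrics against those recorded when it was first assured. The metrics, values and tolerance below are all hypothetical; what matters is the pattern of comparing current behaviour against a documented baseline.

```python
# Illustrative drift check: the analytic is re-run on a fixed reference
# dataset and its outputs compared to the baseline recorded at assurance.
# Metric names, values and the tolerance are hypothetical.

BASELINE = {"flagged_count": 3, "mean_score": 0.82}
TOLERANCE = 0.05  # hypothetical acceptable relative change

def current_run() -> dict:
    """Stand-in for re-running the analytic on the reference dataset."""
    return {"flagged_count": 3, "mean_score": 0.75}

def check_drift(baseline: dict, current: dict, tol: float) -> list[str]:
    """Return the metrics that have moved beyond tolerance."""
    drifted = []
    for metric, base_value in baseline.items():
        change = abs(current[metric] - base_value) / abs(base_value)
        if change > tol:
            drifted.append(metric)
    return drifted

drifted = check_drift(BASELINE, current_run(), TOLERANCE)
if drifted:
    print("Review needed, drift detected in:", ", ".join(drifted))
else:
    print("Outputs within tolerance of baseline")
```

Keeping the reference dataset fixed is the key design choice: it separates changes in model behaviour from changes in the data, which is exactly the distinction that is hard to see in live use.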
Again, as with any analytic, it’s also worth taking a pause a few months after releasing anything to check in and look at the impact it’s had.
Reflect
Using AI in analytics shouldn’t be scary: if you’re already building analytics, you’re probably already doing many of these things. Good quality in AI analytics isn’t that different to good quality in analytics in general. It’s about understanding what you’re trying to achieve and the tools you’re using to achieve it.