Paul Aplin explores the ways in which artificial intelligence (AI) is having an impact on tax work and explains the difference between AI and generative AI.
Last November, I reported on discussions about developments in tax technology at meetings hosted by CFE Tax Advisers Europe in Helsinki. CFE represents more than 200,000 tax advisers across 33 national organisations from 26 European countries. It provides a forum for the exchange of information about tax laws, administration and practice, and maintains relationships with tax authorities and key institutions at national and international levels.
ICAEW’s Tax Faculty has been a member for over 20 years. The Chartered Institute of Taxation (CIOT) is the other UK member. Tax Faculty and CIOT colleagues are heavily involved in the Tax Technology Committee (of which I am a member), the Fiscal Committee and the Professional Affairs Committee (of which fellow ICAEW Tax Faculty Board member Nick Parker is a member). We have played a leading role in CFE’s work on tax technology, regulation and standards in the tax advice market.
Chatting generative AI
At the last physical meeting of the Tax Technology Committee in April 2024, we continued to explore the ways in which AI is having an impact on tax work and discussed the forthcoming CFE report on generative AI in tax. The extent to which colleagues from across the different member states were actively using AI was striking. Here, I would like to share some of the themes that emerged from informal conversations outside the meeting.
One topic was a now-familiar one: the dangers lying in wait for individual taxpayers using ChatGPT to answer their questions about tax. ChatGPT can generate answers that are clear, confident and convincing. Lay users would be unlikely to appreciate the limitations on the data on which the large language model (LLM) has been trained or – particularly in the case of ChatGPT – the date cut-off. In a client context, it is highly likely that some clients will use ChatGPT or other generative AI models to answer their own questions or to challenge their professional adviser. There are very real risk issues here for firms of all sizes: should we warn clients against using generative AI tools, explain the limitations or simply accept that this is now a normal part of professional life and no different to the situation where a client gains their ‘understanding’ from a newspaper article?
Perhaps the clearest example of a lay person encountering these dangers is the recent tax tribunal case of F Harber v HMRC [2023] UKFTT 1007 (TC). The taxpayer put forward nine cases (that she said had been provided by a friend in a solicitor’s office) to support her own case but, unfortunately for her, all nine turned out to be fictitious. They were generative AI ‘hallucinations’. Her appeal was dismissed.
Should we warn clients against using generative AI tools or simply accept that this is now a normal part of professional life?
The risk of hallucinations can be reduced by limiting the material to which the generative AI model is able to refer when formulating its answer to a question. It can be reduced further by specifying a format for the answer or by specifying who the answer is designed for. This process of refining the instructions given to the model is known as prompt engineering. The practical effect is to increase the quality of the answer significantly.
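To make that concrete, the short Python sketch below shows the kind of constrained prompt a firm might construct: it restricts the model to vetted source material, specifies the output format and names the intended audience. It is a minimal illustration only; the wording, the placeholder source material and the example question are all invented for this article rather than a recommended template.

```python
# A minimal sketch of the prompt engineering ideas described above: restrict the
# model to vetted source material, specify the output format and name the
# intended audience. The wording is illustrative, not a recommended template.

SOURCE_EXTRACTS = """
[Only the vetted guidance or legislation extracts the model may rely on,
e.g. the relevant HMRC manual pages, pasted or retrieved here.]
"""

def build_prompt(question: str) -> str:
    """Assemble a constrained prompt for a generative AI service."""
    return (
        "Answer the question using ONLY the source material below. "
        "If the material does not cover the point, say so rather than guessing.\n\n"
        f"Source material:\n{SOURCE_EXTRACTS}\n"
        f"Question: {question}\n\n"
        "Format: a short, plain-English note suitable for a lay client, "
        "ending with a list of any assumptions made."
    )

if __name__ == "__main__":
    # In practice this string would be sent to the firm's approved AI service;
    # here we simply print it to show the structure of the prompt.
    print(build_prompt("Can my client claim relief for home office costs?"))
```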
The principle of limiting the data that the model can access is also important in the context of client confidentiality. A common approach to in-house use of generative AI is to ensure that no client-specific information is retained within the model. While this protects confidentiality, it does limit the effectiveness of the model (which is, in part, dependent upon the amount of information it has been exposed to and has learned from).
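One practical precaution, alongside choosing tools that do not retain prompts, is to strip client identifiers from anything sent to an external model. The sketch below is illustrative only: the patterns are simplified examples of the kind of redaction a firm might apply, not a complete or reliable list.

```python
import re

# Illustrative only: one way of keeping client-specific identifiers out of the
# text sent to an external generative AI service. The patterns are simplified
# examples, not a complete list of what a firm would need to redact.
REDACTIONS = [
    (re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"), "[NI NUMBER]"),    # simplified UK NI number
    (re.compile(r"\b\d{10}\b"), "[UTR]"),                      # 10-digit unique taxpayer reference
    (re.compile(r"\b\d{3} ?\d{4} ?\d{2}\b"), "[VAT NUMBER]"),  # simplified VAT registration number
]

def redact(text: str, client_names: list[str]) -> str:
    """Replace client names and common tax identifiers with placeholders."""
    for name in client_names:
        text = text.replace(name, "[CLIENT]")
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(redact(
    "Acme Widgets Ltd (UTR 1234567890, NI number QQ123456C) sold the premises in March.",
    client_names=["Acme Widgets Ltd"],
))
```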
The consensus that emerged over many conversations was that generative AI tools, used with an appreciation of their inherent risks, an understanding of the data they were trained on and some prompt engineering, were capable of producing highly competent first-draft reports for review by someone with the requisite technical knowledge. For some, the time savings have made the use of generative AI a routine part of their practice operating model. As one colleague commented: “It produces draft reports and letters as good as those produced by a junior member of staff in a fraction of the time.”
The topic looks set to remain at the heart of the committee’s agenda going forward.
Rewind: AI v generative AI
AI has been edging into the world of tax for some time. Generative AI exploded onto the scene in November 2022. Both will continue to change the way we ‘do’ tax.
I’m sometimes asked about the difference between AI and generative AI. Traditional AI can be trained to recognise something based on the data it has been exposed to. If, for example, it has been shown multiple tagged images of particular animals, then on being shown an image of a cat and asked, “Is this a cat?” it is likely to be able to say “yes”. Generative AI can go further: ask it to draw a picture of a cat, to simulate a photo of a cat, or even to write a sonnet about a cat in the style of William Shakespeare and it will do so.
In a finance context, AI has for some time been used to categorise transactions in bookkeeping software, with algorithms predicting the general ledger category to which a figure should be analysed. Accuracy can be variable, but such predictive technology has become commonplace. Generative AI offers further potential that many software developers have been quick to see: for example, describing in natural language what something signifies or what its tax consequences might be. That is a step-change, and one whose significance we need to recognise and understand.
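To show the predictive side of that distinction in code, the toy Python sketch below trains a simple model on previously coded transactions and suggests a general ledger category for a new bank line. The categories and descriptions are invented for the example, and real bookkeeping software uses far richer models and data; the generative step described above would sit on top, turning the suggested posting and its context into a natural-language explanation.

```python
# A toy illustration of the predictive categorisation described above: a model
# trained on previously coded transactions suggests a general ledger category
# for a new bank line. The categories and descriptions are invented, and real
# bookkeeping software uses far richer models and data than this.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

history = [
    ("Shell garage fuel", "Motor expenses"),
    ("Esso fuel card", "Motor expenses"),
    ("Trainline ticket to London", "Travel"),
    ("Dell laptop", "Office equipment"),
    ("BT broadband monthly charge", "Telephone and internet"),
]
descriptions, categories = zip(*history)

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(descriptions, categories)

# Suggest a category for a new transaction; a human still reviews the posting.
print(model.predict(["Texaco fuel purchase"]))  # expected: ['Motor expenses']
```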
Generative AI offers further potential that many software developers have been quick to see
Use cases for generative AI include contract drafting; automated review of legal documents (such as sale and purchase agreements); review of tax-related material across multiple jurisdictions to identify, extract and summarise the points relevant to a particular enterprise; and assistance with transfer pricing exercises. Report writing (as mentioned earlier) is another, and generative AI will inevitably and increasingly encroach into the advisory sphere. Earlier this year, for example, KPMG and Blue J announced the launch of “a new generative AI powered product to answer challenging tax research questions in seconds”. PwC, EY and Deloitte have also embraced generative AI, as have many other firms. Tax authorities, too, have seen its potential in tax administration and in identifying compliance risks.
ICAEW resources
ICAEW has some great resources on AI and generative AI. A good starting point is the Generative AI Guide and there are also AI modules within ICAEW’s online Finance in a Digital World programme. New material is being added regularly.
When ChatGPT burst onto the scene in November 2022, some said that it would be revolutionary in its impact while others pointed to its shortcomings. Almost two years on, generative AI has been tested and employed by businesses and professional firms of all sizes. As conversations at CFE confirmed, there are still many issues to address including risk assessment, use parameters, governance and data security. Notwithstanding those issues, AI and generative AI are now firmly embedded in the world of tax. We need to understand – and then harness – their potential.
Paul Aplin, member of the Tax Faculty Board