The potential of generative AI is immense, but there are risks that accountants should be aware of when deciding whether to adopt it. Existing approaches to tech strategy and wider governance principles, practices and processes may not be completely suitable for generative AI and may need adapting and strengthening.

Potential risks of generative AI

Issues to consider when it comes to generative AI include:

  • The output of generative AI can be biased:
    As with other AI models, generative AI models are trained on data sets, and when this data contains bias it is reflected in the output of the model. Examples have been seen with text-to-image generators, which generate outputs that amplify gender and racial stereotypes, and chatbots that demonstrate political bias. Examples of bias in general purpose generative AI models (foundation models) also demonstrate that more training data does not necessarily mean less bias: larger volumes can merely reinforce existing biases.
  • The output of generative AI can be inconsistent:
    This happens more frequently than with other types of AI. Given the same input, outputs can vary significantly due to the statistical nature of the models, which poses challenges wherever there is a need to rely on predictable, repeatable behaviour. Appropriate prompt engineering (discussed later) can enforce a level of consistency of response, for instance by imposing a specific output structure; a short illustrative sketch follows this list. However, such engineering could also introduce the risk of manipulation to get a desired response. Ethical considerations such as this are covered in the “Generative AI and ethics” section of the guide.
  • The output of generative AI can be inaccurate:
    Generative AI can help spread misinformation. Part of this is due to the potential of models to “hallucinate”, ie to create false data and confidently rely on it. The current iteration of text-based generative AI models can be easily led, can make up quotes and references, and can even contradict themselves when challenged, meaning they are not always a reliable source of truth. One infamous example was where generative AI was used to prepare for a court case, and it suggested historical cases that were later found to be fictional. In addition, humans can be prone to automation bias, whereby they become overly reliant on the output of technology without questioning it. The risk of hallucinations and inaccurate output can be reduced by practising professional scepticism and questioning the output of generative AI, for example by repeatedly asking the tool the same question. The variability of response is an advantage here: it is rare for models to repeat the same hallucination, so repeating the question has a smoothing effect, allowing users to focus on what is consistent across multiple iterations (a simple automated version of this check is sketched after this list).
  • The output of generative AI may not reflect the real world:
    Generative AI models can also struggle in fast-paced environments where up-to-date knowledge is critical, as the weighting of training data will invariably be towards older and potentially redundant information. This can be seen with GPT-4, whose creators state that it generally lacks knowledge of events that occurred after the vast majority of its training data cuts off (September 2021), and its outputs reflect this. In addition, as generative AI output becomes more common, it is likely to form a bigger part of the training data used by generative AI models, and such systems will progressively base more and more of their output on their own previous outputs rather than on new creative content produced by humans. Where training data is flawed, for example through hallucination or bias, the output can become more restricted, less reliable and less reflective of the real world.
  • Generative AI can be used to create realistic-looking fake images, audio and videos, sometimes known as deepfakes:
    Generative AI can increase the likelihood of fraud, as well as economic and organised crime. It can enable cyber criminals to generate more grammatically correct and believable phishing emails, as well as more convincing audio, visual and video content. This could also create a challenge for accountants when attempting to validate information provided by the business, suppliers or clients, and may soon pose significant threats to know your client (KYC) processes such as identity checks. In July 2023, a deepfake scam video of British financial journalist Martin Lewis gained notoriety for its realism. Accountants should train staff to be aware of this new risk and to remain vigilant in questioning the authenticity of communication, documentation and evidence.
  • Use of public generative AI tools may breach data privacy and security requirements:
    Users of a public model may have little control over how the data they input will be used and how access to it will be controlled. It may be shared with overseas organisations and could be used to train future models. It is therefore important to be careful about what information you provide to a tool such as a public chatbot, and to avoid inputting any confidential or personal information.
  • Use of generative AI tools may breach copyright and intellectual property requirements:
    Generative AI foundation models are trained on large amounts of data, some of which may be subject to intellectual property (IP) protection. New content produced by generative AI could therefore breach IP laws, and this may not always be visible to the user. This risk can be mitigated to a large extent by training models on internally generated and owned data, although this could introduce bias and limit the learning, scope and applicability of the model.
  • Determining responsibility and accountability can be difficult:
    The issue of how to determine responsibility and accountability when a generative AI model goes wrong can be a real challenge, particularly when it comes to foundation models such as large language models (LLMs), where the original model developer, the vendor integrating the model and the end user may all have played a part.
  • Generative AI tools may not always work as intended:
    Although LLMs were initially accessed directly through chat interfaces, model developers increasingly publish application programming interfaces (APIs) to make it easier for other developers to integrate the models into their products. When using a tool built on such an API, it is important to note that the LLM behind the API could change as a result of further training or changes the third party makes to the underlying algorithms. This could break existing prompts and disrupt generative AI-based services (a simple defensive check is sketched after this list).
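
To make the consistency point above concrete, the following is a minimal sketch of prompt engineering that imposes an output structure. It assumes the OpenAI Python client with an API key in the environment; the model name, prompt wording and expense categories are illustrative assumptions rather than a recommendation, and other providers' APIs work similarly.

    # A minimal sketch: constraining a model's output to a fixed JSON
    # structure so responses are predictable enough to process downstream.
    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def classify_expense(expense: str) -> dict:
        prompt = (
            "Classify the following expense description as one of: "
            "Travel, Subsistence, Office, Other. Respond only with JSON "
            'of the form {"category": "<category>"} and nothing else.\n\n'
            f"Expense: {expense}"
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",   # illustrative model name
            temperature=0,         # reduces, but does not eliminate, variability
            messages=[{"role": "user", "content": prompt}],
        )
        # json.loads fails loudly if the model ignores the requested
        # structure, a useful signal that the output needs human review.
        return json.loads(response.choices[0].message.content)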
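
The repeated-questioning check described under inaccurate output can also be automated where answers are short and factual, so that exact matches are meaningful. The sketch below assumes a hypothetical ask_llm() helper wrapping whichever chat API is in use.

    # A minimal sketch of cross-checking an answer by asking the same
    # question several times. ask_llm() is a hypothetical placeholder.
    from collections import Counter

    def ask_llm(question: str) -> str:
        raise NotImplementedError("wrap your chat API here")

    def cross_check(question: str, runs: int = 5) -> Counter:
        # Models rarely repeat the same hallucination, so a claim that
        # appears in only one of several runs deserves verification
        # against an authoritative source.
        answers = [ask_llm(question).strip() for _ in range(runs)]
        return Counter(answers)

    # Example: if four of five runs agree and one differs, treat the
    # outlier with particular scepticism; no answer should be relied
    # on unverified.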
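
Finally, because the model behind an API can change without notice, one defensive check is to pin a dated model snapshot where the provider offers one and to re-run a known prompt and expected answer on a schedule, re-validating prompts whenever the reply drifts. The sketch below again assumes the OpenAI client; the snapshot name and golden prompt are illustrative.

    # A minimal sketch of detecting silent upstream model changes.
    from openai import OpenAI

    client = OpenAI()

    PINNED_MODEL = "gpt-4-0613"   # a dated snapshot, not a floating alias
    GOLDEN_PROMPT = "Reply with exactly the word OK."
    EXPECTED = "OK"

    def model_smoke_test() -> bool:
        # Run at deployment time and on a schedule; if the reply drifts,
        # the model behaviour may have changed and dependent prompts
        # should be re-validated before relying on the service.
        response = client.chat.completions.create(
            model=PINNED_MODEL,
            temperature=0,
            messages=[{"role": "user", "content": GOLDEN_PROMPT}],
        )
        return response.choices[0].message.content.strip() == EXPECTED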

How to mitigate the risks

Techniques such as reviewing training data, effective prompt engineering, considering ethics, and practising professional scepticism and critical judgement can all help to mitigate the risks, and are covered in more detail in the following sections.

As generative AI models and capabilities find their way into an increasing range of products and services, organisations may need to manage significant change programmes. Tech and data strategies may need to evolve. Governance regimes, policies, and controls will need regular evaluation if they are to remain sufficiently robust to assess and manage the potential risks. 

Although generative AI presents specific challenges, some risks are similar to those presented by other types of AI, and standards such as ISO/IEC 23894, a standard for risk management of AI, can help provide guidance on how to mitigate them.
