
Don't let AI tools land you in hot water

Author: ICAEW

Published: 12 Feb 2024

With the rise of Artificial Intelligence (AI), there are many benefits for professionals and firms to enjoy. However, it is critical that members continue to apply their professional judgement when using generative AI tools such as ChatGPT. When balancing the benefits and risks of the technology, the professional ethical implications of AI must be considered; we cover some of these principles in this article.


We don't need to point out the importance of adhering to confidentiality requirements in client work, but do you realise that inputting client data of any kind into tools such as ChatGPT is a potential breach? Once this data is in ChatGPT (or another Large Language Model), there is limited visibility and control over who the information is shared with, how it is secured and how long it is retained. Essentially, it’s in the public domain. Our advice is to approach with caution: if any of the information you are considering inputting into AI tools includes client data, it is very likely to breach client confidentiality.

We are aware that some firms have created their own secure versions of ChatGPT; where these exist, we advise members to follow their firm's guidelines.


It is important to note that AI tools are trained on data already available on the internet, which is very likely to contain inaccuracies and bias. There have been high-profile cases where bias in artificial intelligence systems was identified, leading to decisions being made on biased data. One example is Amazon in the US, which developed an AI programme to review the CVs of job applicants; the programme was found to discriminate against women applying for technical roles, as it had learnt that successful applicants in the years leading up to the project had been predominantly male.

Members must also be alert to the risk of automation bias: the tendency to favour information generated by a computer on the assumption that it is correct. This may not be the case, as it has been shown that where AI tools do not have an answer available, they will 'hallucinate' and make up information, including fake references and sources. Because this information often appears convincing, it is important to retain professional scepticism and question the output. Just as you would apply a level of review and oversight to the work of a trainee staff member, we would advise applying a similar level of review to information provided by AI.


Where AI is used to develop deliverables that are then shared with employers or clients, consideration should be given to the level of transparency required. Honesty about the use of generative AI is essential, especially where it has been used to support research or analysis being delivered to a client.


As with any new software system, it's vital to understand the technology, including its risks and limitations, and how to manage these effectively so that the output can be trusted. Under the principle of professional competence and due care in the ICAEW Code of Ethics, a professional accountant is required to exercise sound judgement in applying professional knowledge and skill when undertaking professional activities.

"Remember that AI tools are not fully qualified accountants," says Sophie Wales, Head of Regulatory Affairs and Policy at ICAEW. "I would advise members to be alive to the risks which come with AI. Make sure you apply appropriate professional scepticism and don't blindly trust the information that is provided. Be careful not to breach confidentiality, either of your client or employer. Please take a minute to consider the professional ethical implications of your use of AI."

ICAEW has published a comprehensive Generative AI Guide with input from other technical experts to help you explore the use of Generative AI, including identifying opportunities for where it can be used effectively and the ethical considerations to keep in mind.
