Topical guidance covering the application of PCRT to the ethical use of artificial intelligence tools

Published: 19 Jan 2026 (09:10 GMT)

This topical guidance is intended to help members in applying the PCRT fundamental principles when using AI tools for tax work.

Who is this guidance relevant to?

This guidance is relevant to any PCRT body member or regulated firm using (or considering using) artificial intelligence (AI) tools as part of their work when advising on UK tax matters affecting the tax affairs of any organisation or individual. This includes all members who work in practice, those working in business, and members who work in a public sector body or government department. As noted in paragraph 1.3 of PCRT, where the guidance refers to a ‘member’ (and ‘members’), this also includes ‘firm’ or ‘practice’ and the staff thereof.

Introduction

This is not intended to be technical guidance on the use of AI. Instead, it serves as topical guidance to complement the main PCRT guidance, specifically in the context of the ethical use of AI for tax work.

Where a member is not familiar with the use of AI but is seeking to use it as part of the services provided to a client, they may need to consider consulting an appropriate AI specialist in order to undertake the work (PCRT paragraph 2.11). When applying the fundamental principles to the use of AI tools, there is some overlap between areas of this topical guidance, and members should consider the guidance in its entirety.

Members have a responsibility at all times to adhere to the Fundamental Principles and the Standards for Tax Planning set out in PCRT and, for ICAEW members, the ICAEW Code of Ethics. Tax advisers have a responsibility to serve their clients’ interests whilst upholding the profession’s reputation and taking account of the wider public interest. Adhering to the principles and standards set out in PCRT will ensure that this is achieved.

Further assistance

A member should refer to PCRT and the associated helpsheets on the website. They can also seek further guidance at icaew.com/regulation, via the ICAEW Ethics Enquiry Line on 01908 248250 or ethics@icaew.com, and from the Support Members Scheme on 0800 917 3526 or support.members@icaew.com. Where appropriate, guidance may be required from specialist or legal advisers.

Members are ultimately responsible for any work they produce, and for regulated firms any work which the firm prepares, irrespective of the use of AI tools in its creation. It is essential to oversee any work facilitated by AI tools appropriately and diligently. PCRT paragraph 3.2 outlines the Standards for Tax Planning, and the requirement for members to apply professional judgement, which extends to the usage of AI. This guidance should be read alongside PCRT, as the general principles continue to apply.

If a member fails to adhere to the principles set out in PCRT they are liable to be subject to the disciplinary process.

What is AI?

The OECD currently defines AI as follows:

“An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”

Please note that throughout this document there are various references to AI models, tools and algorithms, as well as other terms which may be used interchangeably but refer to the same subject.

An appendix can be found at the end of this topical guidance, which includes some examples of the types of AI tools available.

Examples of how AI is currently used in the tax/accountancy sector

Example areas where AI tools are being used in the provision of services to clients are listed below – please note that this is not an exhaustive list.

In all cases it is important to remember that outputs from AI tools should not be treated as authoritative tax or legal advice; they should be reviewed by a qualified professional in the specific context of the client to whom the advice is being provided.

Tax Compliance Services

Collating potentially large amounts of data provided by a client and identifying the key information relevant to their tax filing obligations. This can range from the analysis of an investment portfolio, pension contribution reports and cryptocurrency transactions, to performing mixed funds analysis and the input of data onto a tax return.

Tax Advisory Services

In report writing, developing a comprehensive overview of a client’s position and the range of options available to the client. This can include, for example, the preparation of a document outlining the Inheritance Tax position for a client, the allowable lifetime gifts they could make, and the relevant thresholds and exemptions that would apply, all within the context of their personal circumstances and in line with the applicable laws and regulations.

Client Due Diligence processes

AI can assist with process automation in professional accountancy and tax services where Anti-Money Laundering and Client Due Diligence checks are performed. It can deal with large amounts of data and, by analysing and recognising patterns in that data, it can support identifying and verifying clients and performing risk analysis. It can use sources such as publicly available databases, sanctions lists, media screening and facial recognition in images using a wide range of tools.

Mergers/Demergers & Acquisitions

Collecting and processing data to identify risks and opportunities for a client as part of a project, including identifying potential acquisition targets through performing an analysis of the target’s performance and the market conditions. AI can be used to perform due diligence processes, calculate valuations and highlight areas of concern, all within a governance framework.

Technical research

Performing a review of laws, regulations and case law, alongside guidance available from regulators, professional bodies and tax authorities to identify areas of contention or concern as part of the services provided to clients. This can also include understanding the interpretation of a law based on a particular set of facts and circumstances.

AI ethics and PCRT – the fundamental principles

1. Integrity

1.1 The fundamental principle of integrity requires members to be straightforward and honest in all professional and business relationships. Integrity also requires a member not to be associated with misleading information.

1.2 Transparency is an important element of ethical AI usage. Members should consider the level of transparency provided to clients in relation to the use of AI tools in any work performed.

1.3 Including an appropriate statement in the engagement letter which specifies the potential use of AI tools (eg, indicating that AI-enabled software may be used) can help support transparency with the client. Consideration should be given to disclosure to a client of any actual use of AI tools at the time that the deliverables are provided. Disclosing the use of AI tools can assist a member in upholding this principle by not knowingly or carelessly misleading a client by omission (PCRT paragraph 2.4).

1.4 Where a client queries an element of the work, members should be able to explain how the conclusion was arrived at, even if the work was generated by AI. Members should avoid overstating the accuracy of the data output or misleading the client on any conclusions made. For instance, if the AI tool uses guidance from a website, including HMRC guidance on gov.uk, and the client raises a query on this within the work provided, the member should be able to explain the basis on which the guidance applies and provide a reference to the data source if appropriate, which will help avoid misleading by commission or omission. The concept of “explainability” is covered in greater detail in section 3.1 below.

1.5 Clients expect to be able to trust the services provided by a member. Therefore, careful consideration should be given to the level of reliance placed on any AI-generated data. For instance, if an AI tool identifies several deductible items a client could potentially claim, and some of these are based on material uncertainty in the law (PCRT paragraph 3.6), the member should disclose this information transparently to the client. The member should be mindful of the risks of using AI tools in relation to the reliability of the output rather than blindly accepting the AI output. Undertaking appropriate due diligence on the output provided by an AI tool can help safeguard against the risks, and this is covered in further detail in the professional competence and due care section below.

1.6 A member is responsible for the work done by staff and others under their supervision. The implementation of safeguards to effectively mitigate the risks of using AI tools can support members in applying the fundamental principle of integrity to their work.

Ethical risks and possible safeguards – integrity

A lack of transparency with clients about the potential use of AI tools in the delivery of services.

Members may consider including a disclosure in the engagement letter that AI-enabled software may be used in providing services. Consideration should be given to disclosing to a client the actual use of any AI tools used in the deliverables.

Where the use of AI is fundamental to the deliverable, it may be appropriate to inform the client directly prior to commencement of the work.

Overstating the accuracy of work performed with the assistance of AI tools.

Clients and colleagues (including those in industry roles) may query the work presented to them and the basis for the conclusions reached. Members should ensure that they can justify the conclusions reached by the tool, including which sources of information it has drawn upon.

This may, for instance, include outlining any assumptions made in generating the results, or the requests (known as prompts) given to the AI tool generating the content. This can assist a member in not misleading on the data output.

Where relevant, members should also consider downloading and retaining copies of relevant webpages and screenshots of prompts as part of their audit trail (an illustrative sketch follows this list of risks and safeguards). This can be used for future reference and to evidence that reasonable care was taken.

Staff use AI in their work inappropriately and/or do not disclose its use to senior colleagues.

When a staff member uses an AI tool in their work, they should disclose this to senior colleagues. This would apply equally to members in business and those in professional practice. Members should consider maintaining an audit trail, including details of the AI tools used in case of subsequent queries.

For example, an AI usage policy may outline the acceptable use of AI tools and staff disclosure where an AI tool has been used, which can help safeguard a firm against unknown/inappropriate use of AI.
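
As an illustration only, the sketch below (in Python) shows one way a firm might keep such an audit trail programmatically. The folder location, record fields and function names are assumptions made for this example; PCRT does not prescribe any particular tooling.

# audit_sketch.py - illustrative only; file locations and field names are
# assumptions for this example, not requirements of PCRT.
import json
import urllib.request
from datetime import datetime, timezone
from pathlib import Path

AUDIT_DIR = Path("ai_audit_trail")  # hypothetical folder for retained evidence
AUDIT_DIR.mkdir(exist_ok=True)

def retain_webpage(url: str) -> Path:
    """Download and retain a copy of a source webpage for future reference."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    target = AUDIT_DIR / f"source_{stamp}.html"
    with urllib.request.urlopen(url) as response:
        target.write_bytes(response.read())
    return target

def log_prompt(tool_name: str, prompt: str, sources: list[str]) -> None:
    """Append a dated record of a prompt and the sources relied on."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool_name,
        "prompt": prompt,
        "sources": sources,
    }
    with open(AUDIT_DIR / "prompt_log.jsonl", "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")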

2. Objectivity

2.1 This principle requires members to avoid bias, conflict of interest or undue influence overriding their professional and business judgements. Bias can be introduced at any stage, such as in the development of the tool, the data it is trained on, and the interpretation of the output by the user.

2.2 Bias that occurs through the generation and interpretation of the results is known as “automation bias”. This is a tendency to favour output generated from automated systems, even when human reasoning or contradictory information raises questions as to whether such output is reliable or fit for purpose.

2.3 To mitigate the risk of bias, it is important for members to have an awareness of the data sources used (where possible). Members should also have an awareness of how their own unconscious bias may impact their interpretation of data sources and the generated results. The risks can be managed, for example, by using data from known and trusted sources that are not influenced by subjective opinions or biased information, or by ensuring that prompts do not include biased or subjective phrasing. AI tools are designed to create a response that satisfies the request rather than to construct an unbiased and fair output.

2.4 Managing the risk of bias can also be achieved through reviewing the AI tool’s output to identify common themes or tendencies in the results (eg, favouring particular groups, races or genders) which may indicate a level of bias in the tool’s processes. Performing sense checks on data and using professional judgement can help mitigate the risks. This is covered further in the professional competence and due care section below.

Ethical risks and possible safeguards – objectivity

A member is interested in using an AI tool in their business after hearing positive things from somebody who works for a firm that has adopted an internally built tool.

The internal model has been designed to use a restricted number of trusted data sources in order to limit the risk of bias or subjective data influencing the generated results.

The member is unsure which tool to use, and decides to try out a free publicly available tool.

The member should be conscious that every AI tool is different, and the publicly available tool they plan to use has not been designed in the same way as an internally built tool using ring-fenced data.

The tool will not necessarily generate an accurate or unbiased output and will not have the same restrictions in place on the data sources it uses. The background data may include websites which are subjective and include bias towards a particular tax planning approach.

Whilst this would not necessarily preclude a member from using a publicly available tool, they should use their professional judgement to determine which tool is appropriate.

To help safeguard against the impact of biased or subjective data, the member should seek to understand which data sources have been used and how they may have influenced the results.

The member would also need to have due consideration for the fundamental principle of confidentiality when using public AI tools (see section 4 below).

A member regularly uses a specific AI tool which has provided effective results for a number of clients/pieces of work previously. The tool is designed to assist on scenarios where there is a common need eg, a high net worth individual who is a non-resident taxpayer, or specific types of VAT claims for a firm completing internal filings.

The member wants to use the tool for other clients/pieces of work with different circumstances, as a way of saving time based on their experience of using the tool previously. The tool has been designed to incorporate assumptions which do not reflect the circumstances of the work the member now wants to use it for.

Understanding the limitations of an AI tool is an important factor in being able to identify where this may not be suitable for a particular client or piece of work. This can be achieved through training on how to use the tool, understanding what it has been designed to achieve and recognising potential assumptions and bias within the data (covered in section 3 below).

To mitigate the risk of bias, the data output should be assessed to identify any prejudice in the results from the data sources, or the way in which the AI model has interpreted these.

Within the Standards for Tax Planning outlined in PCRT, the client specific standard (see paragraph 3.2) requires that tax planning must be specific to the particular client’s facts and circumstances.

Using a tool to save time should not come at the expense of increased exposure to bias, or of relying on data which does not apply to the current circumstances.

A member is familiar with and regularly uses a specific AI tool.

An updated version of this AI tool is introduced which promises additional functionality, and the member wants to start using this.

The use of an updated tool should be approached with caution until the operational effectiveness of the tool has been demonstrated to be at least of the same standard as the earlier version.

3. Professional competence and due care

3.1 As outlined in PCRT paragraph 2.11, members must carry out their work with proper regard for the technical and professional standards expected of them. Members are also required to maintain professional knowledge and skill at the level required to ensure competent professional services are provided (PCRT paragraph 2.2).

3.2 Members need to ensure that they are sufficiently competent in the services that they provide to clients, and this extends to the use and implementation of AI tools. Competence can be maintained through continuing professional development, ensuring an understanding of the relevant technical, professional, and business advancements enabled by the use of AI. Where a member wishes to incorporate AI tools into the services they provide, but they are not familiar with the tool and appropriate and sufficient training has not been undertaken, they should consider consulting an appropriate specialist (see PCRT paragraph 2.11).

3.3 The principle of professional competence and due care requires members to ensure that staff receive appropriate training for any AI tool used. This enables them to understand how the tool functions, interpret its outputs, and explain this information accurately to clients.

3.4 The output from an AI tool should also be regarded as if it were prepared by a less experienced junior colleague and reviewed with appropriate scepticism. AI tools are known to ‘hallucinate’ information in the data output in order to generate responses to satisfy the input request. Hallucinations are where an AI tool (often a Generative-AI tool) produces data which is nonsensical and/or inaccurate, but which is presented as factual in the response. This can result in inaccurate and misleading information being included in work if this is not identified through performing sense checks of the output. Confirming the existence of case law or legislation referenced within the output can help mitigate the risks associated with hallucinations (as seen in the case of 'Harber v HMRC [2023] UKFTT 1007 (TC)').
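
To illustrate the kind of sense check described in paragraph 3.4, the sketch below (Python; the regular expression and example text are assumptions, and cover only common UK neutral citation formats) extracts apparent case citations from an AI output so that each can be verified manually before any reliance is placed on it.

# citation_check.py - illustrative sketch; matching a citation pattern does
# not confirm that the case exists, so each match still needs manual checking.
import re

# Matches forms such as "[2023] UKFTT 1007 (TC)" or "[2021] EWCA Civ 1234".
CITATION = re.compile(
    r"\[\d{4}\]\s+[A-Z]{2,6}\s+(?:Civ\s+|Crim\s+)?\d+(?:\s+\([A-Z]{1,4}\))?"
)

def citations_to_verify(ai_output: str) -> list[str]:
    """Extract apparent case citations for manual verification against
    the tribunal's or court's own records."""
    return sorted(set(CITATION.findall(ai_output)))

draft = "Following Harber v HMRC [2023] UKFTT 1007 (TC), the tribunal held..."
for citation in citations_to_verify(draft):
    print("Verify before relying on:", citation)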

3.5 As highlighted at the outset of this topical guidance, members remain ultimately accountable for any work produced, regardless of whether AI has been involved in producing the work or refining work already produced. Due care can be applied to the use of AI models through performing due diligence on the output from the AI tool using a risk-based approach. Where the member has control over the design of the model, the model would also need to be updated to ensure it remains accurate, relevant and complies with any regulatory changes.

3.6 Exercising due care and professional scepticism enables a member to determine if the data output accurately represents the client's specific circumstances.

3.7 Besides ensuring services are performed competently, members should also ensure their work is based on current developments in practice, legislation, and techniques. Using outdated AI models can lead to compliance risks and incorrect advice, which may result in breaching the PCRT Standards for Tax Planning. For instance, if an AI tool output refers to a specific tax-saving strategy, the member must exercise due care, review the data produced by the tool, and ensure it aligns with current legislation (see PCRT paragraphs 2.10, 2.13 and 2.14).

Ethical risks and possible safeguards – professional competence and due care

Using an AI model to produce a report for tax planning purposes, without understanding which data sources the tool relies on to generate its response.

Obtain a broad understanding of the AI tool to ensure that it is appropriate for the task being completed. Data sources referenced and content produced should be relevant to the client’s specific circumstances and in line with the Standards for Tax Planning outlined in PCRT paragraph 3.2.

Members are at risk of relying on hallucinated content such as non-existent case law or subjective data sources which promote schemes which are irrelevant to a client’s tax affairs, or worse are fictitious or contrary to the applicable laws and regulations.

Where publicly available AI tools are used, the sources used by the tool to generate the response should be assessed to determine their accuracy and objectivity. The generated response should also be reviewed to ensure that this accurately reflects the source data, is based on factual information and is in line with the relevant laws and regulations.

By performing a review of the generated data, a member can also identify discrepancies and inaccuracies in the content of the work, including potential hallucinations which do not reference existing accurate resources.

A member wants to adopt the use of an AI tool within their firm, but the relevant staff do not currently have sufficient or appropriate training on the tool.

Members need to ensure that they have undertaken sufficient training in order to be competent in using an AI tool.

Each tool is designed differently, and will require varying levels of training to understand how to use it correctly. It is not possible to define how much training is required as this will depend on the proficiency of the member, or the relevant staff members, as well as the task being undertaken.

If the tool is used without sufficient competence, the work produced may not be of an appropriate standard due to errors or bias. The firm may want to arrange for relevant staff to complete some training, or provide resources for some staff to be initially trained before wider adoption.

For example, generative-AI tools may produce a different response each time a new prompt is submitted, even if the wording of the prompt remains unchanged. Training on how to structure prompts effectively to produce accurate results can support a member in developing their competency in using AI tools (an illustrative prompt structure follows this list of risks and safeguards).

A client has provided information to a member which has been generated using an AI tool.

Members should exercise due care and professional scepticism when data is provided by a client or a third party which may have been generated by AI. This would include considering the reasonableness of the data in relation to the client’s specific circumstances.

A firm has started using an AI tool to transcribe minutes from recorded meetings. This is assisting teams by quickly summarising the key points discussed.

Some members may record meetings with clients or third parties. AI may be used to generate the minutes, which may then inform the advice given or form a factual section of an advice note.

The minutes should be reviewed to ensure that they are complete, accurately reflect the discussions, and are professionally worded.

Using tax return software that can automate the preparation and submission of a tax filing and can be configured to make human review optional.

Obtain an understanding of the workflows that can be automated and the options for ensuring that the data and the submission can be subject to an appropriate review.

By performing a review of the data and the draft submission, a member can identify discrepancies and inaccuracies before submission.
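
By way of illustration of the prompt-structuring point above, the short sketch below (Python; the headings, wording and function name are assumptions about what a well-scoped prompt might contain) assembles a prompt that states the task, the anonymised context and the constraints of the request, making the output easier to review and reproduce.

# prompt_template.py - an illustrative structure only, not a prescribed format.
def build_prompt(task: str, context: str, constraints: str) -> str:
    """Assemble a prompt stating the task, the relevant (anonymised)
    facts and the boundaries of the request."""
    return (
        f"Task: {task}\n"
        f"Context (anonymised): {context}\n"
        f"Constraints: {constraints}\n"
        "Cite the source for every statement of law or HMRC guidance."
    )

print(build_prompt(
    task="Summarise capital gains tax reporting deadlines for UK residential property.",
    context="An individual UK-resident taxpayer disposing of a second home.",
    constraints="UK tax year 2024/25; rely on gov.uk guidance only.",
))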

4. Confidentiality

4.1 Members may only disclose information to third parties with proper and specific authority from clients, unless there is a legal or professional right or duty to disclose (see PCRT paragraph 2.16). Having due regard for confidentiality (that of clients, as well as third parties, former clients, etc.) extends to the use of AI.

4.2 Some organisations have established internal, ring-fenced AI models with strict controls over the handling of client data to mitigate the risk of a breach of confidentiality. The input of client data into publicly available AI tools is likely to constitute a breach of client confidentiality, unless the client has consented to this.

4.3 As covered in the integrity section above, members may want to consider reviewing the engagement terms in place with a client regarding how data may be processed eg, inputting client data or a member’s own work into an AI system. Clients can also be directed to a data handling/AI usage policy on the firm’s website.

4.4 Data input into publicly available AI models should be anonymised and generic to ensure that the client cannot be identified from the information and that client confidentiality is upheld.
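
As a minimal sketch of what such anonymisation might look like in practice (Python; the patterns below are assumptions covering only a few common identifiers, so redacted text should still be reviewed before submission), rule-based redaction can replace obvious identifiers with placeholders:

# redact_sketch.py - illustrative only; no pattern list can catch every
# identifying detail, and combinations of details may still identify a client.
import re

PATTERNS = {
    "NINO": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),  # National Insurance number (approximate)
    "UTR": re.compile(r"\b\d{10}\b"),                           # Unique Taxpayer Reference
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact(text: str) -> str:
    """Replace common identifiers with placeholders before any text is
    entered into a publicly available AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Client AB123456C (UTR 1234567890, jo@example.com) sold shares."))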

4.5 When information is entered into publicly available AI tools, control over that data is relinquished. The data may become part of the public domain as source information for the AI tool. The storage and retention policies of these tools are not governed by a member, resulting in loss of control over data management. It is unlikely the data handling practices of these tools will align with the firm's policies, potentially leading to unintended outcomes such as data being stored and retained by third parties or in overseas locations.

4.6 Whilst not covered in this topical guidance, members will also need to consider any data handling requirements, and specialist legal advice may be required. Further information on AI and data protection can be found on the Information Commissioner’s Office website.

Ethical risks and possible safeguards – confidentiality

Confidential client or business data being input into a publicly available AI tool without the consent of the client or business.

Client information must not be disclosed to third parties without proper and specific authority, unless there is a legal or professional right or duty to disclose (PCRT paragraphs 2.2 and 2.16).

Members also need to consider the legal requirements on the handling of data, including both the UK GDPR and the Data Protection Act (DPA). This should be incorporated into the policies and procedures adopted by a member/firm when using publicly available AI tools.

Consideration should be given to using AI tools that are not made for public use, and where this is not possible, any data should be anonymised.

Due care should also be given to ensuring the anonymised data does not allow for a client to be identified by other means or through a combination of other sources. Even if a client/business name is not disclosed to the public AI tool, it may be able to identify them from details which would typically be associated with the client/business eg, an uncommon service they are known to provide.

Taking steps to maintain both client/employer confidentiality and control over the handling of data in line with the policies in place can help safeguard against such risks.

Disclosing client-specific data to a publicly available AI tool where the data also relates to a third party as part of a transaction (eg, a company acquisition), with details subject to an NDA.

In addition to the handling of client data, information that may relate to a third party needs to also be considered. For instance, by inputting confidential data into publicly available AI tools, a member may be disclosing details relevant to a third party.

Consideration needs to be given to the handling of this data, including the requirements of both the UK GDPR and the DPA.

A client has contacted the member requesting that AI tools are not used to prepare any work in relation to their affairs.

The member should acknowledge the client’s request and may need to seek specialist advice.

The member may consider discussing with the client what they would regard as an unacceptable use of AI tools eg, using a spreadsheet software package vs a generative AI tool, and whether it is feasible to apply this to the engagement.

5. Professional behaviour

5.1 The fundamental principle of professional behaviour requires members to comply with all relevant laws and regulations, and where AI tools are used this means ensuring that any AI generated data used as part of the work completed complies with a member’s legal and regulatory obligations (see PCRT paragraph 2.23).

5.2 This principle encompasses all aspects of a member’s business dealings, including the use of AI. In order to meet the requirement to ensure work is not performed improperly, inefficiently, negligently or incompletely, a member should be aware of the limitations of any AI tools used. When adopting AI tools to improve efficiencies, members need to ensure that they both consider and meet their ethical responsibilities and legal requirements.

5.3 By understanding the limitations of the various AI tools under consideration, a member can identify the appropriate use of a tool whilst also mitigating the risk of any irresponsible use. For example, a tool may produce results that include the proposed implementation of a highly artificial or highly contrived tax scheme, or include advice where there is material uncertainty in the law.

5.4 Tax planning should be based on a realistic assessment of the facts and on a credible view of the law. The Standards for Tax Planning outlined in PCRT paragraph 3.2 cover this in further detail within the “client specific” and “advising on tax planning arrangements” standards.

5.5 Where a member identifies a tool which generates such results, additional caution should be taken when using the tool, to avoid bringing the reputation of the firm and the profession into disrepute.

5.6 This also extends to the irresponsible use of AI models, such as insufficient due care being afforded to the review of any output and the handling of confidential client data as outlined above. Members are required to exercise professional judgement when using AI tools (see PCRT paragraph 3.2) as part of their work.

5.7 Members must behave with courtesy and consideration towards all with whom they come into contact in a professional capacity (PCRT paragraph 2.22). Where AI has been used to prepare correspondence, this should be reviewed to ensure that the tone and content of the correspondence are appropriate.

Ethical risks and possible safeguards – professional behaviour

A member may seek to use an AI model to assist in preparing a tax planning report for a client.

The AI model selected incorporates a particular website into the generated response which promotes/outlines the use of an aggressive tax planning scheme which does not comply with the PCRT Standards for Tax Planning. This scheme is marketed on the website and is generic in nature rather than client specific.

Members need to ensure that work provided to a client complies with both a member’s ethical responsibilities and meets the relevant laws and regulations in place.

By reviewing the report, and analysing the output generated by the AI tool, the member is expected to identify where the proposed tax planning does not comply with these points.

The member should also be aware of the limitations of the AI tool and not provide a report to the client which has been prepared improperly, inefficiently, negligently or incompletely because it includes this tax planning arrangement.

Should the member identify a tool which has generated results which are incompatible with the fundamental principles of PCRT, additional caution should be taken when using the tool for future work.

A member has received correspondence from HMRC relating to their client/employee. They have decided to use an AI tool to generate a written response to HMRC.

The tool has generated a response which includes inappropriate language and is written in an unprofessional tone.

Members must always act in a way that will not bring them or their professional body into disrepute. They must also behave with courtesy and consideration towards all with whom they come into contact in a professional capacity (PCRT paragraphs 2.21 – 2.22).

In order to safeguard against the risk of not communicating in a professional manner, the member should review the AI generated response to ensure that this is appropriate before issuing the correspondence to HMRC.

Should members have any queries in relation to professional standards or the application of PCRT when using artificial intelligence, please contact the ICAEW Ethics Enquiry Line on 01908 248250 or ethics@icaew.com.

Appendix – what types of AI tools are there?

There are various tools available that can perform similar functions and achieve comparable results depending on their usage. This includes both publicly accessible and restricted versions of the same tool, as well as tools specifically developed for or by an organisation.

Even if a tax professional does not knowingly use AI tools, it can be useful to be aware of what is available. Software packages often have AI tools (or similar) embedded within them. Below are some examples of the types of tools available, though this is not an exhaustive list, and new tools may have emerged since this guidance was published:

Machine learning

Machine learning is focused on algorithms, statistical models and the analysis of data.

It is designed to identify patterns and make decisions based on inference from the data, rather than requiring explicit instructions to function. It learns and improves from the data without being explicitly programmed. It also “learns” from human input to make future automated decisions (see ANNs below).

It can be used in areas such as fraud detection, credit scoring and managing financial data. In the tax sector, machine learning can allow for repetitive, manual tasks to be automated. This is enabled through analysing large amounts of data quickly, collating, distilling and breaking it down into the desired format that is understandable for the end user. One such example is in the analysis of data in a spreadsheet, using tools built into the software package to provide insight and summaries of the data.
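
As a toy illustration of the "learning from examples" idea described above (Python, assuming the scikit-learn library is installed; the data and categories are invented, and real categorisation work would need far more data and validation), a small model can be trained to categorise expense descriptions without any explicit rules being written:

# ml_sketch.py - a toy supervised-learning example, not production code.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# A tiny labelled training set: transaction description -> expense category.
descriptions = [
    "train ticket london", "adobe subscription", "client lunch",
    "rail fare leeds", "software licence renewal", "restaurant meeting",
]
categories = ["travel", "software", "entertaining",
              "travel", "software", "entertaining"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(descriptions, categories)  # the model infers patterns from examples

# The model generalises to unseen descriptions rather than following rules.
print(model.predict(["taxi fare manchester", "annual software subscription"]))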

Computer vision

Computer vision is a subset of machine learning. Through the use of “deep learning”, the system operates complex neural networks and helps, for example, to identify objects and people by recognising patterns and determining the content of images. A common use is in facial recognition technology used in surveillance systems.

Computer vision also utilises Artificial Neural Networks (ANNs). These are types of computing systems that are vaguely inspired by the biological neural networks that constitute animal brains. These systems “learn” to perform tasks by considering the examples presented to them, generally without being programmed with task-specific rules like some other tools.

The training of machine learning models allows specific tools which use computer vision to recognise these patterns and make predictions. A common use can be seen in the automotive industry and reviewing assembly line production processes, whilst within tax it can analyse scanned documents and make decisions based on patterns within the data.

Natural Language Processing (NLP)

Natural language processing is a subfield of AI designed to understand, interpret and then generate human language. It is commonly recognised for its use in chatbots and language translation, including its use in Generative-AI (Gen-AI) tools which make use of large language models (LLMs).

The LLMs are designed to understand, generate, and interact using human language. They can perform a variety of language tasks as a result of the vast amount of text data that they are trained on.

Gen-AI refers to algorithms that can generate new content, eg, text, images, videos or music, based on their training data. Tools such as ChatGPT, Gemini, Siri and Alexa incorporate a variety of these models into their overall service. Gen-AI is a form of deep learning that generates statistically probable outputs based on the data input, and the tool seeks to understand patterns in the data to allow it to create new content.

These tools rely on extensive data sets, typically from a diverse range of resources, and can be used to assist firms in drafting and reviewing tax reports for clients, or to provide a chatbot on websites to assist with client queries.
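
As a minimal sketch of how a firm's system might call a hosted LLM (Python, assuming the openai package and an API key are available; the model name and prompts are illustrative rather than recommendations), note that the output would still require review by a qualified professional before any client use:

# genai_sketch.py - illustrative only; assumes the `openai` Python package
# (v1.x) and the OPENAI_API_KEY environment variable. The model name is an
# example, not an endorsement.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "You draft factual summaries and cite sources. Do not give advice."},
        {"role": "user",
         "content": "Summarise, with sources, the current UK personal allowance rules."},
    ],
)
print(response.choices[0].message.content)  # must be reviewed before use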

Robotic Process Automation (RPA)

Whilst not true AI tools, RPA tools use software to automate repetitive tasks, and are typically used where a large volume of information needs to be processed. Tax professionals may use RPA tools to process and submit a large number of tax returns through their tax filing software.
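
A minimal sketch of the RPA idea follows (Python; submit_return is a hypothetical stand-in for whatever interface a firm's filing software actually exposes, and the folder name is invented):

# rpa_sketch.py - simple scripted automation, not true AI. `submit_return`
# is a hypothetical placeholder for a filing-software submission call.
from pathlib import Path

def submit_return(return_file: Path) -> str:
    """Hypothetical placeholder for a submission to tax filing software."""
    return f"queued {return_file.name}"

# Process every prepared return in a folder, printing a simple result log.
for return_file in sorted(Path("prepared_returns").glob("*.xml")):
    print(submit_return(return_file))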

While every care has been taken in the preparation of this guidance the PCRT Bodies do not undertake a duty of care or otherwise for any loss or damage occasioned by reliance on this guidance. Practical guidance cannot and should not be taken as a substitute for appropriate legal advice.

Update History
19 Jan 2026 (09:10 GMT): Guidance published on website.