Generative AI Guide

Legal considerations of generative AI

Dr Sam De Silva, Partner and Global Co-Head of the Commercial Practice Group at international law firm Cameron McKenna Nabarro Olswang LLP (CMS) and a member of the ICAEW Tech Board, shares his thoughts on the legal and regulatory issues you should consider when using generative AI in your organisation.

As of August 2023, various jurisdictions, including the EU, UK, US and China, are still in the process of developing and implementing their regulatory regimes for AI. However, even without detailed AI-specific legal and regulatory obligations, organisations should consider the following:

Confidentiality and data protection

Organisations should not put confidential information or personal data into a Generative AI tool. Entering such data into a Generative AI tool may not be permitted under data protection law, or may breach an obligation of confidentiality owed to a third party. In any event, an organisation has no oversight of, or control over, how any data entered into a Generative AI tool will be used, and cannot guarantee the security of that data.

In terms of data protection, organisations must ensure that the use of Generative AI tools complies with applicable data protection laws. This means that organisations should not enter any personal data into Generative AI tools. Similar considerations apply in relation to confidential information belonging to a third party. 
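
As a practical illustration of enforcing these rules, the Python sketch below screens a prompt for obvious identifiers before it leaves the organisation. The patterns are hypothetical, and pattern matching alone will not satisfy data protection or confidentiality obligations; it is, at best, a first line of defence alongside policy and training:

    import re

    # Illustrative patterns only; real personal data takes far more forms
    # than regular expressions can reliably catch.
    PATTERNS = {
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "UK PHONE": re.compile(r"\b(?:\+44\s?\d{4}|\(?0\d{4}\)?)\s?\d{3}\s?\d{3}\b"),
        "NI NUMBER": re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b"),
    }

    def redact(prompt: str) -> str:
        """Replace anything matching a known identifier pattern with a
        placeholder before the text is sent to an external tool."""
        for label, pattern in PATTERNS.items():
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
        return prompt

    print(redact("Summarise this email from jane.doe@example.com about the merger."))
    # Summarise this email from [EMAIL REDACTED] about the merger.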

Output verification

Outputs from a Generative AI tool should be checked carefully. Generative AI tools can produce output that looks credible but is factually inaccurate. This is because Generative AI models generate new content based on learned patterns – they provide their ‘answer’ by choosing the word or words that seem most likely to come next in the sentence, rather than acting as a search engine.

For example, when asked “Where does Sam De Silva work?”, ChatGPT said: “I’m sorry, but as an AI language model, I don’t have access to real-time information, and my knowledge only goes up to September 2021. As of my last update, I don’t have specific information about someone named Sam De Silva and their current workplace.” (In contrast, putting “Sam De Silva” into Google returned the CMS and LinkedIn profiles for Sam De Silva as the top two results.)
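
To make the next-word mechanism concrete, the toy Python sketch below always picks the statistically most likely continuation from a hard-coded table. It is a deliberate simplification (real models learn probabilities over huge vocabularies) and not how any particular product works, but it shows why output can be fluent without being fact-checked:

    # A hard-coded stand-in for the probabilities a model learns from
    # its training data.
    learned_patterns = {
        ("the", "capital"): {"of": 0.9, "city": 0.1},
        ("capital", "of"): {"France": 0.5, "the": 0.3, "Spain": 0.2},
    }

    def next_word(last_two: tuple[str, str]) -> str:
        """Return the most probable continuation - plausible, not verified."""
        options = learned_patterns.get(last_two, {"[unknown]": 1.0})
        return max(options, key=options.get)

    print(next_word(("the", "capital")))  # "of" - fluent, but no fact-checking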

Any output from a Generative AI tool should be treated with caution and any statements included in the output should be verified using reliable sources.
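
One way to operationalise that caution is to treat every generated statement as unverified until a human has checked it against a reliable source. The Python sketch below is a minimal illustration of such a gate; the class and function names are hypothetical:

    from dataclasses import dataclass, field

    @dataclass
    class GeneratedStatement:
        text: str
        verified: bool = False
        sources: list[str] = field(default_factory=list)

    def mark_verified(statement: GeneratedStatement, source: str) -> None:
        """Record the reliable source a human reviewer checked the statement against."""
        statement.sources.append(source)
        statement.verified = True

    def publishable(draft: list[GeneratedStatement]) -> bool:
        """Only release output once every statement in it has been verified."""
        return all(s.verified for s in draft)

    draft = [GeneratedStatement("Sam De Silva is a partner at CMS.")]
    assert not publishable(draft)               # unverified output is not relied on
    mark_verified(draft[0], "https://cms.law")  # a human checks a reliable source
    assert publishable(draft)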

There are various liabilities that could arise for an organisation or person using incorrect output from Generative AI. For example, such an organisation or person (i.e. the user) could be exposed to claims from a third party, including: a defamation claim; a claim for misrepresentation; liability in negligence; or a claim for breach of contract.

As there are many parties involved in a Generative AI tool (data provider, software owner, developer, user and the AI tool itself), liability is difficult to establish when something goes wrong and there are many factors to be taken into consideration, such as:

  • Were the instructions of the AI tool followed? Did the AI tool have any general or specific limitations and were they communicated to the user?
  • Was the damage caused while the AI tool was still learning?
  • Did the AI tool incorporate open source software?
  • Can the damage or loss be traced back to the development of the AI tool, or was there an error in the implementation by its user? 

It should be noted that specific rules are being formulated in different jurisdictions to deal with the liability risks posed by Generative AI tools. 

Create clear policies and provide training on responsible use

To promote the responsible use of Generative AI, organisations should create clear policies and provide training on how to use Generative AI tools responsibly. They should also investigate which Generative AI systems are most commonly used by individuals working for the organisation and ensure appropriate policies and training are in place for those systems.

Organisations should also consider whether a secure version of the Generative AI tool is available. Rather than using a publicly available version, it may be sensible to deploy a version of the tool on the organisation’s own infrastructure, so that input data is not shared with the Generative AI tool provider. Any such version will still need to be subject to a security risk assessment by the organisation.
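
As an illustration of that deployment pattern, the minimal Python sketch below sends prompts to a hypothetical endpoint hosted on the organisation’s own infrastructure rather than to a public service. The URL, payload and response format are assumptions; they will depend on the tool actually deployed:

    import requests

    # Hypothetical internal endpoint; with a self-hosted deployment,
    # prompts never leave the organisation's own infrastructure.
    INTERNAL_ENDPOINT = "https://genai.internal.example.org/v1/generate"

    def generate(prompt: str) -> str:
        """Call the internally hosted model rather than a public service."""
        response = requests.post(INTERNAL_ENDPOINT, json={"prompt": prompt}, timeout=30)
        response.raise_for_status()
        return response.json()["text"]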

What are the legal considerations related to the use of generative AI?

  • Existing laws and AI-specific laws 
    A patchwork of other existing laws relating to cyber and operational resilience, competition, employment, product safety, content moderation, environment, human rights and consumer protection will apply to the development and use of Generative AI to some degree, but there are likely to be gaps, given that these laws were not written with AI in mind.

    Jurisdictions are starting to enact AI-specific laws, including the EU’s AI Act. In contrast with the EU, the UK has decided (at least for the time being) not to proceed with legislation regulating AI. Instead, the UK has proposed in a white paper that existing regulators use existing laws to regulate AI in a manner that is appropriate to the way in which AI is used in the relevant sector. The intention is to develop a context-based, proportionate and adaptable approach to regulation that supports the responsible application of AI.

    According to the white paper, these regulators will be issuing practical guidance to organisations during the next 12 months, to explain how the following five principles should be applied in the relevant sector:

    1. safety, security and robustness;
    2. appropriate transparency and explainability;
    3. fairness;
    4. accountability and governance; and
    5. contestability and redress.

    China, Canada, India and various US states are also proposing and enacting AI-specific laws, while the US Blueprint for an AI Bill of Rights and Singapore’s Model Artificial Intelligence Governance Framework set out principles for responsible AI use and governance.

  • Intellectual property and copyright considerations 
    In terms of what data can be used to train a Generative AI tool, materials in the public domain may not be confidential information, but they may be protected by copyright or database rights. Using such materials to train a Generative AI tool may then constitute infringement of that copyright or those database rights.

    In terms of the output of a Generative AI tool, it is not entirely clear who should be treated as the author or the owner of the output, or what someone using a Generative AI tool is permitted to do with that output. While questions of authorship and ownership (and whether legislative changes are required to cater for AI) are being considered, anyone wishing to use the output from a Generative AI tool should check the terms of use for that particular AI tool to see what is or is not permitted.

    It is also not entirely clear to what extent the output may be considered an infringement of any copyright works used to train the Generative AI tool, and, given the lack of transparency around training data, this risk is difficult to assess. Where output is considered infringing, there could be potential exposure for users of a Generative AI tool. Again, pending any legislative changes, anyone wishing to use output should check the terms of use for the particular AI tool to see if there are appropriate protections provided by the operator of the AI tool.
  • Legal responsibility and accountability  
    Given the nature of Generative AI, it seems more likely that it will be used to assist in decision-making and in providing advice. Any resulting harm to individuals is therefore likely to be indirect (compared with, say, an AI-powered vehicle causing direct harm).

    Where a service provider has used Generative AI to assist in decision-making or to provide advice, and the output causes harm to individuals or organisations, this may give rise to a negligence claim against the service provider. However, it remains to be seen how the courts will treat any such claim. In the meantime, it is important that, rather than relying on output from a Generative AI tool, the individual or organisation using that output verifies it using reliable sources.

How can the legal challenges be managed?

Anyone wishing to use the output from a Generative AI tool should check the terms of use for that particular AI tool to see what use is or is not permitted, as well as check if there are appropriate protections against third party IP infringement claims. 

If organisations want to use materials protected by confidentiality obligations, data protection laws or intellectual property rights, they should ensure that they have licences or other contractual arrangements in place with the relevant parties, setting out clearly what the organisation can or cannot do with those materials.

In order to mitigate the risk of negligence or other claims arising from reliance on inaccurate output from a Generative AI tool, the individual or organisation using that output should verify it using reliable sources. 

To what extent do existing laws cover the development and use of generative AI?

From a copyright perspective, the author of a copyright work generally needs to be a human, so it is unclear who would constitute the author of output created using a generative AI tool. While UK law does cater for the authorship of computer-generated works, this does not answer the question entirely.

The UK legislation says that the author of a computer-generated work is “taken to be the person by whom the arrangements necessary for the creation of the work are undertaken”, but it is unclear in the AI context whether this would mean the individual (or individuals) involved in writing the code, training the AI system, operating the AI system or even providing the prompt or input data. Questions of authorship link to who owns the AI, how the AI can be used and who is liable for the AI, which remain unclear under existing laws. Similar questions exist in relation to whether or not AI can be the inventor under patent law. In terms of training AI, there is a question as to the extent to which copyright works can be used as part of the training dataset.

The EU has introduced a text and data mining exception in its Directive on Copyright and Related Rights in the Digital Single Market (EU) 2019/790 (“DSM Directive”), which entered into force on 6 June 2019 and was due to be transposed into national law by 7 June 2021. Article 4 of the DSM Directive contains an exception with regard to reproduction and extraction of lawfully accessible works for the purposes of text and data mining. This will allow any person or entity to carry out text and data mining for any purpose. However, this exception only applies if the rightholder of the copyright work has not expressly reserved these rights. Recital 18 of the DSM Directive specifies that, in the case of content that has been made publicly available online, rightholders can reserve these rights by machine-readable means such as by using metadata and terms and conditions on a website or a service.

In respect of other cases, the recital suggests that it “can be appropriate” for the reservation of such rights to be made by contractual agreement or a unilateral declaration. This wording is not entirely clear (in particular, about how notice of a unilateral declaration should be given to a potential user). The DSM Directive does not specify whether the reservation must explicitly refer to the rights granted under Article 4 (or, for example, to “text and data mining rights”), or whether a generic “all rights not expressly granted are hereby reserved” provision would be sufficient. This is likely to be decided ultimately by the European courts, although individual Member States may seek to clarify it when implementing the DSM Directive at a national level. It should also be noted that any reproductions or extractions made should be retained only for as long as necessary for the purposes of the text and data mining.
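
By way of illustration, a party assembling a training dataset might check each page for a machine-readable reservation before mining it. The Python sketch below looks for a “tdm-reservation” meta tag, following one emerging convention for expressing such reservations; the DSM Directive itself does not prescribe a format, so this check is an assumption, not a compliance mechanism:

    import urllib.request
    from html.parser import HTMLParser

    class ReservationParser(HTMLParser):
        """Look for a machine-readable rights reservation in page metadata.
        The "tdm-reservation" name follows one emerging convention; the
        DSM Directive itself does not prescribe a format."""
        def __init__(self):
            super().__init__()
            self.reserved = False

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "meta" and attrs.get("name") == "tdm-reservation":
                self.reserved = attrs.get("content") == "1"

    def rights_reserved(url: str) -> bool:
        """Fetch a page and report whether text and data mining rights
        appear to have been reserved by the rightholder."""
        html = urllib.request.urlopen(url, timeout=30).read().decode("utf-8", "replace")
        parser = ReservationParser()
        parser.feed(html)
        return parser.reserved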

The UK has not transposed the DSM Directive into national law as a result of leaving the EU. Under existing UK law, there is a copyright exception for text and data analysis for non-commercial research. However, this does not cover any research which, at the time it is carried out, is intended or contemplated for a use that has some commercial value.

Following a consultation by the UK Intellectual Property Office (“IPO”) on measures to make it easier for AI to use copyright-protected material, the UK government decided to introduce a new copyright and database exception allowing text and data mining for any purpose, with no ability for rights holders to opt out. However, a parliamentary committee recently published a report recommending that the IPO pause the proposed introduction of the exception, conduct an impact assessment on the implications for the creative industries, and pursue alternative approaches (such as those employed by the EU) if the assessment finds negative effects on businesses in the creative industries. The Minister for Science, Research and Innovation has subsequently stated that the introduction of the exception will not proceed.

Instead, the UK government has said that the IPO will produce a code of practice by summer 2023 to provide guidance to support AI firms to access copyright works as an input to their models, and ensure there are protections (eg labelling) on generated output to support copyright holders. The intention is that, if an AI firm commits to the code of practice, it can have a reasonable licence offered by a rights holder in return. If the code of practice is not agreed or adopted, legislation may be needed instead. As it stands, however, there is no exception or guidance in the UK permitting the use of copyright works for training data in a commercial context.

Existing laws on data protection, confidential information and trade secrets will, in the absence of any AI-specific exceptions, govern the extent to which personal data, confidential information or trade secrets can be used as training data. In addition, the GDPR (both in the EU and as retained in the UK) sets out rules around automated decision-making, including profiling.

Legal challenges associated with foundation models and how to manage them

While not all Generative AI tools are foundation models, many are, and so the considerations discussed earlier will apply.

Foundation models are trained on vast amounts of data, and their training datasets are too large to be reviewed manually. The computational requirements associated with foundation models also give rise to questions around environmental harms and competition (as only large technology companies can satisfy those requirements).

Foundation models are general purpose (as opposed to narrow AI), which can give rise to questions around risks and liability. Does responsibility lie with the developer of the foundation model (who has control over the foundation model’s dataset), or with those using the foundation model to develop a specific application (who have control over the specific context and application, but may not be able to fix or audit underlying issues)?

In terms of managing the challenges associated with foundation models (and AI more generally), as mentioned above, one option is to regulate through legislation (such as the EU’s AI Act) or through guidance (with the UK proposing that existing regulators create guidance specific to the relevant regulated sector or industry). 

 

ACKNOWLEDGEMENT:

The author would like to thank Sarah Hopton, Senior Associate at CMS, for her assistance in undertaking the research related to this section.

 

DISCLAIMER:

Please note: These responses are not intended to constitute legal advice. Specific legal advice should be sought before taking or refraining from taking any action in relation to the matters mentioned in this note. 
