The breakneck speed of AI development is disrupting the deal-making world. What are the guardrails and how can terms of engagement hope to hit this moving target? Jo Russell reports.
In a fast-paced technological world, the benefits of integrating AI into business processes are already accepted. When it comes to M&A, the ability to automate complex tasks and improve the quality of analysis should lead to fewer investment errors and more successful deals. Its widespread usage across industry sectors means that many client companies would assume a degree of AI input into the deal process, not least for efficiency gains. But given the speed of change and AI’s rapidly increasing capabilities, questions are being raised about how it is used, what safeguards are put around its usage, and the extent to which either needs to be declared to a client upfront.
Our use of AI is governed by the same standards we apply to any technology
To a degree, the etiquette of AI usage is covered by existing duties of professional care and obligations to the client. “Our use of AI is governed by the same standards we apply to any technology to ensure, for example, that the rights of our clients, of individuals and of other stakeholders – such as those relating to confidential information, intellectual property and personal data – are respected and protected, and we have all necessary permissions to use the technology,” says Moyra Grant, deal advisory risk management partner at KPMG UK.
“In terms of client data storage, we are covered by a number of professional, legal and regulatory requirements,” says Katherine Broadhurst, partner at Azets. “For client purposes, we are bound by the same overarching requirements when it comes to AI.”
Despite these safeguards, it’s important to disclose use of AI in engagement letters, Broadhurst believes. “Engagement letters cover the standard confidentiality and GDPR requirements under which we deliver all services. However, it is also important that they cover what services we will deliver to the client, and how we will deliver them, including the use of AI,” she says.
Tipping point
Standard letters of engagement often state that a firm may make use of third-party IT vendors and service providers. Strictly speaking, this could be read as incorporating generative AI systems, meaning boilerplate clauses can provide consent to the use of AI within proprietary environments in generic terms.
However, the picture becomes more nuanced when the tasks assigned to AI increase. Is it being used as a time-saving efficiency tool, freeing up advisers to add greater value, or as a means of cutting corners?
Offering services that provide clients with direct access to specific generative AI systems can be a tipping point.
Azets recently launched a secure internal environment that enables it to better leverage AI for the benefit of client work. Previously, the firm’s use of AI was limited to open, publicly available tools, meaning AI could only be used to further general knowledge of a subject and only publicly available information could be inputted. Development of a proprietary system means client information can now be leveraged – and the significance of this for how client services are delivered needs to be reflected in the terms of engagement.
Different firms use AI differently. There is no one-size-fits-all solution
“Now that AI can be more easily directly applied to client work, we want to define the safeguards we have in place,” says Broadhurst. “As much as anything, we want to ensure that clients understand, and are comfortable with, what we are doing, whilst reaffirming our overarching commitment to regulatory requirements for client confidentiality.”
Keeping pace
Given the fast-evolving nature of AI, it’s hard to pin down exactly what should be covered in the engagement letter. There is an awareness that as firms’ ability to use AI develops, so too must the terms of engagement. But the ongoing challenge is to keep pace with the speed of development. Any terms of engagement could quickly lose their currency.
“Every firm is trying to grapple with this, and technology is moving at pace. Furthermore, different firms are using AI differently and this results in challenges in the development of best-practice guidance. There is no one-size-fits-all solution,” says Broadhurst.
Given the confidential nature of terms of engagement, sharing best practice is unlikely; discussion of general principles regarding the development of terms is the more probable outcome.
Broadhurst points to recent press coverage of the dangers of misusing AI, and of the risks posed by a lack of transparency over AI usage in client service delivery – in some cases resulting in fee refunds being issued. As engagement letters are confidential between client and firm, the full extent of such issues may never be known. The key question is whether clients consider that value is being delivered for the fees being charged, and the use of AI may affect that perception.
Ultimately, providing the best service and assurance to the client is key. Client relationships are paramount and any missteps can be hugely damaging. Beyond disclosure to clients, any use of AI needs to come with internal safeguards. Professional bodies and organisations, including the faculty, offer AI guidance and training covering human oversight, thorough sourcing of data and transparency about usage throughout an organisation.
Safeguards extend to ensuring the correct values are instilled at the outset. “We believe that AI must be values-driven, human-centric, and trustworthy. These principles are embedded in our Trusted AI Framework, which guides how we design, build and use AI, including across client engagements,” says Grant. “By embedding the right values into AI systems and processes, we not only safeguard fundamental rights but also enhance the delivery of our services to clients.”
The rise and rise of AI adoption
ChatGPT’s rise has been meteoric since its November 2022 launch. VoxEU, the policy portal of the CEPR, reports that within months it had gathered more than 100 million users, triggering the release of several other generative AI products, such as Microsoft Copilot.
Two years after its launch, 39% of individuals reported using generative AI – nearly twice the proportion that were using the internet two years after it launched (20%). A 2025 McKinsey survey reveals that organisations’ use of AI has risen from 20% in 2017 to 78%. Use of generative AI rose by nearly 40 percentage points, to 71%, in less than two years.