Key takeaways
- Used correctly, AI agents can help to automate front and back office tasks.
- However, as they can operate without human input, there are risks that businesses need to be aware of.
- The DRCF report identifies seven areas that businesses should look at when using AI agents, including fragmented accountability and data protection.
The DRCF’s The Future Of Agentic AI: Foresight Paper set out to identify how AI agents interact with regulatory compliance. In doing so, it uncovered significant compliance risks associated with AI agents.
AI agents act autonomously, which can become problematic without adequate transparency or human oversight. The DRCF report aims to facilitate discussion around the potential issues, without acting as an indication of current or future policy from regulators.
The four regulators that make up the DRCF – the Information Commissioner’s Office (ICO), the Financial Conduct Authority (FCA), Ofcom and the Competition and Markets Authority (CMA) – agree that AI agents do not fall outside existing UK regimes. “Obligations around transparency, fairness, safety, consumer protection and competition continue to apply as agentic AI develops.”
“The move toward agentic AI engenders opportunities and risks for consumers and the economy,” the report states. “In terms of opportunities, consumers could benefit significantly, for example, from agents that would handle ‘life admin’ (eg, booking holidays, renewing policies), potentially reducing friction and improving accessibility. Businesses could potentially realise significant productivity gains in both customer engagement (front office) and internal operations (back office), such as automated reporting and triage.”
In the workplace, the report highlights more specific use cases, such as chaining steps across enterprise systems or pulling data from multiple sources to monitor business-critical metrics like food spoilage.
“Adoption of AI agents and agentic AI systems is on the rise, with many organisations, including accounting firms, recognising their potential to improve efficiency and quality,” says Esther Mallowah, Head of Tech Policy at ICAEW. “However, as seen in recent months, they can create significant risks, particularly around data security and privacy.”
Alongside several opportunities and use cases for agentic AI, the report identifies seven main areas of compliance risk exposure for businesses and organisations using AI agents – including accountancy firms – and how to address them.
1. Fragmented accountability
The "many hands problem" makes it difficult to establish responsibility when errors occur, as accountability may be split between model providers, system providers and downstream deployers.
The report notes that different parts of the value chain all have a role to play in mitigating risks. “Model providers may build foundational infrastructure for monitoring, logging, and executing emergency shutdowns; system providers may adapt these tools to context-specific risks; and downstream deployers may implement oversight and reporting mechanisms during operation.”
2. Vendor lock-in
As agents become more widely used, organisations could inadvertently become tied to a single provider’s infrastructure, leaving them dependent on that provider and reducing interoperability.
On the flip side, agentic AI use could potentially reduce the risk of vendor lock-in if used effectively, acting as the connecting point between different systems.
3. "Black box" decision-making risks
Without robust oversight, multi-agent systems risk becoming "black boxes" – AI systems whose internal decision-making processes are opaque and difficult for users, deployers and regulators to understand or trace.
This lack of transparency may lead to non-compliance with consumer, contract and data protection laws, because it can make contesting decisions, determining how decisions were made, and tracing data sharing difficult.
4. Data protection and privacy
Agents often require access to large volumes of personal and operational data. This can lead to infringements of UK GDPR, particularly regarding data minimisation when there is a temptation to provide agents with unfettered data access to improve performance. Additionally, automated decision-making involving rapid execution of multi-step workflows may undermine a user's ability to provide informed consent.
The report advises that the data minimisation principle “requires organisations to use only the data necessary for the specific processing purpose.”
The report recommends transparency about how personal data is used by agentic systems, adding that it’s critical to building consumer trust and avoiding systemic privacy vulnerabilities.
The report adds that agentic AI “may enhance cybersecurity by helping firms and consumers address complex threats.”
5. Algorithmic collusion
AI agents may spontaneously learn to coordinate outcomes or exchange commercially sensitive information without explicit instruction from their human developers.
6. Cybersecurity vulnerabilities
Agents may be granted excessive permissions that attackers could exploit, creating expanded "attack surfaces" – the set of weak points through which a malicious actor could try to enter or extract data from an AI system.
7. Financial services compliance
For financial services businesses, using AI agents to price products or triage claims requires demonstrating compliance with the FCA’s Consumer Duty, which sets out high standards to protect financial services customers. This is to ensure that automated actions still deliver good outcomes for clients.
“AI agents could amplify existing generative AI risks and introduce new ones. DRCF regulators remain alert to these risks. The Information Commissioner’s Office, Financial Conduct Authority and Ofcom have cybersecurity requirements of firms, drawing from guidance from the National Cyber Security Centre,” the report found.
It recommends the use of guardrails, data controls and human-in-the-loop checkpoints to keep agent activities in line with consumer protections. Some financial services firms are already applying agentic AI to customer support and fraud detection tasks.
The report’s conclusions
Regulators emphasise that, regardless of an agent's degree of autonomy, the deploying organisation remains legally responsible for compliance. To mitigate these risks, the report suggests using traceable logs, human-in-the-loop checkpoints, transparency agents and the mapping of systems to maintain audit trails.
DRCF’s Thematic Innovation Hub launched with a focus on agentic AI. Speaking at the launch, Kate Jones, DRCF’s CEO, said that “agentic AI brings practical, high-level challenges that are showing up fast, and growing as AI becomes increasingly embedded in our everyday life.”
“Agents operate at machine pace while assurance runs at human speed, liability becomes blurry as agents make autonomous decisions, and user literacy becomes crucial to ensure consumers understand risks,” says Jones. “But it also comes with exciting opportunities – it can provide an incredible productivity boost for employees, empower consumers by providing personalised and real-time assistance, enable smaller firms to do more with less, unlock new levels of knowledge and so much more.”
Mallowah adds, “There is a well-founded concern about regulation keeping up with technological developments, and it is positive to see the DRCF regulators take the step to explore agentic AI and its practical implications for the sectors they regulate”.
“While not prescriptive, this foresight paper is helpful in educating both regulated organisations and the wider public, and is a much-needed step in empowering organisations and consumers to make conscious and informed decisions on agentic AI.”
Accounting Intelligence
This content forms part of ICAEW's suite of resources to support members in business and practice to build their understanding of AI, including opportunities and challenges it presents.