As pressure to regulate artificial intelligence (AI) mounts, the Financial Conduct Authority (FCA) recently held a three-day in-person event as part of its AI Lab to help inform the regulator’s approach to the use of AI in financial services.
The FCA’s AI Lab was launched at the end of 2024 to help firms navigate the challenges of AI adoption. It provides a hub for collaboration between the FCA and its stakeholders, giving them access to relevant insights, discussions and case studies on the adoption of AI.
With that ambition in mind, the AI Lab includes four components: AI Spotlight, AI Sprint, AI Input Zone and Supercharged Sandbox.
AI Spotlight
The first component, the AI Spotlight, took place on 28 January, showcasing a number of projects that provide real-world insight and practical understanding of how firms are experimenting with AI in financial services. A number of interesting and novel presentations followed (including one presented wholly by an AI avatar), but there were two that stood out:
- Technology using AI to help spot deepfakes. This feels imperative, following a raft of deepfake scams that have resulted in financial losses for companies, including one in Hong Kong where a member of staff at Arup paid out $20m to scammers after a video call with a deepfake ‘chief financial officer’.
- AI used to score companies on their risk of collapse by analysing corporate financial filings to identify red flags. As a chartered accountant and ex-auditor, I find the premise of using AI to help predict the next collapse incredibly intriguing; it could help auditors get a better grip on going concern risks. A minimal sketch of this idea follows this list.
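To make the second idea concrete, here is a minimal, hypothetical sketch of red-flag scoring over filing text. The phrases, weights and scoring scale are my own illustrative assumptions – the presenters’ actual models were not disclosed and would almost certainly use trained machine learning over structured data rather than keyword counts.

```python
# All phrases and weights below are hypothetical illustrations – the
# presenters' actual methodology was not disclosed, and a real system
# would use trained models over structured filings, not keyword counts.
RED_FLAGS = {
    "material uncertainty": 3.0,  # classic going-concern disclosure language
    "going concern": 3.0,
    "covenant breach": 2.5,
    "qualified opinion": 2.0,
    "delayed publication": 1.5,
}

def collapse_risk_score(filing_text: str) -> float:
    """Return a crude 0-10 risk score from red-flag phrase frequency."""
    text = filing_text.lower()
    raw = sum(weight * text.count(phrase) for phrase, weight in RED_FLAGS.items())
    return min(raw, 10.0)  # cap so a single filing cannot exceed the scale

if __name__ == "__main__":
    sample = (
        "The directors note a material uncertainty over going concern "
        "following a covenant breach during the period."
    )
    print(f"Risk score: {collapse_risk_score(sample):.1f}/10")
```

Even a toy scorer like this illustrates why the premise appeals to auditors: the red flags are already in the filings; the challenge is surfacing them consistently and at scale.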
AI Sprint
The second component of the AI Lab – an AI Sprint held on 29-30 January – brought together industry, academics, regulators, technologists and consumer representatives to focus on the strategic, regulatory and practical implications of AI, helping to inform the FCA’s regulatory approach to the use of AI in financial services. For those unfamiliar with the concept, a sprint is an intensive, time-boxed exercise in which participants work through a set of tasks to achieve a defined aim.
Participants were deliberately mixed into groups to minimise the potential for groupthink. It worked well: although my group failed to agree on a single thing, our recommendations were all the better for having been so robustly debated and challenged by experts from various fields.
We were tasked with thinking about how AI will accelerate and impact financial services over the next five years, including key use cases that are likely to have emerged or developed and conditions that would help us to enable safe and responsible AI adoption.
We subsequently debated in depth the existing regulatory framework and whether changes could be made to let the opportunities for beneficial innovation flourish while mitigating the risks. The sprint closed with an afternoon in which each group presented back to the FCA and others on its findings from different aspects of the past two days.
My group was tasked with presenting on potential new systemic risks affecting market stability resulting from the use of AI in financial services and recommendations on how the regulator could address these risks efficiently.
We concluded that, although the use of AI in capital markets potentially exacerbates many existing systemic risks and the risk of market manipulation – for example, herding, concentration, model risk and manipulation/collusion – existing rules around algorithmic trading could be extended to address many of these issues.
A bigger issue – and arguably one that is much harder to regulate against – is the risk posed by the proliferation of market misinformation. We used the example of the AI-generated image of an explosion near the Pentagon in May 2023, which briefly rattled the markets. Given the rising quality of AI-generated content, its falling cost and its lack of traceability, this will increasingly be an issue for market stability – but how does one regulate market information that is so decentralised?
Our group came up with three recommendations and ranked them on a scale of impact and achievability:
- Our first recommendation, which ranked high on impact but low on achievability, was oversight of big tech: expanding the regulator’s scope to include financial misinformation spread through social media platforms.
- Our second recommendation, ranked mid-way for both impact and achievability, was to increase supervisory technology capabilities – for example, real-time monitoring of market-moving information (see the sketch after this list).
- Our final recommendation, which ranked mid-impact but most achievable, was regulatory cooperation with other organisations, such as the Office of Communications (Ofcom), the Information Commissioner's Office (ICO) and the National Cyber Security Centre, through initiatives such as the Online Safety Act and digital identity. It is worth noting that the FCA is already part of the Digital Regulation Cooperation Forum with Ofcom, the Competition and Markets Authority and the ICO.
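For a flavour of what ‘real-time monitoring of market-moving information’ could mean in practice, below is a toy sketch that flags abnormal price moves for human review. The window size and z-score threshold are illustrative assumptions, not FCA parameters; a real supervisory system would correlate such alerts with news and social media feeds, which this sketch does not attempt.

```python
from collections import deque
from statistics import mean, stdev

class MoveMonitor:
    """Toy sketch: flag price moves that are abnormal relative to recent
    history, so a supervisor can check whether they coincide with
    unverified - potentially AI-generated - news."""

    def __init__(self, window: int = 60, z_threshold: float = 4.0):
        self.returns = deque(maxlen=window)  # rolling window of recent returns
        self.z_threshold = z_threshold
        self.last_price = None

    def update(self, price: float) -> bool:
        """Ingest a new price; return True if the move looks anomalous."""
        alert = False
        if self.last_price is not None:
            r = (price - self.last_price) / self.last_price
            if len(self.returns) >= 30:  # need enough history for stable stats
                mu, sigma = mean(self.returns), stdev(self.returns)
                if sigma > 0 and abs(r - mu) > self.z_threshold * sigma:
                    alert = True  # candidate market-moving event: route for review
            self.returns.append(r)
        self.last_price = price
        return alert

if __name__ == "__main__":
    monitor = MoveMonitor()
    prices = [100 + 0.01 * i for i in range(40)] + [95.0]  # sudden 5% drop
    flagged = [p for p in prices if monitor.update(p)]
    print(f"Flagged prices: {flagged}")
```

Even this crude approach shows why the recommendation ranked as achievable: the statistical machinery is simple; the harder, unsolved part is tracing an anomalous move back to the (mis)information that caused it.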
I personally found the whole experience of the AI Sprint worthwhile and a good example of how the regulator can continue to engage with the industry and focus on innovation to overcome potential issues in nascent areas, without losing sight of its growth agenda.
AI Input Zone and Supercharged Sandbox
There are two further components of the AI Lab. The AI Input Zone was open from 4 November 2024 to 31 January 2025, allowing stakeholders to have their say on the future of AI in UK financial services through an online feedback platform.
Finally, plans are also afoot for firms to be invited to a series of TechSprints and a Supercharged Sandbox to test their AI capabilities. The Digital Sandbox will gain enhanced infrastructure through greater computing power and enriched datasets. The FCA has also published an AI Sprint Summary.
The growth of AI in the UK cannot happen in a vacuum – it requires a collaborative approach where industry engagement and regulatory oversight go hand in hand. By maintaining an open dialogue with industry through avenues such as the AI Lab, the regulator can create a proportionate framework that provides the necessary scaffolding for AI to thrive, ensuring the UK remains at the forefront of AI advancements while protecting consumers and the broader economy.
Polly Tsang, Senior Financial Services Regulatory Manager at ICAEW