
Sprinting towards AI regulation in financial services

Author: ICAEW Insights

Published: 29 Apr 2025

Moves to regulate artificial intelligence are progressing. Following a competitive selection process, ICAEW’s Polly Tsang participated in the Financial Conduct Authority’s (FCA) inaugural AI Sprint. She talks us through the process.

As pressure to regulate artificial intelligence (AI) mounts, the Financial Conduct Authority (FCA) recently held a three-day in-person event as part of its AI Lab to help inform the regulator's approach to the use of AI in financial services.

The FCA’s AI Lab was launched at the end of 2024 to help firms navigate the challenges of AI adoption. It provides a hub for collaboration between the FCA and its stakeholders to access relevant insights, discussion and case studies on the adoption of AI.

With that ambition in mind, the AI Lab includes four components: AI Spotlight, AI Sprint, AI Input Zone and Supercharged Sandbox. 

AI Spotlight

The first component, the AI Spotlight, took place on 28 January, showcasing a number of projects that provide real-world insight and practical understanding of how firms are experimenting with AI in financial services. A number of interesting and novel presentations followed (including one presented wholly by an AI avatar), but one stood out:

  • AI used to score companies on their risk of collapse by analysing corporate financial filings to identify red flags. As a chartered accountant and ex-auditor, I find the premise of using AI to help predict the next collapse incredibly intriguing – it could help auditors get a better grip on going concern risks.

AI Sprint 

The second component of the AI Lab – an AI Sprint held 29-30 January – brought together industry, academics, regulators, technologists and consumer representatives to focus on the strategic, regulatory and practical implications of AI, helping to inform the FCA's regulatory approach to its use in financial services. For those unfamiliar with the concept, a sprint is an intensive series of tasks completed over a fixed period of time to achieve an aim.

We were mixed up and put into groups to minimise the potential for groupthink. It worked well; although my group failed to agree on a single thing, our recommendations were all the better for having been so robustly debated and challenged by experts from various fields.

We were tasked with thinking about how AI will accelerate and impact financial services over the next five years, including the key use cases likely to have emerged or developed and the conditions that would enable safe and responsible AI adoption.

We subsequently debated in depth whether changes to the existing regulatory framework could enable opportunities for beneficial innovation to flourish while mitigating the risks. The sprint concluded with an afternoon in which each group presented its findings from the past two days back to the FCA and other attendees.

My group was tasked with presenting on potential new systemic risks affecting market stability resulting from the use of AI in financial services and recommendations on how the regulator could address these risks efficiently. 

We concluded that, although the use of AI in capital markets potentially exacerbates many existing systemic risks and the risk of market manipulation – for example herding, concentration, model risk and manipulation/collusion – existing rules around algorithmic trading could be extended to address many of these issues.

A bigger issue – and arguably one that is much harder to regulate against – is the risk posed by the proliferation of market misinformation. We used the example of an AI-generated spoof of an attack on the Pentagon in August 2023 rattling the markets. Given the increasing quality of such content, coupled with its falling cost and lack of traceability, this is increasingly going to be an issue for market stability. But how does one regulate market information that is so decentralised?

Our group came up with three recommendations and ranked them on a scale of impact and achievability:

  • Our first recommendation, which ranked high on impact but low on achievability, was oversight of big tech – expanding the regulator's scope to include financial misinformation spread through social media platforms. 
  • Our second recommendation, ranked mid-way for both impact and achievability, was to increase supervisory tech capabilities, for example, real-time monitoring of market-moving information. 
  • Our final recommendation, which ranked mid-impact but most achievable, was regulatory cooperation with other organisations – such as the Office of Communications (Ofcom), the Information Commissioner's Office (ICO) and the National Cyber Security Centre – through initiatives such as the Online Safety Bill and Digital Identity. It is worth noting that the FCA is already part of the Digital Regulation Cooperation Forum with Ofcom, the Competition and Markets Authority and the ICO. 

I personally found the whole experience of the AI Sprint worthwhile and a good example of how the regulator can continue to engage with the industry and focus on innovation to overcome potential issues in nascent areas, without losing sight of its growth agenda.

AI Input Zone and Supercharged Sandbox

There are two further components of the AI Lab. The AI Input Zone was open from 4 November 2024 to 31 January 2025, allowing stakeholders to have their say on the future of AI in UK financial services through an online feedback platform.

Finally, plans are also afoot for firms to be invited to a series of TechSprints and the Supercharged Sandbox to test their AI capabilities. The Digital Sandbox will have an enhanced infrastructure through greater computing power and enriched datasets. The FCA has also published an AI Sprint Summary.

The growth of AI in the UK cannot happen in a vacuum – it requires a collaborative approach where industry engagement and regulatory oversight go hand in hand. By maintaining an open dialogue with industry through avenues such as the AI Lab, the regulator can create a proportionate framework that provides the necessary scaffolding for AI to thrive, ensuring the UK remains at the forefront of AI advancements while protecting consumers and the broader economy.

Polly Tsang, Senior Financial Services Regulatory Manager at ICAEW
