Highlights
Build your knowledge
Explore the real-world meaning of AI Assurance, including misconceptions and expectation gaps.
Expand your network
Engage with policy makers, regulators, assurance providers and fellow accountants in shaping the future of AI Assurance.
Gain practical insights
Hear practical advice and tips from experts on providing and buying AI Assurance services, and on auditing clients who use AI.
Navigate complexities
Discover the challenges associated with assuring foundation models and potential ways to address them.
Skills development
Participate in discussions about the types of skills needed for AI Assurance professionals, and how to develop them.
Programme
Please note that the programme is subject to change.
- 09:00 Registration
- 09:30 Welcome
Esther Mallowah, Head of Tech Policy, ICAEW
- 09:35 Opening keynote
Markus Anderljung, Director of Policy and Research, Centre for the Governance of AI
- 09:45 What is AI Assurance?
The term "AI Assurance" is used in many ways to mean different things, often leading to confusion and miscommunication. This session will provide a brief introduction to the topic, followed by a panel discussion with key players sharing their practical experience of providing and buying assurance, including their understanding of assurance, scope of work, limitations, and expectation gaps. Joining this panel are Tim McGarr from the British Standards Institution, Professor Lukasz Szpruch from The Alan Turing Institute, Emre Kazim from Holistic AI, Mark Cankett from Deloitte and ICAEW's Esther Mallowah.
- 10:30 Practical case studies - assurance as an enabler of responsible adoption
The potential for AI to improve efficiency and accelerate business growth is well recognised. However, adoption has been slowed by concerns around the risks AI presents, including lack of transparency and explainability of models, bias, and accountability. This session will include Frank De Jonghe from EY, Rachel Kirkham from MindBridge AI and Giles Pavey from Unilever, chaired by Neil Christie from Scan.
- 11:15 Break
- 11:35 Debate - regulation as a driver for demand
Is regulation necessary to facilitate AI Assurance and responsible adoption? This session will debate the pros and cons of regulation as a driver for demand for assurance services, considering the EU, UK and US environments, with contributors arguing both for and against regulation. Joining the discussion are Jacob Turner from Fountain Court Chambers, Chris Thomas from The Alan Turing Institute and Tim Gordon from Best Practice AI.
- 12:15 Getting started with AI Assurance
The AI Assurance industry presents new business opportunities: the UK Department for Science, Innovation and Technology projects that the UK assurance market could grow beyond £6.53bn by 2035. Accounting firms are already becoming key players in this area. This session will discuss considerations for accountants and businesses looking to expand their services into this new area, with recommendations on how to get started. The session will include Leigh Bates from PwC, Tessel van Oirsouw from EthicAI, and ICAEW's Simon Gray.
- 12:45 Lunch
- 12:45 Optional lunch session: building trust in audit analytics
As AI becomes central to audit analytics, how do we build trust in the algorithms behind the tools? This session brings together MindBridge and Holistic AI to share how an independent algorithm audit was performed, and why it matters. Hear insights on what was tested, how the assurance was carried out, and what it means for firms relying on AI. Whether you're building or using audit tech, come learn practical ways to raise the bar on algorithm transparency and governance. Speaking in this session are Rachel Kirkham from MindBridge AI and Emre Kazim from Holistic AI.
- 13:45 Preparing for EU AI Act Conformity Assessments
The EU AI Act requires conformity assessments for High-Risk AI Systems (HRAIS). This session will bring together businesses, consultants and assurers to discuss requirements for businesses, challenges identified, and lessons learned so far in preparing for these assessments. Joining the panel are Pauline Norstrom from Anekanta AI, Tim McGarr from the British Standards Institution, William Dunning from Simmons & Simmons and ICAEW's Esther Mallowah.
- 14:15 Financial audits of clients using AI
More and more audit clients are incorporating Generative AI into internal processes, including financial reporting. The complexity of Generative AI can present risks that challenge the way auditors have traditionally conducted audits. This session will use case studies to explore the considerations for auditing a client who has embedded generative AI into their financial processes, and the types of activities that may be considered as part of the audit. Joining the panel are Gareth James from EY, Ramana McConnon from the FRC and Peter Beard from GenFinance.AI, chaired by ICAEW's Alex Russell.
- 15:00 Break
- 15:20 Getting assurance over foundation models
Third-party foundation models underpin the majority of AI tools used by businesses. However, getting assurance over such models can be difficult, with challenges including lack of access to necessary information. This session will explore options for assurance of such models, including how to address these challenges, the work of the newly renamed UK AI Security Institute, and the potential to leverage third-party reports. Joining the session is Franki Hackett from Grant Thornton.
- 16:00 Developing the skills for AI Assurance
With so many different types of AI assurance activities, what are the vital skills for AI Assurance? How can the UK build these skills, and what should businesses be doing? This panel will discuss these topics, as well as the practical considerations for the development of an AI Assurance profession. The panel will include Emily Campbell-Ratcliffe from the Department for Science, Innovation and Technology, Piers Clinton-Tarestad from EY and Esther Mallowah from ICAEW.
- 16:40 Closing remarks
Malcolm Bacchus, ICAEW President
- 16:50 Networking drinks
- 18:00 Event close
Our speakers
Malcolm qualified as an ICAEW Chartered Accountant in 1981. He joined the ICAEW Council in 2005 and was elected President in 2024. He has worked in industry for the majority of his professional life, mainly as CFO of AIM-listed companies, across a wide range of sectors including property, childcare, mining, manufacturing, telecommunications, and construction. Amongst many other committee roles across ICAEW and its London District Society, he served as a member of the ICAEW Ethics Standards Committee from 2015 and was its Chair from 2018 to 2022. He is currently a member of HMRC's Administrative Burdens Advisory Board and of the Court of the Worshipful Company of Chartered Accountants in England and Wales.
Emily works to support the implementation of DSIT’s AI Opportunities Action Plan and unlock the opportunities of AI across the economy, whilst ensuring any associated risks are managed through the development of an ethical, trustworthy, and effective AI assurance ecosystem – a key pillar of the UK’s AI governance framework. She represents the UK at the OECD’s Working Party on AI Governance, and is also part of the OECD.AI network of experts, contributing to their Expert Groups on AI Risk & Accountability and Compute & Climate. Last year Emily was named as one of the Government AI 100.
Piers is a Technology Risk Partner at EY focusing on emerging technologies. He has been working on AI Assurance since 2017, when he co-developed EY's initial global approach, and has overseen delivery of a significant number of reviews and regulatory readiness projects at a number of clients (including EY). He is an FCA, CISSP and MBA, and has served on a number of boards and audit committees, previously including the ICAEW Technical Faculty and as Vice-Chair of the ICAEW Audit Committee.
Frank trained as a theoretical physicist and has been active as an expert supporting audits and as a risk consultant for over 25 years. Building on his experience in model audits and model governance, Frank has been developing approaches to challenging and validating AI systems. Among other things, in 2022 he helped a European non-profit design the audit procedures supporting its Ethical AI label.
Rachel leads MindBridge's product strategy, data science work and risk scoring approach. Prior to joining MindBridge, Rachel was the Head of Data Analytics Research at the UK National Audit Office, leading a team of data scientists and analysts in developing unique data analytics capabilities for financial audit. Rachel is an ACA-qualified chartered accountant and holds an MBA from Imperial College.
Emre has led Holistic AI's efforts to accelerate AI transformation. He has an extensive background in AI ethics and governance and has published more than 50 peer-reviewed articles in collaboration with government and industry leaders. Before founding Holistic AI, Emre was part of University College London's computer science department, responsible for developing an interdisciplinary response to the issue of AI ethics; through this work he met Adriano, his fellow co-founder and co-CEO. He is an active member of the NIST AI Safety Institute and a member of the OECD's Network of Experts on AI.
Alex leads the Audit and Assurance Faculty, which produces guidance, technical releases, webinars and other content for members, as well as thought leadership on issues affecting the profession, such as audit and corporate governance reform.
Pauline is the CEO of Anekanta®AI and Anekanta®Consulting, which provide AI strategy, literacy and governance services aligned with the requirements of the EU AI Act and pending international regulations. The company operates across sectors including facilities management, security, transportation, aviation, retail, and education. She previously held executive board roles in large international industrial video and high-risk AI technology companies. As a leader in Responsible AI, Pauline has driven good practice for over 20 years, recently focusing on facial recognition guidance, standards and regulation, as well as AI policy and governance frameworks for directors.
Tim co-founded Best Practice AI to educate and advise on AI strategy and governance. BPAI has published the world's largest directory of AI use cases, worked with the World Economic Forum on its first AI Board Leadership Toolkit and co-produced the world's first AI Explainability Statements under GDPR. Tim also co-founded Salus AI, which uses AI for sales compliance checking. He has worked at the Boston Consulting Group, served on the board of a private equity-backed business, advised the Deputy Prime Minister of the UK and is a Trustee at Full Fact.
Gareth leads EY's UK Data & Intelligence Delivery CoE, comprising over 200 team members who specialise in meeting the data, technology and innovation needs of EY's audit and assurance engagements. Working with data of any size from any system, the team encounters, deploys and innovates with emerging technologies, including AI. EY itself uses AI extensively: integrated into global platforms, embedded in discrete tools and as everyday working Copilots. EY's Responsible AI Framework governs this deployment and is the backbone of EY's audit response to companies using AI.
Esther influences technology policy to enable businesses, accountants and society to harness technology's benefits while limiting harm. Prior to joining ICAEW, she worked in technology internal and external audit roles, focussing on information and cyber security and operational resilience. She is also a qualified ICAEW Chartered Accountant, having qualified whilst at Deloitte.
Ramana McConnon is the Head of Assurance Technology in the Audit & Assurance Policy team at the FRC, where he is responsible for the FRC’s policy and technical work on technology and AI in audit. Previously at the FRC, he has overseen the creation of the Audit & Assurance Sandbox and the publication of guidance on professional judgement. Prior to this, Ramana spent a number of years in audit at a Big 4 firm.
William is a lawyer in Simmons & Simmons’ disputes and investigations team in London. He routinely advises major corporations on global AI regulatory compliance affairs. William is an expert in AI law and regulation and is ranked by Legal 500 as a ‘Leading Associate’ for Artificial Intelligence. William has written extensively on legal issues relating to AI, including contributing chapters to leading AI law books. He also speaks regularly on AI law at industry events and educational institutions.
Franki leads on AI implementation in audit at Grant Thornton UK and maintains an academic sideline researching country-by-country tax reporting and open databases. Previously, Franki was Head of Audit and Ethics at Engine B, and worked before that as Head of the Data Analytics Research Team at the UK National Audit Office. Franki is a trained data scientist and auditor with a background in political economy. She chairs the ICAEW Data Analytics Community Group and sits on the ICAEW Ethics Advisory Committee.
Tim McGarr is the AI Market Development Lead in BSI Regulatory Services, responsible for understanding new AI markets and developing AI services, including ISO/IEC 42001 certification and services related to the EU AI Act. Prior to this, Tim was the Sector Lead for the Digital area within Knowledge Solutions at BSI. Tim has been working at BSI since 2009. Before BSI, he spent five years in the strategy department of the legal publisher LexisNexis, and before that he worked as a management consultant. Tim has an MBA from HEC Paris.
Simon is an ICAEW Chartered Accountant, who qualified with KPMG in Nottingham. After many years of volunteering, including District Society President, Chair of the Business Committee and Council Member, he joined ICAEW as Head of Business in December 2021. His background is in the recruitment, careers, membership and inward investment sectors. Simon has started and run his own businesses and served as a Non-Executive Director.
Tessel is a co-founder at EthicAI, where she leads the AI governance and policy work. She holds a master’s degree in AI Ethics and Society from the University of Cambridge and was previously a Visiting Fellow at the Allen Lab for Democracy Renovation at Harvard Kennedy School. She has worked on AI policy initiatives in various settings, including the European Commission and the Netherlands Scientific Council for Government Policy.
Giles works to maximise the benefits from AI and advanced analytics across the business. He works around the world on projects as diverse as supply chain, AI Ethics, digitising factories, innovation, sustainability and understanding human behaviour. Giles has a passion for collaboration, notably between academia and industry. As such he holds visiting positions in Computer Science at University College London and Mathematics at Oxford University.
Markus leads research at GovAI with a focus on how governments, AI companies, and other stakeholders can manage a transition to a world with advanced AI. He is currently serving as one of the Vice-Chairs drafting the EU's Code of Practice for General Purpose AI, and was previously seconded to the UK Cabinet Office as a Senior AI Policy Specialist, advising on the UK's regulatory approach to AI. He is also an Adjunct Fellow at the Center for a New American Security and a member of the OECD AI Policy Observatory's Expert Group on AI Futures.
At Turing, Lukasz provides academic leadership for partnerships with the Office for National Statistics, Accenture, the Bill and Melinda Gates Foundation and HSBC. He actively engages with the finance industry and regulators both nationally (FCA, BoE/PRA, ICO) and internationally (AMF, AFM, MAS). His research interests span the mathematical foundations of machine learning (including generative AI, explainability, privacy, transparency, validation and verification), game theory and multi-agent systems, quantitative finance and web3 technologies. Lukasz has been working on all aspects of synthetic data, the development of digital sandboxes and AI Assurance, and has active projects in these areas with the ONS, HSBC and Accenture.
Neil qualified at PwC as a chartered accountant and moved to industry within the technology sector. Having spent the last 15 years running cloud infrastructure and managed services organisations, with the last 2 years focusing heavily on the implementation of AI services, Neil has a broad experience of the challenges faced when designing and implementing solutions to integrate complex software applications into business processes.
Jacob is the author of 'Robot Rules: Regulating Artificial Intelligence' and a joint author of 'The Law of Artificial Intelligence'. He has acted in some of the world's most significant AI-related cases and defended the first company investigated by the UK data protection regulator for alleged AI bias. Jacob is on the Attorney-General's Panel of Counsel and has advised DSIT and AISI on various aspects of AI regulation. He has been described in the Legal 500 directory as 'the leading barrister in the AI space' and is listed by Chambers and Partners as a 'Global Leader in AI'.
Leigh Bates is a Partner at PwC where he leads the firm’s initiatives in AI Trust and Transformation, and serves as the Global Risk AI Leader. With over 25 years' experience, he specialises in AI risk governance and responsible AI adoption across industries. Leigh has led industry-leading transformations in Risk Management, Financial Crime Compliance, and Technology risk and resilience and is committed to empowering organisations to harness the full potential of Artificial Intelligence and Data Analytics with confidence.
Peter Beard, ACA BFP, is the Founder & Director of GenFinance.AI, focused on putting AI into the hands of finance and accounting teams, serving as their go-to subject-matter expert on AI products, services, and agents. With over eight years of experience in accounting and audit at Ernst & Young (EY), he managed a range of audits (including of FTSE and EURO Stoxx 50-listed companies). Having also contributed to EY's global AI research and led technology transformation projects, Peter received awards from EY UK, including for his "Significant contributions to innovation initiatives" and "Exceptional technical proficiency and quality commitment".
Our partner
MindBridge
MindBridge enables audit and accounting teams to analyse 100% of transactions with transparent, explainable AI—improving accuracy, reducing blind spots, and strengthening internal controls. By detecting risks, errors, and anomalies in real-time, MindBridge helps firms deliver deeper assurance and meet regulatory requirements with confidence.
ICAEW runs a variety of events in support of financial professionals in business and practice, from full-day conferences offering the latest updates for specific sectors to webinars offering support on technical areas and communication skills. There are hundreds of learning opportunities available.