The UK government has outlined three potential quality assurance models for the third-party AI assurance market as part of its new roadmap for developing the AI assurance sector.
Demand for AI assurance is growing as organisations adopt the technology at scale, but questions remain about what good-quality AI assurance should look like. The Trusted third-party AI assurance roadmap sets out the first steps the government will take to improve quality and encourage growth of the AI assurance market.
Firms are already offering third-party AI assurance in the UK. The Department for Science, Innovation and Technology (DSIT) engaged with AI assurance providers, industry, regulators and quality assurance experts to inform its roadmap. It found that quality, shortages of talent and skills, access to information, and keeping pace with rapidly evolving AI capabilities are significant challenges for AI assurers.
The roadmap aims to address these challenges with the suggested models for quality assurance, a plan to develop relevant skills, improved access to information and funding for innovation.
Three models for quality assurance:
Professionalisation of the AI assurance industry
The introduction of a professional certification or professional registration for AI assurance. The former provides an opportunity for individuals to develop knowledge and expertise in a subject area, delivered by accredited trainers. The latter, an assessment of an individual's skills and experience against professional standards, would be granted by a regulated industry authority.
Professionalisation could provide assurance firms with a way of demonstrating the credentials of their employees, in turn increasing consumer confidence. It would also give a clear career development path for people interested in AI assurance.
It does, however, rely on a wider market for qualification design and delivery developing, requiring training and qualification providers to enter the market. There are also gaps in the potential scope of qualifications, as AI use and capabilities are developing rapidly, including how any new qualification would interact with existing qualifications and certifications. As a result, the government suggested it may be premature to put a regulated industry body and professional standard in place.
Process certification
Instead of certifying professionals, process certification verifies the quality of specific assurance processes that an AI assurer might use, such as risk assessment or bias audit. The roadmap provides an example of this: “If an assurance provider conducts a technical audit that involves testing the performance of a model, they could obtain a certification for performance testing to demonstrate the quality of their auditing service.”
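To make the performance-testing example concrete, the sketch below shows one metric a certified bias audit or performance test might standardise: the demographic parity difference, the largest gap in positive-outcome rates between demographic groups. The data, field names and tolerance are illustrative assumptions, not drawn from the roadmap.

```python
# A minimal sketch of the kind of check a certified performance-testing
# process might standardise. The data, tolerance and metric choice are
# illustrative assumptions, not part of the DSIT roadmap.

def selection_rate(predictions, groups, group):
    """Share of positive predictions (1s) for one demographic group."""
    in_group = [p for p, g in zip(predictions, groups) if g == group]
    return sum(in_group) / len(in_group)

def demographic_parity_difference(predictions, groups):
    """Largest gap in selection rates between any two groups."""
    rates = {g: selection_rate(predictions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# Hypothetical binary model outputs (1 = approved) and group labels.
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity difference: {gap:.2f}")

# A certified audit process might test against an agreed tolerance.
assert gap <= 0.4, "Selection-rate gap exceeds the agreed tolerance"
```

A certification scheme could then attest that an assurer applies such tests consistently, rather than certifying the individuals carrying them out.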
Process certification would quickly introduce a degree of standardisation to the AI assurance market, improving consistency. It would be underpinned by global technical standards, which could open up international markets.
On the other hand, certification can be costly for firms, which may put smaller providers at a disadvantage. The government compared it to the approval process for certifying cyber security products, processes and systems, where the length of time needed to achieve certification, and the associated costs, proved a barrier for many firms.
And again, the nascency of the AI market makes it difficult to set standards for assurance processes for certification, and any standards set may prove too static in a rapidly changing environment.
Accreditation
Accreditation for organisations would involve an assessment to confirm competency, impartiality and consistency in the provision of AI assurance services. This would be provided by the United Kingdom Accreditation Service (UKAS) and could give AI assurance firms a mark of credibility and quality.
It would use the UKAS CertCheck database to help organisations find accredited certification bodies.
Again, this would be underpinned by global technical standards. This is also the primary drawback, as relevant standards are currently limited in number and scope.
Also covered in the roadmap:
Skills plan
The government plans to put together an industry consortium to help develop a skills and competencies framework for AI assurance.
This will help inform any future professional certification or registration scheme for AI assurance, taking into account existing standards, training courses and related certification schemes in cyber security, data science, software development and auditing. The work includes developing pathways for existing professionals to enter the AI assurance industry. This skills need was discussed in a panel at the recent ICAEW AI Assurance Conference, where panellists explored the need for a diversity of roles in delivering AI assurance.
DSIT will use the framework to assess the availability of courses and qualifications for the sector, and invest further in AI assurance skills if necessary.
Access to information
DSIT outlined examples of the kinds of information that AI assurers will need access to, including boundaries of the AI system’s functionality and use, inputs and outputs, the algorithm used to generate the model and its parameters, oversight and change management mechanisms, and documentation that describes the management and governance processes surrounding the AI system.
As a solution, it suggested three example interventions: technical solutions to enable auditor access to AI systems; standards for information access and transparency, such as IEEE 7001:2021; and best practice guidelines for information sharing.
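Purely as an illustration (the roadmap does not prescribe any format for sharing this information), the categories DSIT lists could be captured in a structured, machine-readable record along the following lines. Every field name and value in this Python sketch is a hypothetical assumption, not a standard.

```python
# Hypothetical structured record of the information categories DSIT lists.
# Field names, values and the format itself are illustrative assumptions.
assurance_information = {
    "system_boundaries": "Credit-scoring model; UK retail lending only",
    "inputs_outputs": {
        "inputs": ["income", "credit_history", "employment_status"],
        "outputs": ["approval_probability"],
    },
    "model": {
        "algorithm": "gradient-boosted decision trees",
        "parameter_count": 10_000,
    },
    "oversight": "Monthly model-risk committee review; versioned change log",
    "governance_docs": ["model_card.md", "dp_impact_assessment.pdf"],
}

# An assurer might begin by checking the record is complete.
for field in ("system_boundaries", "inputs_outputs", "model",
              "oversight", "governance_docs"):
    print(field, "->", assurance_information[field])
```

Standardising such a record is one way the proposed information-access and transparency standards could reduce friction between AI developers and assurers.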
Innovation
As AI tools are advancing at pace, the roadmap outlines plans for £11m in funding to encourage the development of novel AI assurance mechanisms to help manage the risks posed by high-capability AI systems. The first round of applications for the AI Assurance Innovation Fund will open in spring 2026, and DSIT will also explore opportunities for the fund to support the work of the UK's AI Adoption Hubs by funding pilots of innovative assurance solutions alongside cutting-edge AI technologies.
Commenting on the roadmap, ICAEW Head of Tech Policy Esther Mallowah says: “This roadmap speaks to many of the issues relating to the quality of AI assurance activities and delivers on the second action in DSIT’s November 2024 paper on Assuring a Responsible Future for AI.
“It is positive to see the focus on the independence, competence and ethics of AI assurance providers, considerations which align with the fundamental principles in our own code of ethics. It also speaks to how assurance is performed, which is equally important. Existing auditing and assurance standards, such as the International Standard on Assurance Engagements (ISAE) 3000, can be applied to AI assurance, although there is a debate to be had about the extent to which this is sufficient and what, if any, additional guidance and/or standards are required.”