The necessary foundations for good AI assurance

Author: ICAEW Insights

Published: 03 May 2023

Artificial intelligence presents the audit and assurance community with many challenges around skills, standards and ethics, an ICAEW roundtable heard.

A fascinating debate over whether the UK requires general or sector-specific accreditation for artificial intelligence (AI) professionals surfaced in a recent ICAEW roundtable.

Exploring a range of topics around AI assurance – an emerging practice helping to determine the commercial safety and ethical integrity of AI tools – the event brought together experts from accounting and auditing, computing, digital ethics, the software development industry, data science, regulation and the legal profession, plus ICAEW itself.

Setting standards

New AI standards would develop along two tracks: foundational standards, setting out key definitions for the field; and process standards, relating to governance. Foundational standards must take precedence, according to an AI expert on the panel.

Questions of plurality also arose in this part of the discussion. For the AI ethics expert, the industry must examine the relationship between horizontal and vertical standards, and ask to what extent they should be aimed at specific sectors and use cases or take a more general approach. And in a post-Brexit twist, he stressed that the benchmarking regime contained in the EU AI Act would be very important for the UK, too. While debate on these shores has circled around whether AI standards should be developed from scratch or adopted, he said, there is a need for a concrete, dependable system – and the Act could provide a ready-made vehicle for that level of clarity.

In many ways, attendees agreed, standards and skills are inextricably linked. A risk-advisory partner cited nervousness about the lack of clear standards to inform some of the tasks that will increasingly be required of the audit and assurance community as it engages more deeply with AI. Although substantive AI testing is under way, end-to-end assurance across the full lifecycle of an AI system – determining whether the technology is resilient and bias-free – is not yet happening.

A legal expert said that new, technical standards would be required to support such a level of assurance, because current standards in the AI field are not very technical in nature and focus mainly on outcomes. Guidance on how specialists should technically achieve those outcomes is lacking.

Assurance techniques

However, looming over issues around accreditation and standards is the critical point of trust. A senior figure from the analytics and data arm of a leading software provider stressed that if organisations that use AI, or provide AI-based services, are not viewed as ethical, they are automatically at risk. As an example, she cited the lack of transparency around GPT-4: people are beginning to realise that such algorithms have inbuilt biases and other issues, sparking concerns about potential risks.

The same speaker noted that her business issues a regular report on the risks associated with the use of its AI – and, as stakeholders want plenty of detail, the report now includes the results of tests performed on the software’s algorithms and the relevant outcomes. From her company’s perspective, AI auditing is progressing through audits of those algorithms. There are currently 64 algorithms on the radar of the company’s auditing regime – but if that number rises, the approach will become unmanageable. Further assurance on the company’s AI comes from internal controls around how it is used.

A Big Four senior manager with expertise in AI and machine learning pointed out that, from a financial services perspective, banks are currently using thousands of different AI models. When determining risk, the key measures are a model’s impact on profit and loss and on cash. If a model is found to have a significant impact on either metric, the bank will typically perform additional tests.

A major issue here, he said, is that the AI landscape is moving at such a fast pace: adoption and implementation of systems has been a lot quicker than many insiders expected. That presents the audit and assurance community with a real challenge in terms of scope.

The senior figure from the software provider added that the rate of AI adoption among consumers is also surprising – the issue being that consumers are often unable to evaluate AI fairly because they don’t know enough about how it is trained, how it works and what the relevant risks are. There is an expectation gap in how consumers interpret assurance, so there is a need to educate the public as well as board members.

Picking up on that point, a computing expert noted that bringing consumers up to speed on some of the issues is important – and third-party endorsement via a certification scheme may help to drive that education and evaluation.

The challenge of responsibility

Attendees considered whose responsibility it should be to develop certification for AI specialists – a debate that is yet to reach a clear conclusion. Existing professional bodies – for example, in accountancy – acknowledge that their members are already working with elements of AI. So should those bodies adapt their certificates to include AI, or build entirely new qualification paths?

A digital ethics expert said that, before AI came along, IT and cyber security took time to professionalise with their own accreditation schemes. There is potential to learn from other professions, such as cyber security and the medical field: the Centre for Data Ethics and Innovation has an ongoing project to capture lessons learned from other professions and will publish a blog post on the topic in the near future. The question is: what are the signals that tell a specialist field it’s the right time to professionalise – and who makes the call? And will the skills and certifications be the same from one sector to the next?

For one risk-advisory partner at a major accounting and auditing firm, a plethora of pathways would risk diluting the certification process. There are lots of different options available in cyber security, for example – but how does anyone gauge which one is better than another? To combat that threat of dilution, an AI ethics expert said, a more centralised approach is required.

There was consensus that sector-agnostic certification may be the way to go, with sectoral specialisation gained through experience.

The discussion around professionalisation of AI assurance is ongoing, and ICAEW’s tech team will continue to explore this topic in future sessions. To contribute your ideas or to get involved, contact esther.mallowah@icaew.com.
