The risks of AI and how to mitigate them
The benefits of AI in accountancy are clear – from intuitive cyber security to help with menial tasks. Could these be outweighed by the threats it poses? Sam Shaw finds out
However, there are two key areas where the use of AI in cyber security poses fresh challenges, according to Richard Anning, Head of the Tech Faculty at ICAEW. The first is the vast increase in resources required – not just personnel with the relevant expertise, but also the requisite training data. For AI algorithms to learn effectively, they need access to large volumes of existing malware code, which, Anning says, is a significant drawback given the financial and operational investment necessary.
The other issue is that just as accountancy firms and their clients are preparing for greater adoption of AI in cyber security, so too are cyber criminals. Anning notes: “While companies are using AI to help defend themselves against cyberattacks, the cyber criminals are using AI to get around AI defences.”
Just as traditional cyber security has seen the gap widen in favour of criminals who stay one step ahead, that gap will inevitably widen further and faster with the rising use of AI. The reasons are rooted in the very nature of AI.
As AI uses data, keeping that data safe should be a priority. “The key risks when using AI or machine learning tools in the deal environment are around data security,” says Stephen Bates, partner at KPMG. These tools capture sensitive data that may fall under different privacy laws in multiple jurisdictions, all of which must be carefully managed by the provider, the AI company and the accounting firm.
“Accountancy firms need to properly understand the data capture and storage process, the initial and ongoing use of the data, and ownership and protection of that data – especially that it meets legal and regulatory requirements,” Bates says.
With M&A data, hackers might seek to profit from investment decisions based on intelligence about a looming deal, for example, while in the private client tax division, the financial details of high-net-worth individuals are often of interest to the media. For accountants whose clients fall victim to a breach, the reputational damage may prove irreparable.
For a profession so attuned to rigour and validation, order and process, accepting the “black box” opacity of AI presents some challenges, potentially legal ones, because the range of outcomes AI can produce is enormous and not always explainable.
Yet, given that AI does not produce the data – which may be cloud-based – it technically should not introduce any new risk, Cox explains. But if an AI tool that clones the data were used, that opens up a whole new arena of possible risks, he adds. However: “Most vendors will use platforms such as Amazon Web Services, Microsoft Azure or IBM, which put security at the heart of everything that they do. Data is not going to be a major issue,” he says.
Because AI uses algorithms designed to learn and evolve, one issue lies in a firm’s internal risk management, according to Mazars. Asam Malik, Mazars’ UK Head of Technology Consulting and Assurance, believes a firm’s ability to audit its AI poses a governance risk: “Because AI learns and evolves based on patterns, it brings a more subjective nature, which is where the nervousness comes in.” Yet Malik caveats this by saying: “With any new technology, there will always be nervousness around how we manage, audit and understand it. But you develop techniques to manage that risk.”
Securing security expertise
Accountants are not asking why AI should be used, but why it shouldn’t. Clients are getting younger and more digitally focused, and expect a technology-first approach. AI can also be positioned as a source of competitive advantage.
However, one challenge is where cyber security professionals should sit within the firm – in specific business units, or centrally, where all departments can draw on their expertise?
“You may need to get those professionals closer to the people using the data. If you’re sitting in a separate IT function, you will be slightly removed from your average graduate consultant using that data, and it could be harder to spot the risks,” Shave says.
We want to encourage wider debate about the long-term opportunities and challenges for the profession that AI poses.