Making sense of AI’s risks and rewards

18 June 2020: stronger internal controls will be vital to counteract the risks of cognitive intelligence technologies such as machine learning and automation, ICAEW's Tech Faculty has said in a new report.

Algorithms, robotic processes, machine learning, natural language processing and natural language generation are increasingly being adopted in business, according to the landmark study, Risks and assurance of emerging technologies.

However, the exceptional speed of development, attributed to unprecedented levels of investment and research, has magnified the risks inherent in these newer technologies.

Before such tools can be used to full effect in areas such as healthcare, education, airports and law enforcement, companies will need to update and extend their controls. This could include enhancing human oversight, introducing regular review processes and building a better understanding of the capabilities of the tools they are handling.

"Many larger organisations struggle to stay on top of what cognitive projects they have going on – particularly as it becomes easier for small teams to do it themselves – and that can also mean that getting a consistent set of standards for development, control, and monitoring into place is horribly difficult," said the report's author David Lyford‑Smith, ICAEW technical manager. "And that has implications for accountability, too – if a rogue project goes wrong, is it the rogue developer that's more at fault, or the chief technology officer that didn't put in place a system to discover and control that project?"

Accountants and professional services firms must pay particular attention to sensitive issues of bias and data protection that can arise, said Lyford‑Smith, and "consider the impact of omissions, errors and biases encoded in that data early in the process".

How to train your algorithm

The exponential rise in the volume and complexity of available data is fuelling the maturity of cognitive technology, which learns continuously from live data sets.

However, ensuring an algorithm does not build on incorrect knowledge of how to execute a task will be crucial to its future success. A machine does not apply human ethics when making a decision, and could be manipulated by fraudsters into acting unethically.
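The report itself contains no code, but a minimal sketch can illustrate the kind of control this implies. Assuming a scikit-learn-style workflow (the data, feature dimensions and the promote_if_no_regression helper below are all hypothetical), a continuously learning model is only re-promoted if it still performs on a fixed, human-reviewed validation set, so corrupted or manipulated live data cannot silently degrade the system:

```python
# Illustrative sketch only -- not from the ICAEW report. The control:
# never promote a retrained model unless it still performs on a fixed,
# trusted validation set, so bad or manipulated live data cannot
# silently corrupt what the system has "learned".
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Fixed, human-reviewed validation set: the trusted yardstick.
X_val = rng.normal(size=(200, 4))
y_val = (X_val[:, 0] + X_val[:, 1] > 0).astype(int)

def promote_if_no_regression(current, candidate, tolerance=0.01):
    """Accept the candidate only if validation accuracy has not
    dropped by more than `tolerance` (a hypothetical threshold)."""
    old = accuracy_score(y_val, current.predict(X_val))
    new = accuracy_score(y_val, candidate.predict(X_val))
    return candidate if new >= old - tolerance else current

# Initial model trained on clean historical data.
X0 = rng.normal(size=(500, 4))
y0 = (X0[:, 0] + X0[:, 1] > 0).astype(int)
current = LogisticRegression().fit(X0, y0)

# Retraining on live data that a fraudster has skewed towards class 1.
X1 = rng.normal(size=(500, 4))
y1 = (X1[:, 0] + X1[:, 1] > 0).astype(int)
y1[:200] = 1                                   # manipulated labels
candidate = LogisticRegression().fit(X1, y1)

kept = promote_if_no_regression(current, candidate)
print("candidate promoted:", kept is candidate)
```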

Feeding accurate data into software, and building explainable models rather than "black boxes" whose outputs, such as a sales rejection, cannot be accounted for, are of central importance, the report found. Because machines learn differently from humans, inexplicable outputs can occur.
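Again as an illustration rather than anything from the report: for a simple, transparent model, each prediction can be decomposed into per-feature contributions, so a rejected application can be traced back to specific inputs. The feature names, model and data below are hypothetical.

```python
# Illustrative sketch only -- not taken from the report. For a linear
# model, each prediction decomposes into per-feature contributions
# (ignoring the intercept), so a sales rejection can be explained.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
features = ["income", "existing_debt", "years_trading"]

# Train a transparent linear model on synthetic data (1 = approve).
X = rng.normal(size=(300, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)
model = LogisticRegression().fit(X, y)

applicant = np.array([-0.2, 1.5, 0.1])       # one rejected case
contributions = model.coef_[0] * applicant   # signed push per feature

# The most negative contribution is the main reason for rejection.
for name, c in sorted(zip(features, contributions), key=lambda t: t[1]):
    print(f"{name:>14}: {c:+.2f}")
decision = model.predict(applicant.reshape(1, -1))[0]
print("decision:", "approve" if decision else "reject")
```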

It is vital to put in place preventative and precautionary measures to ensure the technology is understood and working correctly, according to the report.

The report also explores more general concerns around widespread adoption of automation software, addressing the belief in popular culture that robots will put generations of people out of work.

Replacing staff with cheaper automated technology may seem appealing, but the business would also lose the experience and knowledge that come with human employees, the report found.

Future shocks

Accounting and financial services regulators take the approach of "regulating the output over the process". However, given the potential for disruption, that could change.

"I think the cognitive technology approaches aren't just new ways of doing old tasks but have the potential to be completely new ones. The saying 'A difference in amount becomes a difference in kind' is one I like in this context," said Lyford-Smith.

While cognitive technology is currently the largest area of interest for many businesses, several other emerging technologies are making their impact felt, or are poised to increase in importance in the coming years, the report found.

As such, the report's approach is designed so that the issues it raises around cognitive technology, and the responses to them, can be applied to other emerging technologies, even those not yet invented.

"It is interesting how relatively few organisations have made an entryway into the assurance of AI/technology market as of yet," said Lyford-Smith. 'While it has some significant technological and technical barriers to overcome, there's definitely a need for assurance in this area as the risks are still relatively under-considered."

He said the research found that those active in the cognitive technology space often had to make the case to AI users for why the risks existed and might warrant assurance.

"I think organisations have been slow to realise that machine learning and cognitive technology are riskier than they might think," he said.