
Cover story: Learning to trust in artificial intelligence

As ICAEW launches a new ethics and tech resources hub, its experts talk about the ramifications of using artificial intelligence in business and finance – exploring evolving responsibility, accountability, ethics and more.

Public discourse around artificial intelligence (AI) has tended to prey on emotion: instilling panic about a robot takeover in the workplace; sowing doubt about trusting self-driving cars, and so on.

But for those working in fields that have already started to embed AI, such as accountancy, fears are more specific than generalised. The algorithms being developed to crunch enormous data sets can generate huge volumes of insight that might benefit businesses, but they are so sophisticated that they verge on the opaque.

Accountants, driven by professional scepticism, ask: if they cannot understand these systems, should they be trusting the outputs? As the US Defense Advanced Research Projects Agency (Darpa) puts it: “Continued advances promise to produce autonomous systems that will perceive, learn, decide and act on their own. However, the effectiveness of these systems is limited by the machines’ current inability to explain their decisions and actions to human users.” Furthermore, the fact that algorithms can create outputs that accountants would find biased means the ethical obligations of the profession are being tested on several fronts.

To address the issues and dispel the fears emerging around AI, ICAEW has created a new ethics and tech hub. The site brings together expertise from the ethics team and the IT Faculty, as well as the Financial Services and Audit & Assurance Faculties, where data analytics driven by AI is more advanced than in other sectors. For example, telematics has been used to offer cheaper insurance to people in otherwise high-risk categories in return for having their driving performance tracked.

But there have been examples of algorithms making decisions that led to unintended consequences (eg, accusations of racism) because they acted on information contained in historic data sets.

Evolving responsibility

IT Faculty technical manager Kirstin Gillon says the hub’s mix of practical guidance and know-how will hopefully help members become more comfortable with AI over time. Points raised during ICAEW’s Ethics Standards Committee meetings and at topical roundtables are also feeding into faculty work and the hub. Contributions come from people in business, as well as big and small firms. Gillon says that accountants’ questions are most often framed around AI’s impact on society and the wish to do public good: “They have concerns about surveillance and the impact on privacy.”

In effect, practitioners want to know how adoption of AI will affect their ability to stay true to the five core principles of their ethical code: integrity, objectivity, professional competence and due care, confidentiality and professional behaviour. Last autumn, panellists at the World Congress of Accountants (WCOA) debated whether the ethical code needed updating to reflect recent technological advancements. 

Ethical conversations are also happening between companies developing AI-enabled systems. The Partnership on AI consortium includes Amazon, Apple, Facebook, Google, IBM and Microsoft. OpenAI, a non-profit for research sharing, was co-founded by Tesla’s Elon Musk. DeepMind, a UK-based pioneer of neural networks bought by Google in 2014, has an ethics and society arm peopled by academics from Oxbridge and Cornell. It states: “New technologies can be disruptive, with uneven and hard-to-predict implications for different affected groups. We have a responsibility to support open research and investigation into the wider impacts of our work.”

“There are plenty of forums for discussion, particularly in the UK where there is a lot of research going on,” Gillon agrees. “But because the tech firms are the ones leading on innovation, the debates are heavily driven by their sector. Maybe accountants should have a stronger voice, given our ethical focus and experience.”

She says the ethical code doesn’t stand alone: “It’s embedded in our training and disciplinary systems.” 

Accountability

Another reason for accountants to be involved in framing the AI ethical debate is their level of accountability, particularly where so-called black box systems are being adopted. Gillon asks: “How do you make sure you’re making decisions that are morally correct and error-free, and how do you put right any mistakes?” Participants in an ICAEW ethics roundtable in May 2018 said: “We are in the spotlight every time a system is to blame.” The roundtable suggested the profession would need to be involved in creating assurance frameworks that determine “whether firms/systems are operating in accordance with ethical principles”.

ICAEW integrity and law manager Sophie Falcon says: “There has been some high-level discussion at the Ethics Standards Committee about how the code of ethics would apply if you have intelligent machines as part of your workforce. One strand of thought is that they could be considered similar to staff. If you have this kind of ‘being’ doing work for you, is it analogous to when you’re training it and still responsible for reviewing what it produces, and the buck stops with you? Could you look at where the code of ethics refers to ‘member’ and change it to say ‘member or machine’?” 

Machines cannot fear being made redundant if they cause a particularly serious mistake, but they can be programmed to have particular reactions and to learn from them. Falcon says: “Part of professional ethics is that there are consequences if you breach them and you can be disciplined.

“In terms of the general principles, you need to act fairly and you need to be honest and truthful. You need to do a good job and keep things confidential. There is no reason why you couldn’t specify those parameters for tasks you were getting a machine to do.”

In theory…

The WCOA panel discussion indicated that adding AI-specific elements to ethical accounting codes is still at an early, theoretical stage. But Falcon agreed the profession would not be able to “absolve itself of the responsibility” for a machine’s actions. That could prove a real challenge for accountants who want to understand the ins and outs of AI algorithms before trusting them.

Gillon agrees: “It comes down to a trade-off between accuracy and understandability. There will be times when accountants don’t need to understand the AI. And there’ll be other times when you really do need to understand how the program has come to this recommendation that you’re going to rely upon.” 

Falcon and Gillon both refer to explainable AI (XAI), which tech specialist David Gunning of Darpa says will “produce more explainable models, while maintaining a high level of learning performance (prediction accuracy)” and “enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners”. 

Such machines, “that understand the context and environment in which they operate, and build underlying explanatory models that allow them to characterise real-world phenomena”, are expected to be realised in what Gunning calls third-wave AI systems.

Adapting the XAI concept for accounting, Falcon says: “You wouldn’t be checking the technology: you’d effectively check its thought process and whether this was in line with the principles you required.” 
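
To make that idea of checking a system’s “thought process” a little more concrete, the short Python sketch below uses one common XAI technique: a global surrogate model, in which a shallow, human-readable decision tree is trained to mimic a black-box model’s own predictions. It is a minimal illustration only – the loan-style features, thresholds and data are hypothetical, and it is not the Darpa approach or anything ICAEW prescribes.

```python
# Illustrative sketch of a global surrogate model, one common XAI technique.
# All feature names, data and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical applicant features: income, existing debt, months in current job
X = np.column_stack([
    rng.normal(30_000, 8_000, n),   # income
    rng.normal(5_000, 2_000, n),    # debt
    rng.integers(0, 120, n),        # tenure_months
])
# Hypothetical "repaid" label, loosely driven by income versus debt
y = ((X[:, 0] - 2 * X[:, 1] + rng.normal(0, 5_000, n)) > 15_000).astype(int)

# The opaque "black box" the accountant is asked to rely on
black_box = GradientBoostingClassifier().fit(X, y)

# Global surrogate: a shallow, human-readable tree trained to mimic the
# black box's own predictions, exposing an approximate decision logic
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, black_box.predict(X))

print("Surrogate fidelity to the black box:",
      (surrogate.predict(X) == black_box.predict(X)).mean())
print(export_text(surrogate, feature_names=["income", "debt", "tenure_months"]))
```

The printed tree gives a reviewer an approximate, inspectable decision logic, and the fidelity figure shows how faithfully that approximation tracks the opaque model it stands in for.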

Ethically speaking

Those looking to regulate or at least advise accountancy on ethics in future will surely examine how things have played out so far in financial services. 

“Technology can help design and distribute better products, and widen access to financial services on sustainable terms by giving a better view of risk,” says Philippa Kelly, head of ICAEW’s Financial Services Faculty. Tech firms have long been disrupting traditional banking and insurance providers by meeting demand for cheaper, targeted products through apps and other platforms that rely on customers providing personal data. But the AI that helps to deliver these improvements has been found wanting enough to warrant greater oversight. 

Kelly adds that the Senior Managers’ Regime (which applies to banks and insurers, and will apply across the whole of the financial services industry from December 2019) emphasises the need for boards to get a handle on where big data and AI are being used. “They need to be responsible for the outcomes the increased use of technology delivers, and they might not presently understand what those are,” she says.

There are numerous ethical challenges to face in the financial services sector, for example around offers of credit. Card companies receive a swipe fee in addition to the interest charged on purchases, which means it is in the company’s interest for customers to rack up transactions. Reward credit cards work in a similar fashion, giving bonus rewards or discounts if the spend reaches a certain monthly level. Kelly notes: “These inducements will likely be offered to customers following an analysis of past card use to figure out where and when you’re most likely to spend more. But the ethics of encouraging higher spending are questionable when one in six borrowers is in financial distress.”

Another dilemma, first outlined in the Financial Services Faculty publication Audit insights: Insurance in 2015, concerns “the potential to undermine the concept of pooled risk in insurance”. A turn towards individualised policies might leave certain people uninsurable. The potential for change is huge in health and life insurance. Kelly says: “If you think about the proliferation of data that people are willingly sharing and generating – from genetic testing (23andMe and other family tree services) to daily heart rate patterns (Fitbit and Apple Watch wearers) – it’s likely that these developments would further distance those who could most benefit from health and life cover from being able to access it.”

The double bind

Financial services stand to benefit as XAI systems come to the fore. Investment managers have employed the same sceptical and cautious mindset as accountants before applying AI models to their portfolios. Kelly says: “One leading investment manager found that an AI liquidity risk model was significantly outperforming traditional methods. However, the type of AI used, neural networks, meant that the reason for the outperformance couldn’t be explained.

“This meant the model couldn’t be used, as there would have been a lack of effective governance if senior managers weren’t comfortable using AI that couldn’t be explained.”   

This points to a second dilemma. Kelly adds: “By not taking the action that makes the best return on their investments, they’re not doing the right thing. But their duty of care also means that if they were to use the technology that couldn’t be explained, even if it got a better result, they also wouldn’t be doing the right thing.”

This double bind is the kind of AI problem that ICAEW’s AuditFutures programme is concerned with. AuditFutures hosted a discussion with the University of Edinburgh in November 2018 to consider the challenges and opportunities arising from the development and use of AI systems. 

Findings from the two bodies’ ongoing collaboration indicate that there is a low tolerance of failure from AI systems, which are expected to make “better than human judgements”. The majority of automation has so far occurred at entry level, and AI is not yet able to take over from humans in areas where wisdom, experience, professional judgement, selectivity, instinct and general knowledge must be applied. 

But Martin Martinoff, AuditFutures programme manager, argues that it’s important to remember that algorithms are more than just lines of code: “They are a powerful means of social control, and their social impact can limit our decisions, signal certainty in uncertain conditions, push us towards actions we would not otherwise have taken, and limit our access to broader information.” These realisations drive calls for greater transparency.

Black box technology in AI protects proprietary information in highly competitive markets, but it also demonstrates that transparency may be a one-way street. Martinoff says companies are using a range of technologies – such as facial and voice recognition, and textual analysis – to enable targeted advertising, but this depends on gathering otherwise private information that can extend to people’s protected characteristics, such as the state of their mental health, sexual orientation, religious beliefs and genetics.

As the adage goes, if you’re not paying for the product, you are the product. This becomes a further problem when the data collected for a given purpose “reflects and exacerbates structural biases or introduces new ones”. Martinoff says this can lead to the “encoding” of discrimination and particular sets of values within algorithms, which surface as prejudice – and in financial settings these lead to uncomfortable outcomes in areas such as credit scoring. 
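
A minimal sketch can show how this “encoding” of bias might happen in practice. In the hypothetical Python example below, a credit model is trained without the protected characteristic, yet a correlated proxy (here an invented postcode score) lets it reproduce the bias baked into historic lending decisions. All variables, numbers and outcomes are made up for illustration and do not come from any real lender.

```python
# Illustrative sketch of how historic bias can be "encoded" into a credit model
# via a proxy variable, even with the protected attribute excluded from training.
# All variables and numbers are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

group = rng.integers(0, 2, n)                     # protected characteristic (0/1)
postcode_score = group + rng.normal(0, 0.3, n)    # proxy correlated with the group
income = rng.normal(30_000, 8_000, n)

# Hypothetical historic labels: a biased process marked group-1 applicants as
# "bad" more often, independently of their income
good_outcome = (income > 26_000) & ~((group == 1) & (rng.random(n) < 0.3))
y = good_outcome.astype(int)

# Model trained WITHOUT the protected attribute, but WITH the proxy
X = np.column_stack([income / 1_000, postcode_score])
model = LogisticRegression().fit(X, y)
approved = model.predict(X)

for g in (0, 1):
    print(f"group {g}: approval rate {approved[group == g].mean():.2f}")
# The gap persists: the proxy lets the model reproduce the bias baked into the
# historic data, even though 'group' itself was never given to it.
```

Excluding the protected attribute from the model is therefore no guarantee of fair outcomes: the historic data and its proxies can carry the discrimination through regardless.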

Getting ahead

The Financial Services Faculty will lead on the publication of a series of thought leadership papers about ethics and AI in 2019. The first instalment of Ethical use of big data in financial services will look in more detail at scenarios where big data has presented ethical dilemmas, as well as share principles for financial services firms and their boards, and information for consumers. The IT Faculty is also working on a paper, and expects to develop a webinar in due course. 

The Ethics Standards Committee will continue to feed into the International Ethics Standards Board for Accountants, along with ICAEW staff who met with the Board in January, in anticipation of any long-term project to address ethical updates to the code.

Accountants won’t be the only professionals grappling with the philosophical debates around AI as its use continues to expand. But by launching the hub now, while discussions about updating ethical codes are still young, ICAEW is giving accountants the means to prepare.