Tomorrow's auditor thinks slow
18 September: Can auditors learn to make the call of whether or not to trust outcomes presented by a machine, a human or a mixture of both? Jan Bouwens, Professor of Accounting at the Amsterdam Business School, argues that in order to do so, they must learn to think analytically rather than intuitively.
Tomorrow's auditors must be able to rationally and impartially assess the extent to which they can rely on the outcomes of algorithms compared with those produced by people or traditional systems – or a mix of human and machine outcomes. To achieve this objective, auditors must be able and willing to let go of their intuition to make room for thorough analysis.
In 2019, Jenni Kallunki and her colleagues showed that an auditor's IQ determines the quality of the audit. As the study demonstrated, a smarter audit partner makes fewer mistakes in assessing the future of the company (going concern) than a partner with a lower IQ. The data used in the research covers 2000-2009, so one might ask whether, given today's increased reliance on technology, IQ should now play a lesser role in audit. The answer is a resounding no.
Auditors are accustomed to using systems that help improve the productivity and quality of audits. One of the techniques increasingly used by auditors falls under the category of artificial intelligence, where algorithmic systems learn from the past and consider additional information relevant to the audit. The question is: can auditors make the call of whether or not to trust outcomes presented by a machine, a human, or a mixture of both?
Recent research by Berkeley Dietvorst and colleagues shows that people are inclined to apply a double standard. When a person and a system make the same mistake, the decision-maker is far more likely to discount the outcomes of the system than those of the human.
When people see human errors, they will opt for the results of a system; but as soon as they observe machines making mistakes, their reliance on machine results drops disproportionately, while at the same time they put more faith in the results people produce than they should.
When the decision-maker observes human error alongside machine error, the machine is penalised more heavily than the human making the same mistakes. People, therefore, reject system outcomes to an extent that is disproportionate to the magnitude of the error.
The opposite happens when humans are allowed to adjust the machine's prediction. In that case, the decision-maker relies on the system's results more than is proportionally warranted.
Both outcomes are undesirable, as we want the decision-maker, our auditor, to have no more and no less faith in the outcomes of systems than the accuracy of those results warrants.
To reach this situation, we need auditors who can deactivate their intuition and subject the results to thorough analysis: what does the given outcome tell me, and does it fit the underlying situation? In other words, we are looking for auditors with an analytical mindset.
We know from research by the MIT economist Shane Frederick that analytical decision-makers are better at making the right call in such situations. In a recent study, Anthony Bucaro found that senior auditors prompted to think analytically consider a more comprehensive information set in their judgments than auditors who judge intuitively. With the advance of machine learning, there is no escape: auditors have to switch on their analytical system at the cost of their intuition.
In the words of Nobel prize winner Daniel Kahneman, tomorrow’s auditors must learn to think slow rather than fast!
Jan Bouwens is Professor of Accounting at the Amsterdam Business School of the University of Amsterdam, a research fellow at the Judge Business School of the University of Cambridge and Managing Director of the Foundation for Auditing Research, The Netherlands.