The Post Office Horizon scandal highlights how technology factors into critical decision-making. Technological advances such as generative AI have made this a central ethical consideration for leaders, accountants and finance teams.
James Hartley, Partner and Head of Dispute Resolution at law firm Freeths, represented the 555 postmasters in the Post Office case. “Bugs and errors were disregarded in favour of putting a positive spin on an expensive investment that directors desperately wanted to work,” he says. “The glitches in the Horizon system were of course a big part of the problem, but the greater failings came from decisions taken by the Post Office, which profoundly impacted people.”
More scepticism
Computers are only as good as the person who created the algorithm, but that is often forgotten, says Ian Pay, Head of Data Analytics and Tech, ICAEW. “The Horizon case is a timely reminder, with all the current talk of AI ‘hallucinations’, that computers have a long history of getting it wrong. We tend to implicitly trust the output of computers, forgetting that whether it’s AI or a very simple computer program, it is ultimately an algorithm written by a person – and they might not have got it right.”
Pay isn’t suggesting we don’t trust computers, but rather that we apply the basic principle of scepticism. “We have to actively counter automation bias – the natural human tendency to trust outputs from automated systems. Fortunately, accountants are well trained to be alert to such risks.”
Simon Hurst, ICAEW member and contributor to its Excel community, says that one of the main lessons from his first job outside accountancy, in software development, was the impossibility of guaranteeing that any software release would be entirely bug-free. “There were so many interrelated variables involved that testing every possibility in any sort of commercially viable timescale would have been impossible,” he says.
One of the lessons to be learned and applied concerns education – or rather, a re-emphasis on scepticism in wider society. For accountants, professional scepticism and alertness to the risk of bias are already an integral part of their training, but perhaps they should also feature on the wider educational curriculum.
“I think wider society still has more of a struggle with scepticism,” Pay says. “There is a lot to be said from the education perspective, encouraging people from a young age to question and challenge sources, regardless of your familiarity and comfort with technology. It is incumbent on everyone to be comfortable asking questions, and asking ‘how do you know that’s the right answer?’”
Data oversight
Governance processes and audit trails are vital to ensuring oversight of data, and they will only become more important as organisations and their workforces make greater use of AI.
Caroline Tuck, partner at law firm RPC, says an audit trail to investigate the “individual discrepancies, highlight unusual trends and flag inconsistent findings” can provide an early indication when there is a problem with the software system.
Better training on the use of the Horizon software and proper assistance from the help desk could have ensured that “systemic complaints about software bugs were taken seriously and resolved promptly, and may have avoided wrongful prosecutions”, she says.
Anne Kiem OBE, CEO of the Chartered Institute of Internal Auditors, emphasises the need for strong corporate governance. “Senior management and boards could and should be leveraging the skills and expertise of their internal audit functions to review all major projects. This includes assessing the associated risks, governance controls and ongoing effectiveness around key IT projects, providing independent and objective assurance to senior management.”
She adds that for internal audit to operate optimally, auditors must have the authority to access all parts of the organisation and have a direct reporting line to the audit committee. “We want to rely on technology to save us from the laborious tasks at work, so critically it is about controls and governance.”
Hurst says: “Perhaps the Horizon inquiry will consider whether such a reluctance to engage effectively with technology played a part in shifting the burden of proof from those defending their computer systems to those defending their livelihoods.”
With the growth of generative AI, this issue becomes even more significant. It may well be unreasonable to expect those responsible for systems and processes that involve technology to know all the right answers, but we should all at least know enough to ask the right questions.
Ethical considerations
The International Ethics Standards Board for Accountants (IESBA) Handbook for Professional Accountants defines five fundamental principles – integrity, objectivity, professional competence and due care, confidentiality, and professional behaviour – which apply to the use of technology.
In practical terms, this means being transparent and honest about how you use technology, particularly when used to support research or analysis that is delivered to a client. Avoid overreliance on technology, which could create automation bias. Use professional judgement and knowledge to review the outputs of the technology you’re using.
Be wary of false information that might be presented by a piece of software and use your professional judgement to ensure that outputs are accurate. Finally, be aware of data sources and limitations, data protection laws, the benefits and harms that individuals might face when using technology, and transparency and openness around decision-making.
For more information visit the ICAEW Corporate Governance Conference 2024 hub.