What could sentiment analysis offer accountancy?
7 February 2020: The ability to scan and process vast amounts of employee data offers a new approach to risk mitigation and can help combat fraud, but as with any technology applied to humans there is a fine line to tread, writes Mark Taylor.
It can evoke the invasive omnipresence of George Orwell’s Big Brother, but intelligent communications monitoring or "sentiment analysis" is becoming an increasingly popular tool to measure employee sentiment and tackle fraud in the accounting and audit worlds.
Sentiment analysis can be defined simply as the process of evaluating pieces of text to determine their emotional tone. Scanning software can, theoretically, tell whether the person writing the email, social post or other message has a positive, negative or neutral attitude towards the topic about which they are writing.
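In its simplest form, the technique amounts to scoring a message against lists of emotionally loaded words. A minimal sketch in Python, using hypothetical word lists rather than any vendor's production model:

```python
# Toy lexicon-based sentiment classifier: counts positive and negative
# words to assign the positive/negative/neutral label described above.
# The word lists are hypothetical illustrations.
POSITIVE = {"great", "pleased", "confident", "happy", "excellent"}
NEGATIVE = {"worried", "angry", "complaint", "problem", "terrible"}

def classify(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify("Really pleased with this quarter's numbers"))  # -> positive
```

Commercial systems replace the word lists with machine-learned models, but the output is the same three-way judgement.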
Unique approach to risk mitigation
Proponents of such monitoring tools say the machines are not really "watching" you. They are merely processing information to generate alerts that a human will then investigate once a trigger is hit, potentially stopping a problem before it develops.
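In outline, that alert model might look like the sketch below, in which the software scores everything but a human only ever sees what crosses a trigger threshold. The word list, scoring rule and cut-off are illustrative assumptions:

```python
# Sketch of the alert-not-surveillance model: every message is scored,
# but only those crossing a trigger threshold reach a human reviewer.
# The word list, scoring rule and threshold are illustrative assumptions.
RISK_WORDS = {"hide", "delete", "cover", "pressure", "urgent"}
ALERT_THRESHOLD = 2  # hypothetical: two or more risk words trips an alert

def alerts_for_review(messages: list[str]) -> list[str]:
    flagged = []
    for msg in messages:
        score = sum(word in RISK_WORDS for word in msg.lower().split())
        if score >= ALERT_THRESHOLD:
            flagged.append(msg)  # routed to a human investigator
    return flagged

print(alerts_for_review([
    "Delete that file and hide the invoice",  # score 2 -> flagged
    "Lunch at one?",                          # score 0 -> never seen
]))
```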
Other AI solutions being developed on top of sentiment analysis include bankruptcy prediction algorithms, which combine employees' textual communications with balance sheet calculations to scan for early indicators of financial problems before they get out of hand.
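One way to picture how such an algorithm might blend the two data sources is sketched below; the features, weights and word list are purely illustrative assumptions, not a published model:

```python
# Toy early-warning score blending a text signal (negativity of internal
# communications) with a crude balance sheet ratio. Features, weights and
# word list are illustrative assumptions, not a published model.
def text_negativity(messages: list[str]) -> float:
    distress_words = {"arrears", "shortfall", "missed", "default", "writedown"}
    hits = sum(any(w in distress_words for w in m.lower().split()) for m in messages)
    return hits / max(len(messages), 1)

def distress_score(messages: list[str], liabilities: float, assets: float) -> float:
    leverage = min(liabilities / assets, 1.0)  # cap the ratio at 1
    return 0.5 * text_negativity(messages) + 0.5 * leverage

msgs = ["Supplier payment missed again", "Quarterly numbers look fine"]
print(distress_score(msgs, liabilities=9.0, assets=10.0))  # closer to 1 = riskier
```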
The ability to analyse text from employee communications offers a unique approach to risk mitigation and can help de-escalate problems. It may also give a strong indicator of a business’s financial condition.
In recent years a healthy workplace and the wellness of employees have increasingly become boardroom priorities. Combative management styles are being phased out, and software that gauges morale or mood could feasibly help firms retain staff and guard against burnout.
Privacy concerns still need to be addressed
While technologists continue to make the case for such tools, the issue for many accountants is one of trust. A forensic audit or investigation into a misdemeanour would benefit significantly from tools that can retrospectively piece together the events that led to potential wrongdoing, but privacy concerns still need to be addressed.
Several AI solutions on the market today promise to scan emails, texts, voice calls, Slack, G-chat or other communications tools used by employees on work devices and use machine learning algorithms to crunch that data into actionable insights.
One such vendor, Behavox, trains its machines using an enormous range of datasets, including the millions of emails and telephone calls that were made public following the Enron collapse of 2001.
The software is programmed to look for behavioural deviations that match predetermined patterns, depending on what the client wants to look for. It can spot instances of potential fraud that stem from password sharing, drunkenness or whispering on phone calls.
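As a simplified illustration of how such predetermined patterns might be encoded, the rule table below maps behaviour categories to trigger phrases; the phrases are hypothetical, not Behavox's actual rules:

```python
import re

# Toy rule table mapping behaviour categories to trigger patterns.
# The phrases are hypothetical illustrations, not Behavox's actual rules.
RULES = {
    "password sharing": re.compile(r"\b(share|send)\b.*\bpassword\b", re.I),
    "collusion": re.compile(r"\bkeep (this|it) between us\b", re.I),
}

def match_behaviours(message: str) -> list[str]:
    """Return the behaviour categories a message appears to match."""
    return [label for label, pattern in RULES.items() if pattern.search(message)]

print(match_behaviours("Can you send me your password for the client portal?"))
# -> ['password sharing']
```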
“When the lens is turned internally, sentiment towards corporate change can be analysed and focused around significant announcements made by management”, says Behavox Head of Regulatory Intelligence, Alex Viall. “This form of analysis is most effective at signalling really unusual behaviour and can indicate employees who are being highly disruptive, potentially colluding with competitors, stealing IP or under extreme stress and becoming a mental health risk.”
The machines cannot do it alone, Viall says: the output signals require a layer of human analysis and interpretation to be of value, and then there needs to be a process to act on the insights gleaned.
Privacy vs applications: a fine line to tread
As with any technology that can be applied to humans, however, there is a fine line to tread, especially where privacy is concerned.
“This kind of analysis is not new and has some interesting potential applications for accountants, but it raises two key questions: accuracy and ethics, especially when looking at employee emails”, says Kirstin Gillon, Technical Manager, Tech Faculty at ICAEW.
“Is it actually good enough to tell you anything useful, and how might employees feel about their communications being analysed in this way? Any members looking at these kinds of tools should think about these questions and make sure that they are using the tech in an effective and responsible way.”