Why banks should prepare for the worst with machine learning
Oliver Wyman’s Elizabeth St-Onge and Ege Gurdeniz explain why banks need to prepare for the possibility of robots going rogue when rolling out machine-learning applications.
Banks are rolling out machine-learning applications to handle all manner of tasks once reserved for humans. But are they ready to clean up the mess created if the robots go rogue?
While financial services firms have sturdy structures in place to police human misconduct, machine misconduct is another matter. Standing at the crossroads of compliance, risk management, human resources and technology, the management of machine conduct has no natural home in most banks’ organisational structures.
This needs to change if banks hope to tap the incredible potential of machine-learning applications. Used correctly, these technologies can deliver significant benefits to both banks and their customers. They can provide better customer insights and solutions, and greater efficiency across the entire firm, from customer interfaces to back-office functions.
But banks also need to scrutinise the ethical ramifications of machine-learning applications just as aggressively as they vet the backgrounds, ethics and compatibility of job applicants. One bad bot can harm a bank’s reputation and potentially dent revenue. Since machine misconduct is purely a digital phenomenon, problems often spread instantly – causing chain reactions that can affect organisations and customers on a massive scale. Even technology giants have stumbled in the machine-learning arena.
So how can banks beef up their machine risk management while still fostering innovation and tapping into machine learning’s immense promise?
First, they need to create robust machine-development and data-governance standards for their machine-learning efforts. That starts with an inventory of all such applications running throughout the company. At many banks, individual teams roll out new applications in isolation. They need a firm-wide view.
Next, banks must dive headlong into the data. They already have a deep understanding of market and other data that flow in and out every day, but machine-learning applications are introducing vast quantities of new types of social media and customer-interface data that need to be catalogued and monitored. These new data forms require the same level of governance as trading and other financial data. Individuals or teams must be relentless in screening out anything that could bias a machine-learning application’s results.
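To make the screening step concrete, here is a minimal illustrative sketch (not drawn from the article, and using entirely hypothetical data, group labels and function names) of one common type of automated bias check: flagging when a model's approval rate for one customer group falls below four-fifths of the rate for the most-favoured group, a threshold borrowed from the widely cited "four-fifths rule" of thumb.

```python
# Hypothetical bias screen: compare a model's approval rates across
# customer groups and flag any group whose rate falls below a set
# fraction (default 80%) of the best-treated group's rate.

from collections import defaultdict

def approval_rates(records):
    """records: iterable of (group, approved) pairs; returns rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparity(rates, threshold=0.8):
    """Return groups whose approval rate is below threshold * best rate."""
    best = max(rates.values())
    return sorted(g for g, r in rates.items() if r < threshold * best)

# Toy decisions: group A approved 2 of 3 times, group B only 1 of 3.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
print(flag_disparity(approval_rates(decisions)))  # prints ['B']
```

A real governance function would run checks like this continuously, over many fairness metrics and data sources, but even a simple screen of this kind gives reviewers something auditable to act on before an application reaches customers.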
Before a new application is introduced, it should go through a review and approval process that balances the need for proper risk management across the firm with the need to promote innovation. Each application has the potential to introduce new data and decisions into the ecosystem that could corrupt other functions.
Banks must also establish accountability for machine mishaps. That requires a taxonomy of machine-learning applications that spells out the roles, responsibilities and procedures for governing and managing the risk associated with each type of machine, risk that can quickly spiral if, say, a new machine were to start making inappropriate investment recommendations to customers.
To ensure banks can respond appropriately, they must boost the level of technological expertise inside each governance function, from risk management to compliance and human resources. Banks need to add data scientists and other technologists in these areas so that the right questions are being asked and the oversight is informed.
Finally, there are ethical considerations around the use of machine learning for decision-making, and senior management needs to be involved in developing the framework that governs it.
Machine-learning applications can enable banks to create value for their customers, employees, shareholders and society in new ways. But banks must be aware of the risks and address them quickly and systematically. Without proper governance, it won’t be long until a machine-learning disaster with major ethical, legal and financial consequences unfolds.
This article first appeared on the World Economic Forum’s website in Agenda.
About the authors
Elizabeth St-Onge, partner, financial services, and Ege Gurdeniz, principal, digital, technology, and analytics, Oliver Wyman