
AI takeover, part 2: ethical and regulatory implications for financial services

Author: ICAEW Insights

Published: 13 Jun 2023

Polly Tsang, Financial Services Manager at ICAEW, navigates AI's potential to reshape financial services in a two-part series. In this second part, she examines the ethics and regulation of the use of AI in financial services.

Data privacy and consent – can customers say “no”? 

Linked to regulation are questions around data privacy, security and consent: what measures should be taken to obtain informed consent and to ensure customers understand how their data is used in AI-driven financial services?

For example, AI-based insurance risk assessment may use dynamic pricing based on answers given by the customer about their health. The customer volunteers details about their health and lifestyle and is then rewarded with a lower premium. What if the same AI technology then uses the same data to assess the viability of providing a loan, or eligibility for certain credit cards?

It’s not new for companies to try to use data for a purpose different from that for which it was collected (despite the legal restrictions in many countries) – think of all the marketing you receive as a result of signing up to win a free trip to Puerto Rico at a trade show. However, AI could further muddy the waters of data protection and privacy. The information used by AI learning mechanisms could be inextricable from the model itself – it’s not as simple as removing your name from a database.

There are a number of moral and ethical considerations to be worked through before the widespread implementation of AI to make financial decisions.  

Credit scoring systems that incorporate AI and big data analytics to assess creditworthiness based on factors such as financial behaviour, online purchases and social connections – providing credit scores to individuals without traditional credit histories – are already in use in parts of the world.

On the one hand, it can give people in less economically developed countries who are not engaged with traditional banking systems much wider access to finance. On the other hand, how appropriate is it to incorporate ‘social credit’ scores in creditworthiness assessments – who arbitrates what is good versus bad social behaviour, given its very real impact on people’s financial livelihoods? At what point does big data used in financial services AI become an Orwellian panoptic quandary?

Financial institutions will need to find a way to safeguard customer data and prevent unauthorised access or breaches when using AI technologies, particularly when the data held becomes so all-encompassing and is no longer limited to financial information.

The transparency versus black box dichotomy

Transparency in AI – the ability to look into an AI model and understand how it reaches decisions – will be key in this respect. It demonstrates trustworthiness, which in turn is a key factor in the adoption and public acceptance of AI systems.

Transparency can enable customers to understand and, where appropriate, challenge the basis of particular outcomes – for example, allowing a customer to challenge an unfavourable loan decision based on an algorithmic creditworthiness assessment that used factually incorrect information.

However, transparency will naturally need to be balanced against the protection of intellectual property when the technology in question drives profit at a bank. How can we ensure fairness, transparency and accountability in AI-driven financial services?

The issue we currently face is that no one really understands exactly how decisions are arrived at – for example, the creators of ChatGPT cannot explain exactly how the technology arrives at a given answer. The black box paradigm in which many large language models (LLMs) currently operate may make it difficult for companies to identify and avoid factually incorrect information being used in assessments.

AI systems in financial services offer many opportunities, but also raise real concerns about bias, privacy and moral considerations. As the technology advances and becomes more integrated with financial systems, new systemic risks may become apparent. The key is to balance the promise of AI with an assessment of the risks, and for financial institutions and regulators to be ready to adapt.

Read: AI takeover, part 1: risks and opportunities for financial services
