ChatGPT and other large language models (LLMs) have been dominating the headlines for months. From reports of ChatGPT passing the New York bar exam to a story about how it tried to convince a journalist to leave their partner for the AI platform, much has been said about generative AI, both good and bad. But what are the practical applications in the financial services sector?
The future is already here: AI is everywhere
Whether you realise it or not, you are probably already interacting with AI in your financial products. Using your face to unlock your banking app? Yep, that’s AI. Using a chatbot for support on a bank’s website? Also most likely AI.
AI is already being used for risk management purposes, in fraud detection and prevention (unsurprisingly, AI is also being used to manufacture increasingly sophisticated phishing scams). Process automation, where AI-identified decisions are implemented through robotic process automation, makes it possible to automate tedious tasks such as budgeting and forecasting, freeing up employees to focus on other areas.
AI-driven data analytics using natural language processing algorithms has been used in law firms and banks for years to automate the extraction and analysis of data from court and customer documents, reducing manual effort and improving efficiency, much to the relief of paralegals and clerks around the world.
Robo-investing and virtual assistants have in a way democratised personalised banking, meaning it is no longer the remit of the wealthy. Venture capital, PE funds and accountancy firms are already using the latest AI to sift through thousands of companies’ financials and sell side data to compare companies and identify acquisition targets and start-ups for investments.
In the insurance sector, AI is being used in damage assessments to analyse images and videos of damaged properties or vehicles to assess the extent of the damage and provide an estimate for repair or replacement, reducing the need for physical inspections and expediting the claims process.
AI will increasingly be implemented in credit decisions, determining whether someone gets a loan based on credit scoring and, as a result, shaping the lending advice given.
This use of AI allows a bank to provide a much more personalised service, tailoring financial products to match customers’ needs and potentially offering beneficial rates to those seen as ‘low risk’ or ‘lucrative’.
Price discrimination to the nth degree – what is fair?
The counter to this is that hyper-personalisation of banking products may lead to the marginalisation and exclusion of certain demographics who are not seen as being ‘profitable’, potentially exacerbating the problem of people being unbanked.
Firms price discriminate all the time, and in banking there is an argument that if you pose a higher credit risk, a bank should be able to charge you more to compensate for that risk. Hyper-personalisation takes this one step further: instead of pricing based on a group of similar characteristics, AI will be able to home in on you specifically.
Banking for the most part has been inclusive because market incentives have supported inclusivity, not because of a moral imperative. However, this could change with the use of AI, as AI will be able to tell what the potential value of any given product or service is to you as an individual and price accordingly.
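The shift from group-based to individualised pricing can be illustrated with a toy sketch. All numbers, function names and the pricing formula below are invented for illustration only; real lenders’ models are far more complex and subject to regulation.

```python
# Toy illustration (hypothetical numbers): group-based vs individualised loan pricing.
# Under group pricing, everyone in a risk band pays the same rate; a model that
# predicts each applicant's individual risk and value can quote a different rate
# to every person.

def group_rate(risk_band: str) -> float:
    """One rate per broad risk band -- the traditional approach."""
    return {"low": 0.04, "medium": 0.07, "high": 0.12}[risk_band]

def individual_rate(default_prob: float, predicted_lifetime_value: float) -> float:
    """Hypothetical per-person rate: base cost plus an individual risk premium,
    discounted for customers the model expects to be 'lucrative'."""
    base = 0.02
    risk_premium = default_prob * 0.5                      # price in expected loss
    loyalty_discount = min(predicted_lifetime_value / 100_000, 0.02)
    return round(base + risk_premium - loyalty_discount, 4)

# Two applicants in the same 'medium' band get the same group rate...
print(group_rate("medium"), group_rate("medium"))
# ...but different individual rates once a model sees them separately.
print(individual_rate(0.05, 80_000))   # low risk, 'valuable' customer
print(individual_rate(0.12, 5_000))    # higher risk, less 'profitable'
```

The point of the sketch is the last two lines: once the model prices each individual, two people who would previously have paid the same rate can be quoted very different terms.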
Banking products are typically marketed in the broadest sense, and in the past the economics of banking has supported this. Until now, we have not had the capability to identify who is likely to take up the most products, whose behaviour will generate additional revenue, or who is likely to drive higher costs (either because they require greater customer service or because they are a bad credit risk).
AI can predict over the course of your life whether you are likely to be profitable, which products you are likely to consume and whether you will generate costs for a bank. With the dawn of AI in financial services, are we likely to see intense competition between banks for profitable customer groups and general neglect of those who are not?
Ethical hazards and new systemic risks
This leads us on to another issue – what happens if the data from which the AI decision is made is wrong or incomplete? There has been a lot of talk about implicit biases in the data used to train AI models, which may manifest in discriminatory decision-making in credit scoring, loan approvals and other financial decisions.
If AI algorithms are trained on historical data that reflects biased lending decisions or systemic disparities, AI may perpetuate those biases by recommending or approving loans based on factors such as race, gender or location. Similarly, if the data is not diverse or representative it can lead to biased outcomes for individuals from underrepresented groups.
Ensuring AI models draw a distinction between correlation and causation is also important, as algorithms can exhibit bias when using proxy variables that indirectly correlate with protected attributes such as race or gender. For example, if an algorithm considers postcode as a factor for creditworthiness, it can introduce bias as certain neighbourhoods may be unfairly associated with specific demographics or socioeconomic conditions.
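The proxy-variable effect can be made concrete with a deliberately simplified sketch. All applicants, postcodes and risk figures below are invented; the scoring rule never sees the protected attribute, yet approval rates still diverge between groups because postcode stands in for it.

```python
# Toy sketch (all data invented) of how a proxy variable leaks bias.
# The scoring rule below never looks at the protected attribute 'group',
# but because postcode correlates with group in this population,
# approval rates still diverge between the groups.

applicants = [
    # (postcode, income, group) -- 'group' is a protected attribute
    ("A1", 30_000, "X"), ("A1", 32_000, "X"), ("A1", 45_000, "X"),
    ("B2", 30_000, "Y"), ("B2", 32_000, "Y"), ("B2", 45_000, "Y"),
]

# Historical default rates by postcode -- themselves shaped by past disparities.
postcode_risk = {"A1": 0.02, "B2": 0.10}

def approve(postcode: str, income: int) -> bool:
    """Approves on income and postcode risk only; 'group' is never used."""
    return income > 25_000 and postcode_risk[postcode] < 0.05

def approval_rate(group: str) -> float:
    members = [a for a in applicants if a[2] == group]
    return sum(approve(a[0], a[1]) for a in members) / len(members)

print(approval_rate("X"), approval_rate("Y"))  # disparate impact despite a 'blind' rule
```

Even though incomes are identical across the two groups, every group X applicant is approved and every group Y applicant is declined: removing the protected attribute from the inputs does not remove the bias.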
And what happens when wrong decisions are made – what happens if a customer wasn’t given a mortgage based on inaccurate information? What recourse does the customer have and who is it against – the company providing the loan? The data set provider? The AI provider?
Separately, serious consideration needs to be given as to whether the reliance on AI algorithms in financial services could create new types of systemic risk. How can we ensure the stability and resilience of financial systems in the face of AI-related vulnerabilities, such as algorithmic trading or interconnected AI-driven platforms? For example, it is conceivable that AI systems coordinating trading decisions in investment management could suddenly decide to dump a certain stock at the same time, resulting in market-moving trades – how would this then be regulated?
Let’s take part in a thought experiment – what happens if the above occurred because independent AI models made the same buy/sell decision, simply because they have access to the same data, analyse it in a similar way and operate at the same speed? Should this still be regulated or protected against?
And to take it a step further – what happens if the market-moving trades were the result of AI deciding that market manipulation enables it to meet profit targets? Would this change your mind about how AI should be regulated?
Join the Financial Services Faculty
Gain sector-specific technical support and expert opinions in banking, insurance, and investment management to keep you up to date in a fast-changing environment.
Discover more from ICAEW Insights
Insights showcases news, opinion, analysis, interviews and features on the profession with a focus on the key issues affecting accountancy and the world of business.