
How social media could affect your credit score

Writing for the Financial Services Faculty, journalist Pippa Stephens looks at how privacy changes could affect credit scores and lending decisions for consumers and business.

How often do you call your mother? Maybe it’s once a week, or once a month. Or maybe you don’t call that much but you text a lot. Perhaps the contents are nonsense: video clips of stark-naked German men chasing after wild boar. Perhaps they are more serious.

Regardless, most people would agree it’s a private matter; I don’t know how many times most of my friends call their mothers, or what they write to them in text messages. I don’t really care.  

But what if that detail about you influenced whether you could get a loan? It could be the deciding factor in whether your child could go to university, or in whether you could pay your rent on time and save your family from eviction.

This isn’t a hypothetical situation - it has already happened.  

An investigation by Privacy International found lending apps such as Tala and Branch used data from the widely adopted M-Pesa mobile money transfer service.  

The data revealed details such as the contents of customers’ text messages and, from their call records, how often M-Pesa’s customers rang their mothers. These elements were analysed to make lending decisions.

The apps also considered whether a person’s contact book contained the surnames and first names of their acquaintances. 

It’s big data like this that’s posing a challenge to the financial services industry, as the growing sector increasingly makes use of statistical patterns and the technology that teases them out.  

Coupled with it is the family of machine learning techniques sometimes loosely referred to as artificial intelligence (AI). AI makes sense of the enormous store of data people generate through websites and apps. But AI isn’t perfect. It comes with its own set of prejudices that some fear will deepen inequalities in our society.

And the data points it could potentially work with are more intimate than ever.  

NerdWallet’s surveillance  

As well as being personal, the data generated by fintech is remarkably broad. This becomes apparent when rootling through a handful of financial services apps’ privacy policies.

The range of information collected is surprisingly eclectic.  

It’s hard to shake the feeling that should this data get into the hands of banks, credit scoring agencies or insurers, it could leave some customers uncomfortably and unwittingly exposed.

In America, for example, the personal finance app and website NerdWallet helps its users with “expert money advice, helpful tools and tailored insights”, according to its website.

It collects, amongst other things, the web pages users visit just before and after using it, their device type, geolocation data, and hardware and software settings and configurations.
The privacy policy states that the app may share requests by its users for answers or advice via the app or website with “third party experts, advocates and advisors”.  

The questions here are likely to be highly revealing about a person’s finances - and could be of great use in credit scoring.

NerdWallet didn’t respond to questions about what checks it carries out on these external companies or professionals in relation to how they use and store the data, or why it needs to know about the web pages visited before or after using the app.

Credit Karma’s web bugs

Another US-based app and website, Credit Karma, says it lets its users compare personalised offers for “credit cards, loans and more”.   

It collects records of browsing behaviour through cookies and web beacons, the latter being typically tiny image files that monitor a device’s activity on a website.

The privacy policy states that one use of web beacons is targeted advertising. It doesn’t state how they are deployed - for example, whether they are placed by the company itself or whether it uses pre-existing ones.

Credit Karma also shares users’ credit reports and scores with third parties, for reasons such as marketing, or making personalised recommendations. It shares users’ personal information with its partners when the users click through to their sites.  

Credit Karma declined to respond to a request to explain why data is collected via web beacons, what else they are used for, or how it could be sure that agents acting on its behalf would not share the data in a way that influences lending decisions.

Technologies target ‘most vulnerable’  

Financial services apps and websites are only one source of data; others include social media and shopping habits.

The data points used to make decisions about our lives are “increasingly broad”, says Tom Fisher, senior researcher at Privacy International.  

He said a consumer’s location, the device they use and even the length of time they spend reading the terms and conditions can all affect how institutions see them.

The age of the device used to fill in an application can be as important as what is actually written on the form, he said.

Mr Fisher added: “The products that use these technologies are often aimed at the most vulnerable in society, as they can lack formal credit files.”

He also highlighted another issue: most of us fail to read the privacy policies of financial products.

The researcher cited a study by the London School of Economics and Political Science which suggested consumers were not giving their informed consent, as the policies were too long and legally complex.  

Mr Fisher said the research revealed that “few read the privacy policies of these types of financial products; even fewer understand what they say, and fewer still appreciate the full implications of the use of their data”. 

Facebook rebukes Admiral  

The data on social media is appealing for the financial services industry. It’s out there, and it’s public.  

It’s one of the reasons Facebook was called in recently along with Amazon, Apple and Google to be grilled by members of the US Congress in Washington.  

Lawmakers interrogated the bosses of these companies - collectively worth $5tn - about their collection of data, amongst other issues.  

Some committee members are concerned the companies may constitute a threat to consumers, in part because of their size and market influence.  

But attempting to mine social media for financial insights can be risky.  

The British insurer Admiral developed an app that would let the insurer analyse users’ Facebook posts for clues to their personality traits, to help set car insurance premiums.

But the trial was pulled at the time of its launch, as Facebook refused Admiral access, citing the privacy of its users.  

Facebook has a platform policy that prevents its data being used to make judgements about eligibility, including for credit. All potential apps go through a testing stage in which Facebook makes sure they comply with this policy.

In an email to the ICAEW, a spokesperson for Facebook said it has previously acted to stop app and software developers who violated its policy from accessing the platform.

Data’s hidden discrimination  

Attempts by individual companies aside, there remains an increasingly intricate map of personal data out there.  

And there is a danger this data could be used to make biased judgments based on sensitive aspects of our identities, such as ethnicity or gender, said Philippa Kelly, director of the technical strategy business group at ICAEW’s Financial Services Faculty. 

“A lot of big data either speak to a protected characteristic or are a proxy for a protected characteristic,” said Ms Kelly.  

This leads to hidden discrimination within data, she said.  

Algorithms could deny people of a particular ethnicity credit because they are concentrated in a particular geographic area, added Ms Kelly.  

The data “tells us things we already know but then entrenches that into a system,” she said, which risks further entrenching inequality in our society. 
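To see how that can happen, here is a deliberately simplified sketch, written in Python with entirely invented data, of a credit model that is never shown a protected characteristic at all. Because postcode is strongly correlated with group membership in this made-up example, the model still ends up approving the two groups at very different rates.

```python
# Illustrative sketch only: synthetic data showing how a postcode can act as a
# proxy for a protected characteristic. All figures are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected characteristic - never given to the model.
group = rng.integers(0, 2, n)

# Postcode area correlates strongly with group (residential concentration).
postcode_area = np.where(rng.random(n) < 0.8, group, 1 - group)

# Historical repayment outcomes reflect past disadvantage of group 1.
repaid = (rng.random(n) < np.where(group == 0, 0.9, 0.7)).astype(int)

# A neutral second feature (income in £10,000s), identical across groups.
income = rng.normal(3.0, 0.8, n)

# The model sees only postcode and income - not the protected characteristic.
X = np.column_stack([postcode_area, income])
model = LogisticRegression().fit(X, repaid)

approved = model.predict_proba(X)[:, 1] > 0.8
for g in (0, 1):
    print(f"group {g}: approval rate {approved[group == g].mean():.0%}")
# Approval rates differ sharply by group, even though the model never saw the
# protected characteristic: the postcode carried the same information.
```

Stripping out a protected field, in other words, is no guarantee of fairness when another field stands in for it.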

Ms Kelly said the level of personal data is also likely to make it harder to insure people, or groups of people.

“Insurance depends on us being similar enough people, exposed to similar types of risk, and that’s how the system works,” she said.  

But the more we are able to individualise people through health data from Apple Watches, or genetic data from genealogy tests, the harder it becomes to group people together, which undermines the purpose of insurance, added Ms Kelly.
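A stripped-down, entirely hypothetical illustration of the point: five customers all face the same £10,000 loss if they have to claim, but have different underlying probabilities of claiming. While the insurer cannot tell them apart, everyone pays the same pooled premium; with perfect individual data, the riskiest customer’s premium more than triples.

```python
# Hypothetical illustration of risk pooling - all figures are invented.
claim_cost = 10_000                       # cost of a claim, in pounds
risks = [0.01, 0.02, 0.03, 0.10, 0.34]    # each person's true annual claim probability

# When the insurer cannot tell customers apart, everyone pays the pooled premium.
pooled_premium = claim_cost * sum(risks) / len(risks)
print(f"Pooled premium for everyone: £{pooled_premium:,.0f}")          # £1,000

# With perfect individual data, premiums track each person's own risk.
for i, p in enumerate(risks, start=1):
    print(f"Person {i}: individually priced premium £{claim_cost * p:,.0f}")
# Person 5's premium jumps from the pooled £1,000 to £3,400 - the point at which
# cover may become unaffordable for exactly the people who need it most.
```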

Machines learn from history  

Insuring people is one difficulty arising from the hive of data that now exists about us.  

The implications of using algorithms, touched on by Ms Kelly, are another - and one that has received more public scrutiny.

For example, use of artificial intelligence in the US criminal justice system meant black defendants were wrongly tagged as likely to reoffend at almost twice the rate of white people, according to an investigation by ProPublica, a non-profit journalism website.  

According to technologist Vivienne Ming, this kind of scenario can arise because “algorithms don't know right from wrong; they just turn numbers into numbers”.  

Algorithms generalise from history, she said.  

“They map all of the hidden relationships in masses of data and then fix those old patterns to new numbers. When developed well, these systems can make many complex, expert-level decisions cheaper and faster than humans.  

“However, even the most sophisticated algorithms don't understand the problems to which we apply them,” added Ms Ming.  

‘Massive shift’ in power  

The formulae encode historical bias because they are trained on historical data, she said. More than that, because of the way machine learning works, they actually exaggerate that bias, she added.

Ms Ming said this tendency was especially problematic in financial services because of the amount of data used across the industry, and the fact that it often contains racial and gender attributes.

“They [the data] also include postal code preferences, because a neighborhood's history (Black, Irish, poor) continues to reverberate through financial data from days of explicit, intentional discrimination all the way through to tomorrow,” she added.  

Another challenge in using AI came from outliers, she said, as the algorithms only understand what they have seen before.

Ms Ming said: “In the loan case, the algorithm will fail to recognise novel risks or new opportunities. It would almost definitionally fail to invest in truly innovative businesses.” 
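A small hypothetical sketch illustrates that failure mode: a scoring model trained only on the kinds of business it has seen before will still produce a confident-looking probability for an applicant unlike anything in its history, even though that number is pure extrapolation.

```python
# Hypothetical sketch of the outlier problem - features and figures are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Training data: established firms (years trading, annual revenue in £m).
years_trading = rng.uniform(3, 30, 500)
revenue = rng.uniform(0.1, 5.0, 500)
X_train = np.column_stack([years_trading, revenue])
# A made-up historical "good credit" label driven by those two features.
y_train = (years_trading + 2 * revenue + rng.normal(0, 2, 500) > 12).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# A truly novel applicant: one year old, £50m revenue - unlike anything seen before.
novel_applicant = np.array([[1.0, 50.0]])
print(model.predict_proba(novel_applicant))
# The model still returns a crisp probability, but nothing in its training history
# tells it how a business like this actually behaves - it is extrapolating blindly.
```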

A final consideration, said Ms Ming, was that even when the algorithms were working according to plan, they were designed to maximise the interests of the companies over consumers, creating a “potentially massive shift in power” towards the financial industry.

How to make data work  

Luckily, we are not quite there yet. But it is coming. Good practice now can help prevent firms from making expensive mistakes, such as developing apps that can’t be released, as happened to Admiral, and can help stop customers being priced out of the market.

There’s a strong argument that more diverse management teams will be more likely to spot the warning signs of algorithmic bias.  

A recent ICAEW report recommended companies remember that accountability for big data lies with senior management, regardless of the use of third-party technology or how complex it is.

Another good bet is to make chief data officer a senior management role. This person would ensure boards and companies have the right skills and knowledge to support their use of big data and AI, said the study.  

Companies should also be pragmatic about the importance of experience and emotion in decision-making compared with data and algorithms, always try to see things from the customer’s perspective, and consider whether any decisions use protected characteristics, it said.

A dystopian future where we are shackled financially because of our tardiness as sons and daughters is not a given. But thinking carefully about the use of AI and big data is the only thing that will stop it being so.  

Our ability to reason by drawing on a range of factors, some rational, some not, is what makes us human, and only by doggedly clinging on to it can we make sure machines keep working for us, rather than the other way around.

About the author:

Pippa Stephens is a freelance journalist in Berlin with over a decade of experience at FT business magazine Pensions Week, BBC News, BBC Business and the World Service.