With companies like Google and Amazon building hugely successful business models based on the exploitation of data, how relevant is this trend for other, perhaps smaller, businesses? Kirstin Gillon investigates.
The term ‘big data’ reflects the enormous volumes of data that we now generate in the world. But big data is about more than just size. It is also about exploiting new sources of data, making better use of unstructured data such as images and free-form text, and using data in real time. This is often referred to as the ‘3 Vs’ of big data – volume, velocity and variety. There are three broad ways in which businesses can exploit this wealth of new data and the associated analytical tools. First, they can use it to gain new insights about customers, operations or other aspects of performance and strategy, including:
- using new sources of data to gain deeper understanding, for example using more granular data about customers to understand their preferences, activities and location;
- exploiting the real-time nature of big data to improve services and operations, for example through personalising responses and offers; and
- applying analytics to gain new insights and interrogate entire data sets, eg:
- recognising new associations and patterns;
- linking data from disparate sources to gain new insights; and
- identifying exceptions, unexpected behaviour and outliers.
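To illustrate the last point, spotting outliers can be as simple as flagging values that sit far from the rest of a data set. The sketch below, using invented expense figures and a basic standard-deviation rule, is illustrative only; production analytics tools use far more sophisticated techniques.

```python
import statistics

def find_outliers(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) > threshold * stdev]

# Hypothetical daily expense claims: one entry is far outside the usual range.
claims = [120, 135, 110, 125, 130, 118, 122, 950]
print(find_outliers(claims))  # [950]
```

A single extreme value inflates the standard deviation itself, which is one reason real systems favour more robust measures (such as median-based rules) over a plain z-score.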
Second, these tools can predict future outcomes more accurately and enable businesses to embed predictive models into business operations. These models are based on identifying correlations and patterns in data and using probability algorithms to identify the most likely future outcome in any given situation. While such techniques are not new – ‘machine learning’ models date back to the early days of artificial intelligence – many of these models have become far more accurate in their predictions and therefore far more usable by businesses. This has been driven particularly by the enormous growth in data, both in terms of sources of data and sheer number of data points, which allows models to be trained and refined on far more examples, honing their patterns and probability estimates. As a result, greater reliance can be placed on models.
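The core idea – estimating probabilities from observed outcomes, with estimates sharpening as more data arrives – can be sketched very simply. This toy model and its segment names are invented for illustration; real predictive models combine many variables and far richer algorithms.

```python
from collections import defaultdict

class LatePaymentPredictor:
    """Toy model: estimate P(late payment | customer segment) from history."""

    def __init__(self):
        # segment -> [number of late payments, total payments observed]
        self.counts = defaultdict(lambda: [0, 0])

    def observe(self, segment, was_late):
        late, total = self.counts[segment]
        self.counts[segment] = [late + int(was_late), total + 1]

    def predict(self, segment):
        late, total = self.counts[segment]
        return late / total if total else 0.5  # no data yet: assume 50/50

model = LatePaymentPredictor()
for outcome in [True, False, True, True]:  # hypothetical payment history
    model.observe("small-retail", outcome)
print(model.predict("small-retail"))  # 0.75
```

The more observations the model accumulates, the closer its estimate moves to the underlying rate – which is why the growth in data points has made such models more reliable.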
Third, businesses can build on these predictive capabilities to automate non-routine decisions and tasks. This is reflected in increasing automation of professions such as law and medicine. For example, healthcare companies are starting to exploit machine learning techniques to automate medical diagnosis. In the legal profession, models can scan through vast amounts of potential evidence much more quickly and accurately than humans, leading to more automation of the discovery phase of cases.
All of this raises the question of how accountants and finance functions can exploit these trends. In many cases, these capabilities are an extension of existing activities rather than anything radically different. So, accountants may be able to get greater insight into cost and revenue drivers, for example by using more granular data. Analysing outliers and exceptions should enable better targeting of resources. Using new sources of data, such as text, may enable better control of activities in the organisation and better risk management.
Prediction is a feature of most activities in finance functions, such as strategy, risk management, funding, management and control, compliance and investment appraisal. Consequently, there may be a greater role for using and embedding predictive models in decision-making processes. Forecasting and budgeting processes present opportunities here, but areas such as fraud prevention have already been greatly improved by more sophisticated analytics.
Managing the risks
These new capabilities can be extremely powerful, but there are also well-established risks. Management information systems have always been hampered by data which is inaccurate, inconsistent, duplicated or out-of-date. These problems can be significantly amplified by big data, as many of the new sources of data, such as social media, can be unreliable or become outdated very quickly.
Traditional responses to poor-quality data emphasise cleansing data, or disregarding it entirely where the quality is very bad. But big data commentators argue that the sheer volume of data makes granular quality far less important. Analysis will still show the general trend, even if individual data items are of variable quality. In order to make the most of opportunities with the data, it is argued, we instead need to work with some degree of ambiguity around the accuracy of some data.
However, there is a trade-off between data volume, speed and granular quality, and there will be different conclusions depending on the specific context. Where data is being relied upon to make important decisions about specific individuals or organisational resources, ensuring appropriate levels of quality is likely to remain vital. By contrast, where analysis aims to identify trends, or respond quickly to customer demands, some data inaccuracy is more likely to be acceptable. Decision-makers need to understand the standard of quality required in different contexts and ensure that the data used meets that standard.
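One way to make that context-dependent standard concrete is a simple validation rule that applies a stricter bar when data drives decisions about specific individuals than when it only feeds trend analysis. The field names and rules below are invented purely for illustration.

```python
def meets_standard(record, context):
    """Context-dependent quality check: a stricter bar for data driving
    decisions about individuals than for broad trend analysis.
    (Illustrative only - field names and rules are invented.)"""
    required = {
        "individual_decision": {"customer_id", "amount", "date"},
        "trend_analysis": {"amount"},
    }[context]
    return all(record.get(field) is not None for field in required)

record = {"amount": 42.0, "date": None}  # incomplete record
print(meets_standard(record, "trend_analysis"))       # True: fine for trends
print(meets_standard(record, "individual_decision"))  # False: too incomplete
```

The same record passes one test and fails the other – the point being that ‘good enough’ is a property of the use, not of the data alone.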
There are also many dangers around predictive models. While such models can be very powerful, there are many traps to avoid in ensuring that the output is accurate. Poor-quality data, for example, or cherry-picking of data, will skew results. Models are built on assumptions that need to be regularly checked and tested as conditions can change. Statistics can be misunderstood – averages can hide great extremes, for example. Therefore, care needs to be taken and models need to be subject to robust challenge to ensure that they are appropriately relied upon.
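The point about averages hiding extremes is easy to demonstrate with made-up figures: two data sets can share an identical mean while describing completely different risk profiles.

```python
import statistics

# Two hypothetical sets of monthly cash positions with the same average
steady = [100, 100, 100, 100]
volatile = [400, -200, 350, -150]

print(statistics.mean(steady), statistics.mean(volatile))      # both 100
print(statistics.pstdev(steady), statistics.pstdev(volatile))  # 0.0 vs ~276
```

A report showing only the mean would present these two businesses as identical, even though one swings into negative cash every other month – exactly the kind of statistic that needs robust challenge.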
The biggest challenge for most organisations in getting started with big data is to work out what data to use and how to use it. For smaller businesses in particular, the concept of big data may appear to be irrelevant, as they may not have very much data.
Rather than focusing on ‘data’, businesses can start by identifying questions where the answers will help them be more successful. While there can be value in experimenting with data and seeing what turns up, the broad advice from experts is to frame good questions and focus analysis on answering those questions. For finance functions, relevant questions might include: what prices should we charge for our products and services? How can we maximise cashflow? How can we better prioritise and manage business risks?
Once the questions are clear, organisations can then identify all the possible sources of data that could help to answer those questions. Some of those may be existing sources of internal or external data. Some of the data may not exist but could easily be collected by tweaking systems or processes. Some data may need longer-term planning and investment to collect and therefore require some cost-benefit analysis.
Building the right team
One of the key elements of successful data projects is that they involve teamwork and bringing together a variety of skills, such as:
- skills in IT and data, such as extracting, cleansing and integrating data from different sources;
- skills in statistics to build algorithms and models; and
- skills and knowledge in the specific business domain to ask questions and interpret the answers given by the models.
Different organisations have taken different approaches to the challenge of building the right skills and capabilities. For some it may make more sense to buy skills in from third-party suppliers rather than building internal capabilities. Some businesses create specialist teams in the centre which support specific projects in different parts of the business, ensuring the sharing of knowledge and good practice. In other cases, expertise is being built up in the areas in which big data analytics is primarily being exploited, typically marketing or operations, depending on the business.
This raises questions about the role of accountants and finance functions and the skills that they may need. While there may be opportunities to make use of new sources of data and models to improve activities such as forecasting, internal controls and risk management, accountants may also play a greater role as the ‘conscience’ behind models, providing robust challenge on the data used, assumptions made and the quality of the output. However, for many accountants, greater statistical skills will be needed for this.
While there is a lot of hype around big data and sophisticated analytics tools and techniques, they do present many opportunities for businesses to improve their operations, customer management and risk management. In some cases, they enable new and disruptive business models. However, a great deal of care is needed to ensure models are understood and used correctly.
Accountants may have the opportunity to get involved, both in using big data and analytics, and in providing robust challenge to their use. They need to identify good questions about the business and think about how new data might help to answer them.
About the author
Kirstin Gillon is technical manager in the ICAEW IT Faculty.
- 04 Feb 2015 (12:00 AM GMT)
- First published
- 04 Nov 2022 (12:00 AM GMT)
- Page updated with Further reading section, adding related resources on big data and data analytics. These new resources provide fresh insights, case studies and perspectives on this topic. Please note that the original article from 2015 has not undergone any review or updates.