
The science of business guessing


Published: 13 Jun 2017 Updated: 15 Nov 2022


Sometimes it’s not possible to get more information, do a proper spreadsheet analysis, or calculate a forecast or estimate – we just have to guess. Matthew Leitch asks what science can teach us about how to guess well

One major insight from the countless relevant studies of decision-making and judgement is that we sometimes guess well and sometimes guess badly. We do well in familiar situations without too many factors to consider. In some tasks, just saying the first thing that comes to mind is the right approach. For example, when judging what is most likely, what comes to mind first is often what has happened most in the past, and so is most likely to happen again.

In other tasks, our limitations are painfully obvious, with logical mistakes and persistent, sometimes large, biases easily demonstrated. We are terrible at combining several factors to reach one estimate or judgement. We are worse when we feel pressure from others to say what they want to hear. Our misjudgements are magnified by uncertainty in unfamiliar situations.

Unfortunately, we are often forced to guess in situations that are far from ideal. Perhaps there is no opportunity to get more information. Perhaps there isn't time to think carefully and build a model with calculations, or perhaps colleagues will take no notice of such analysis, or will actively discourage it.

Even with modelling and data gathering ruled out, there are some tested techniques that can help. Since many guesses are made in meetings, it is especially useful to know how to guess in a group.


With other people

When we are asked to give a range for an uncertain quantity so that we are 90% sure the truth lies within our range, we tend to give ranges that are too narrow. This is revealed by the fact that the truth lies outside our range much more than 10% of the time. This is often called over-confidence and is a pervasive bias we cannot feel and can rarely counter directly.

Professor Scott Plous compared several techniques for counteracting over-confidence in four related experiments. The subjects had to answer difficult quiz questions (e.g. what is the diameter of the moon?) by giving ranges of numbers such that they were 90% sure the correct answer lay within the range. The best technique was to ask each member of a three- or four-person group to write down their answer individually, without conferring, then take the highest upper limit given by any person in the group and the lowest lower limit given by any person in the group. This often meant the two numbers did not come from the same person.

With this technique, groups showed minimal over-confidence, both in their answers and in assessing their own performance before finding out their scores. Their performance was dramatically and consistently better than that of individuals working alone and of groups who discussed the questions. This was true even when the discussing groups used a variety of strategies to counter over-confidence, such as appointing a devil's advocate or deliberately considering reasons why their interval estimates might be too narrow. Direct instructions to avoid overly narrow ranges did not work, and groups that discussed their answers were little better than single individuals, even though they believed they had performed much better.

This is a specific example of the more general strategy of getting the views of group members individually, then combining them. Averaging the members' individual estimates (as well as pooling the range) can give more accurate point estimates too, as the sketch below illustrates.
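
To make the mechanics concrete, here is a minimal sketch in Python of the widest-range technique described above; the function and the example figures are invented for illustration, not taken from Plous's experiments.

```python
def combine_interval_estimates(estimates):
    """Combine privately written 90% interval estimates from a group.

    `estimates` is a list of (low, high) pairs, one per group member.
    Following the technique Plous found most effective, the group's
    interval runs from the lowest lower limit to the highest upper
    limit offered by any member.
    """
    group_low = min(low for low, _ in estimates)
    group_high = max(high for _, high in estimates)
    # Averaging the members' midpoints also tends to give a better
    # point estimate than most individuals manage alone.
    point = sum((low + high) / 2 for low, high in estimates) / len(estimates)
    return group_low, group_high, point

# Four people privately estimate the diameter of the moon in km
# (figures invented for illustration).
print(combine_interval_estimates([(2000, 3000), (3000, 5000),
                                  (1500, 4000), (2500, 6000)]))
# -> (1500, 6000, 3375.0)
```

Note that the combined interval (1,500 to 6,000 km) comfortably contains the true diameter of about 3,475 km, even though not every individual range does.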

One potential problem is that someone might say a number out loud before others have made their estimates. This could bias the whole group thanks to the well-known anchoring effect, which is also hard to feel and to counteract directly. Try to prevent people from mentioning any numbers at all before everyone has written down their personal views. If a number is mentioned too soon, you might reduce its impact by immediately suggesting several other numbers ranging very widely. Even a number mentioned as a guess is dangerous: in one study, numbers selected at random using a spinning wheel in plain view still produced a substantial bias.

Estimating effort

In addition to being over-confident in our estimates of quantities, we also tend to take a rosy view of the future, imagining things will turn out better than they really do on average. This combination of biases lies at the heart of the Planning Fallacy, which threatens everything from major international civil engineering projects to our estimates of how long tasks lasting just a few hours will take.

One useful observation is that when we think about how long we will take, we focus on the steps of our plan and tend to ignore our past track record and other relevant factors. In contrast, when asked to estimate how long someone else will take to do the task, we take their track record into account more often and make longer estimates as a result. So, for a less rosy estimate of how long something will take you, choose someone with no reason to pressure you and ask them to estimate it for you.

A more obvious strategy is to break the task down, make guesses for each component, and add them up. Does that work? Research on this is inconclusive: decomposition sometimes works better than a single overall estimate and sometimes works worse. The components may be easier to estimate because they are more familiar, but breaking the task down can draw us into the Planning Fallacy, whereas an overall estimate encourages us to think back to previous, similar projects and the range of outcomes they produced.

If nothing else, decomposition probably helps reduce the impact of uncorrelated estimation errors, but consistent underlying bias remains.

Magne Jørgensen, of the Simula Research Laboratory in Norway, has studied effort estimation in software projects extensively and suggests combining independently made overall and decomposed estimates. He also recommends decomposing the project in more than one way and comparing the estimates each approach produces.

In practical terms this might mean asking one colleague to make a private overall estimate, another to break the project down into time phases and estimate each of those, and a third to do the same by cost category. The average and top-to-bottom range of the resulting totals can be useful information, even when there is little detail and no spreadsheet, as in the sketch below.
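
As a rough sketch of how those independent estimates might be pulled together, the following Python fragment totals each decomposition and reports the average and spread; the figures, the category names and the `summarise_estimates` helper are all invented for the example.

```python
def summarise_estimates(overall, decompositions):
    """Summarise independently made effort estimates (e.g. person-days).

    `overall` is a single whole-project guess; each item in
    `decompositions` is a dict of component guesses whose values are
    summed to give another project total. Returns the average and the
    top-to-bottom range across all approaches.
    """
    totals = [overall] + [sum(parts.values()) for parts in decompositions]
    return sum(totals) / len(totals), (min(totals), max(totals))

# One overall guess, one phase-based and one cost-category-based
# decomposition, each made privately by a different colleague.
overall = 40
by_phase = {"design": 10, "build": 25, "test": 15}   # totals 50
by_cost = {"internal staff": 30, "contractors": 14}  # totals 44
print(summarise_estimates(overall, [by_phase, by_cost]))
# -> (44.67, (40, 50)), approximately
```

A wide gap between the totals is itself a warning that the figures are uncertain, which is why the range is worth reporting alongside the average.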

Multiple factors

Another troublesome area for human judgement is where many factors have to be taken into account in one evaluation or estimate. For example, the task might be to evaluate candidate employees or suppliers and choose the best, or evaluate sales leads and choose the most promising for urgent action.

We tend to be inconsistent at this and find it tiring. In a classic paper from 1979, Robyn Dawes reviewed the evidence comparing human combination of factors with combination using "linear models" and reached a remarkable conclusion.

A linear model is simply one in which each of the factors in the decision is represented as a number and then those numbers are multiplied by weights and added together.

Ideally, the factors would be combined in whatever way makes theoretical sense, which might mean multiplying them or combining them in some other way, but linear models only ever add.

Ideally, each factor should be carefully transformed so that it can be treated as having a linear relationship with the result of interest, but the linear models Dawes studied did not have that refinement. Ideally, the weights should be carefully determined so that each factor is given the appropriate importance.

And yet, despite the lack of validity and sophistication in the models used, they still outperformed human judgement on a wide variety of tasks even when the weights were assigned at random. The provisos here are that the factors should all be scaled to have approximately the same range, and they should all have a “monotonic” relationship with the outcome of interest. That means that for a factor that seems to be a helpful one, more of it should always be better than less.
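
As an illustration of how crude such a model can be and still be useful, here is a sketch in the spirit of the "improper" linear models Dawes studied: each factor is rescaled to a common 0-to-1 range, given a random positive weight, and added. The supplier names and factor scores are invented.

```python
import random

def improper_linear_model(candidates, factors, seed=1):
    """Score candidates with an improper linear model (random weights).

    Every factor must be oriented so that more is better (a monotonic
    relationship with the outcome). Factors are rescaled to 0-1 across
    the candidate pool so they all have roughly the same range, then
    weighted at random and summed.
    """
    rng = random.Random(seed)
    weights = {f: rng.random() for f in factors}
    lo = {f: min(c[f] for c in candidates.values()) for f in factors}
    hi = {f: max(c[f] for c in candidates.values()) for f in factors}

    def score(c):
        return sum(weights[f] * (c[f] - lo[f]) / ((hi[f] - lo[f]) or 1)
                   for f in factors)

    return {name: round(score(c), 2) for name, c in candidates.items()}

# Invented supplier scores; "price_score" is already oriented so that
# higher means a better (cheaper) price.
candidates = {
    "Supplier A": {"quality": 7, "delivery": 9, "price_score": 5},
    "Supplier B": {"quality": 9, "delivery": 6, "price_score": 8},
    "Supplier C": {"quality": 6, "delivery": 7, "price_score": 9},
}
print(improper_linear_model(candidates, ["quality", "delivery", "price_score"]))
```

The point is not that random weights are good, but that even this crude, consistently applied scoring tends to beat unaided human combination of the same factors.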

This sort of task usually involves numbers that are already available, but in many business evaluations we have a lot of unstructured information and must reduce it to factors to consider. Quantification may be tricky or unwelcome, so a linear model may not be feasible. The next best thing, however, is to take a break.

Specifically, having considered the problem as deeply and as broadly as you can, take five to ten minutes away from the task, resting or doing something else, then come back and make your selection quickly based on how you feel. This seems to help the final impression reflect all the factors rather than just the last one you were thinking about, which improves the overall quality of evaluations.

This only helps if there is no overriding factor that lets you conclude without considering all or most of the factors, and the benefits may be greater when you are not genuinely expert in that particular kind of evaluation. A medical expert working on a complex diagnosis, for example, may be better off taking a rest and then thinking carefully again rather than simply trusting a gut feeling.


Conclusion

Guessing, however scientific, will not be better than well-informed, carefully considered, quantified analysis. But guessing is needed so often that we need to be good at it and to understand how to get the best from ourselves. The discussions that dominate most business meetings are not always the best way to reach conclusions, and there are times when it is best to stop talking and have people write down their personal estimates without being influenced by others. Diverse views are valuable, but not if we always push for consensus, and not unless we elicit them systematically and use them fully.


