Splitting purchase orders to stay under approval thresholds is so common I’ve seen risk and audit leaders do it. On paper the control looks fine: set limits, define approvers, add monitoring. In practice it leaks, because people game it.
This isn’t about procurement. It’s about people, and why neat controls don’t work when real life gets in the way. How aware is your internal audit function?
Why people bend the rules
The vast majority are just trying to get the job done. In the case of splitting purchase orders, the motives are familiar: approvals take too long, thresholds feel arbitrary, more senior approval adds little value, and “everyone does it.” If the system makes it easy to raise multiple orders and there’s no challenge, splitting becomes normal.
We’ve seen many similar examples. Expense claims are embellished because line managers rubber-stamp when busy. Annual supplier security reviews degenerate into form-filling when questionnaires are long and applied indiscriminately. Rules to change passwords frequently backfire when staff write them down. Segregation of duties - the sacrosanct split between initiator, approver and recorder - collapses when “temporary” overrides become routine.
In each case the design looks neat; the reality is messy.
Behavioural drivers
Behavioural science explains why. People dislike friction. They cut corners when the right way takes too long, like walking diagonally across a lawn ignoring the “keep off the grass” sign. Do we build this into our internal audit programme? Do we have the behavioural science expertise to do so?
Incentives matter. Pressure to deliver outweighs pressure to comply. And knowledge alone (e.g. from training) does little: Proofpoint, a cybersecurity firm, reports that 96% of users who knew a link was risky still clicked on it. Knowledge is not the same as behaviour.
The science says two cognitive biases reinforce this. Present bias makes people prefer convenience now over risk reduction later. Status quo bias keeps them repeating unsafe but familiar workarounds – when I worked at a petrochemical and refinery site in France many years ago, the infamous Generale Comme-Avant (“as before”) was always in charge.
So people just follow the path of least resistance, and controls that work in theory fail in practice.
Designing for humans
Of course that doesn’t mean abandoning controls. It means control activities must be designed around real human behaviour and predictable biases.
Four simple questions help:
- Distance: is the control where the task happens, or buried three clicks away?
- Overhead: how many steps does it add at the critical moment?
- Goals: which performance targets will people try to game?
- Experience: what norms, fears or incentives could shape behaviour?
From there, improvements are usually modest. Put controls in the workflow rather than bolting them on afterwards. Make the safe path faster and easier than the risky one. Use “nudges” at the moment of choice: if multiple low-value orders look like a split, ask the requester to confirm they are unrelated. Shorten forms. Default to compliance unless a person actively chooses the alternative.
Above all, rapid feedback is critical. People change behaviour more from seeing anomalies flagged and corrected than from posters, policy statements or training.
Where does this appear in most audit programmes?
How can we anticipate how people might respond?
Here I found lessons from government campaigns instructive. A UK Government paper, “A behavioural approach to anticipating unintended consequences”, urges policymakers to use its IN CASE framework to consider six behavioural lenses. I think three of these – how people compensate, what controls signal, and what emotions they evoke – help us judge whether a control is appropriate and guide our organisations to make it work better in real life.
Take purchase order approvals. Tightening thresholds sounds sensible until you look at what really happens:
- Compensatory behaviour: staff split orders to stay below the threshold, or use corporate cards to bypass delay.
- Signalling: the extra approval sends a message of distrust, making people hide workarounds.
- Emotional impact: people feel blocked and blamed so frustration replaces accountability.
The fix isn’t tougher policing but smarter design. For example:
- Approve project budgets, not transactions: give managers an upfront spending envelope they can draw down. It removes the incentive to split and signals trust.
- Spot and nudge, don’t police: use analytics to detect clusters of small orders and prompt: “These look related, combine them?” Neutral tone, one click.
- Show the queue time: display average approval times and live status so users see progress and stay patient.
- Fast lane for low-risk buys: catalogue items auto-approve; outliers get scrutiny.
- Add a one-line intent check: when someone raises multiple small orders, ask: “Is this part of a larger need?” The act of answering slows gaming.
- Change the story: drop compliance language. Say: “Bundling related needs helps us buy faster and at better prices.” That signals partnership, not suspicion.
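The spot-and-nudge idea above can be expressed as a simple rule: flag groups of sub-threshold orders from the same requester to the same supplier, raised within a short window, whose combined value crosses the approval threshold. A minimal sketch follows; the threshold, window and field names are illustrative assumptions, not any particular ERP's schema:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical approval threshold and look-back window (assumptions for illustration)
THRESHOLD = 10_000
WINDOW = timedelta(days=7)

def possible_splits(orders, threshold=THRESHOLD, window=WINDOW):
    """Flag runs of sub-threshold orders that together exceed the threshold.

    `orders` is a list of dicts with 'requester', 'supplier', 'date' (datetime)
    and 'value' keys -- a stand-in for a purchase-order extract.
    Returns a list of flagged groups, each a list of related orders.
    """
    # Group sub-threshold orders by who raised them and for which supplier;
    # orders already above the threshold go through normal approval anyway.
    groups = defaultdict(list)
    for o in sorted(orders, key=lambda o: o["date"]):
        if o["value"] < threshold:
            groups[(o["requester"], o["supplier"])].append(o)

    flagged = []
    for related in groups.values():
        # Slide over the orders: any run starting at order i that stays inside
        # the window and whose total crosses the threshold looks like a split.
        for i in range(len(related)):
            run, total = [], 0
            for o in related[i:]:
                if o["date"] - related[i]["date"] > window:
                    break
                run.append(o)
                total += o["value"]
            if total >= threshold and len(run) > 1:
                flagged.append(run)
                break  # one nudge per requester/supplier pair is enough
    return flagged
```

In practice the output would feed the neutral prompt (“These look related, combine them?”) at the moment the next order is raised, rather than a retrospective exception report: the nudge works at the point of choice, the report only after the fact.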
The result is a control that still prevents unauthorised spend but also builds trust, reduces friction and lowers the urge to game the system.
The governance test
This is no longer a nice-to-have. The UK’s Corporate Governance Code requires Boards, from 2026, to attest to the effectiveness of material controls, not just their existence. That sets a higher bar.
“Designed and operating” is insufficient if the design ignores human behaviour. A control that fails under pressure isn’t effective.
Boards should press for evidence. Internal audit should provide it.
Which behaviour-driven failure modes could undermine our material controls? What proof is there that design anticipates them? Where do we still rely on policing rather than embedding compliance in workflow and defaults? And how is internal audit adapting its assurance methods to test these behavioural dimensions?
The real measure
Controls only work when people make them work. That requires design that fits human psychology, not wishful thinking.
The real test is simple: does this control still work when staff are rushing to meet a deadline, trying to help a colleague, or struggling with clunky systems?
If the answer is no, it was never well designed in the first place.