Accounting has embraced algorithmic tools with remarkable speed. Machine learning systems flag anomalies. Natural language processing extracts contract terms. Predictive models forecast cash flows and credit risk. The efficiency gains are real and measurable. Work is processed faster, more consistently, and at greater scale than ever before. For busy practices under margin pressure, this progress can feel unambiguously positive.
Yet beneath these gains, the same systems that enhance speed and coverage may be steadily eroding the deep professional judgement that distinguishes expert accountants from automated processing. This is not because algorithms are flawed, but because of how they reshape the cognitive conditions under which judgement develops.
Understanding this shift matters for firms that wish to preserve long-term expertise, not just short-term efficiency.
The ‘Algorithmocene’ and the attention economy
We are entering what some researchers describe as the ‘Algorithmocene’: a period in which algorithms actively shape how attention, effort and judgement are distributed.
For accounting practices, this manifests as an internal attention economy. Every tool competes for cognitive resources. For example, email notifications demand immediate response; dashboards pull focus toward reactive monitoring; and automated flags shift attention from pattern recognition to exception handling.
While each individual tool is useful, collectively, they create an environment optimised for rapid switching rather than sustained depth.
This matters because professional judgement does not develop under fragmented attention. It emerges through prolonged engagement with complex, ambiguous material, where patterns are noticed gradually and intuition is shaped through repetition.
Judgement needs focus
Professionals who trained before widespread algorithmic mediation developed their expertise in environments that assumed sustained focus. Those entering practice more recently have developed their cognitive habits in environments of continuous interruption and novelty. This is not a question of work ethic or motivation. It reflects genuine cognitive adaptation to the conditions in which skills are learned.
When anomalies are always pre-flagged, junior staff may never develop the perceptual sensitivity that comes from scanning full statements themselves. When disclosure language is auto-generated, opportunities to internalise materiality judgement are reduced. When classifications are suggested by default, exposure to boundary cases diminishes.
Consider a common scenario: a consolidation review interrupted by emails, chat messages and dashboard alerts. Over half the time may be consumed not by analysis, but by the cognitive overhead of task switching. More importantly, the work may never reach the depth required to surface subtle issues.
Across an organisation, this becomes structural. Young professionals rarely experience uninterrupted analytical periods long enough to develop deep judgement.
The algorithmic trade-off
None of this implies that algorithmic systems are undesirable. On the contrary, they excel at tasks humans struggle with, such as large-scale pattern recognition, consistent rule application, and tireless monitoring.
The challenge is not whether to use these tools, but how to deploy them without degrading the human capabilities that matter most in professional services.
Research on human–AI collaboration consistently shows that the best outcomes arise when systems are designed around human cognitive architecture, rather than simply automating tasks end-to-end.
In practice, this means treating algorithms as partners in judgement development, not replacements for it.
Practical strategies for preserving judgement
Organisations navigating these dynamics can take practical steps:
- Audit your attention economy: Map sources of interruption and distinguish what is genuinely urgent from what is merely habitual.
- Create protected depth periods: Designate time for complex analytical work with minimal interruption – even a few hours a week can make a meaningful difference.
- Make algorithms transparent: Where systems flag items, expose the underlying patterns to turn automation into a learning tool.
- Preserve learning cases: Deliberately withhold automation on selected engagements so junior staff build pattern recognition rather than consume outputs.
- Measure judgement, not just speed: Track indicators such as quality of reasoning, ability to identify novel risks and client perceptions of technical depth.
- Design roles around cognitive strength: Accept that professionals trained in different eras bring different capabilities and structure teams accordingly.
Algorithmic systems will undoubtedly continue to advance. Professional services firms should design their practices to preserve the human capabilities that algorithms cannot replace.
Efficiency gains appear quickly in metrics. Judgement erosion does not. It surfaces later, through regulatory failures, missed risks and declining trust.
The challenge for accounting is not to choose between humans and algorithms, but to design environments where both strengthen one another. Practices that understand their internal attention economy and invest deliberately in judgement development will retain their professional edge in an increasingly automated world.
Accounting Intelligence
This content forms part of ICAEW's suite of resources to support members in business and practice to build their understanding of AI, including opportunities and challenges it presents.