Is the EU Artificial Intelligence Act too restrictive?

Author: ICAEW Insights

Published: 18 Nov 2025

The EU’s Artificial Intelligence Act (AI Act) fully comes into force in 2026, but could its regulatory approach put European companies at a disadvantage in the race for technological innovation?

Regulation of artificial intelligence (AI) is taking shape across the globe. The EU’s AI Act is the first comprehensive legal framework for the technology: it addresses the risks and challenges AI poses and aims to position Europe to play a leading role globally.

Much like GDPR set a precedent for data protection, “this pioneering legislation has the potential to establish a global benchmark for AI governance”, says Despina Spatha, Compliance Director and DPO at credit management software provider Qualco in Greece. But will the legislation clash with that of other jurisdictions and deter innovation?

“The AI Act has been generally welcomed in the EU as a pioneering regulatory framework,” Gianluca Campus, Director of Legal Operations, PwC Italy, says. “However, reactions have been mixed: civil society organisations mainly praise its risk-based approach, while tech companies and industry stakeholders express concerns about compliance costs and innovation constraints.”

The Act adopts a risk-based approach, classifying AI systems into four categories: minimal or no risk, limited risk, high risk, and unacceptable risk. Each category carries distinct obligations, with high-risk systems subject to the most stringent regulatory requirements, including risk management, human oversight, data governance, continuous monitoring and independent conformity assessments.
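For illustration only, here is a minimal sketch of how that four-tier structure might be represented in code. The tier names come from the Act itself; the obligation summaries are paraphrased from the description above, and the function is hypothetical rather than any official tool:

    from enum import Enum

    class RiskTier(Enum):
        """The AI Act's four risk categories (paraphrased)."""
        UNACCEPTABLE = "unacceptable"  # prohibited practices
        HIGH = "high"                  # strictest obligations
        LIMITED = "limited"            # transparency duties
        MINIMAL = "minimal"            # no new obligations

    # Headline obligations per tier, summarised from the Act's structure.
    OBLIGATIONS = {
        RiskTier.UNACCEPTABLE: "Banned from the EU market.",
        RiskTier.HIGH: ("Risk management, human oversight, data governance, "
                        "continuous monitoring, conformity assessment."),
        RiskTier.LIMITED: "Users must be told they are dealing with AI.",
        RiskTier.MINIMAL: "No additional obligations under the Act.",
    }

    def obligations_for(tier: RiskTier) -> str:
        """Look up the headline obligation for a given risk tier."""
        return OBLIGATIONS[tier]

    print(obligations_for(RiskTier.HIGH))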

Business impact

The impact on business is not to be underestimated. Organisations will need to meet new compliance requirements around transparency, accountability, and fairness in AI systems. This may require significant adjustments to internal processes, governance structures, and risk management frameworks.

Spatha says that the Act has been received with “both caution and hope. On the one hand, it represents a significant step in the right direction toward establishing ethical norms and regulating artificial intelligence. Given the rapid pace of AI advancements, many view it as a crucial measure to safeguard both the public interest and fundamental rights.”

At the same time, a major challenge is the scale of resources organisations will need in order to comply. Spatha adds that concerns have been raised about the Act’s complexity and the burden it places on businesses and organisations.

“Compliance may entail substantial administrative and financial challenges, leading some stakeholders to question whether the regulatory framework is overly demanding, particularly for start-ups and SMEs,” she says.

Given the Act’s imposition of significant obligations on organisations developing, deploying or distributing AI, particularly in high-risk sectors like finance or healthcare, Campus advises organisations to “adopt a proactive compliance strategy, integrating legal and technical safeguards to mitigate risks and maintain competitive edge and to avoid the heavy penalties that the AI Act imposes in case of non-compliance with its obligations”.

Fines for non-compliance can reach up to €35m or 7% of a company’s worldwide annual turnover, whichever is higher, a sum that could seriously affect an organisation.
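As a back-of-the-envelope illustration of the “whichever is higher” rule, the following sketch applies the thresholds above; the function name and example turnover figure are ours, not the Act’s:

    def max_fine_eur(worldwide_annual_turnover: float) -> float:
        """Upper bound on a fine for the most serious AI Act breaches:
        the higher of EUR 35m or 7% of worldwide annual turnover."""
        return max(35_000_000.0, 0.07 * worldwide_annual_turnover)

    # A company turning over EUR 1bn faces a cap of EUR 70m, not EUR 35m.
    print(f"EUR {max_fine_eur(1_000_000_000):,.0f}")  # EUR 70,000,000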

Overlapping regulations

Although the comprehensive Act allows for a unified compliance strategy, companies will also need to navigate overlapping regulatory frameworks, such as GDPR, sector-specific AI laws, and national AI strategies, which introduce legal and operational complexity.

Another crucial challenge is that few businesses operate purely within a single jurisdiction these days, so regulatory conflict is likely to affect most companies. While a harmonised compliance approach is possible, it will require a deep understanding of cross-jurisdictional legal requirements.

Spatha says: “The risks for organisations are significant, not just in terms of financial penalties, but also operational challenges, reputational damage, and the ongoing task of ensuring thorough and effective compliance. Businesses will need to be proactive, well-resourced, and well-informed to navigate this new regulatory landscape effectively.”

Too restrictive?

The prevailing view seems to be that the EU’s AI rules err on the side of being too restrictive, while the US is leaning towards the other extreme of being perhaps too relaxed.

Spatha agrees. “There is concern that the EU AI Act could be more restrictive compared to the approaches taken by other countries, which may put European companies at a competitive disadvantage. In a global market, where regions like the US or China have less stringent regulations, they may have the flexibility to innovate and deploy AI technologies more rapidly.”

Delays ahead?

In July, with less than a month to go before parts of the EU’s AI Act came into force, representatives of big US tech companies, such as Google owner Alphabet and Facebook owner Meta, asked the European Commission to delay some provisions in the Act.

Spatha says that experts have identified notable gaps in the Act. “A key concern is the assumption that most AI systems pose ‘low to no risk’, which may not sufficiently account for the broader and evolving risks associated with AI technologies. Addressing these shortcomings is imperative to ensure the Act’s effectiveness in practice,” she says.

The Act is due to apply in full by August 2026; in the meantime, a raft of staggered deadlines governs the intervening implementation steps, some of which have already passed. The rules for high-risk AI systems embedded in regulated products have an extended transition period until 2 August 2027.

The ban on AI systems posing unacceptable risks has applied since 2 February 2025. The code of practice for general-purpose AI was published in July 2025, while the transparency and related rules for general-purpose AI systems took effect 12 months after the Act entered into force, on 2 August 2025.

The EU AI Act represents a major step towards regulating artificial intelligence in a way that prioritises ethics, fundamental rights and public safety. But it also raises questions about how restrictive it is in comparison to other approaches, and whether that could put European businesses at a disadvantage in a global marketplace.

Esther Mallowah, Head of Tech Policy, ICAEW, says: “Regulation and innovation are often seen as opposing forces. But proportionate, well-designed and implemented regulation can provide the clarity and confidence that businesses need to innovate. While the EU AI Act may have its challenges, it does provide some degree of certainty on what is required of the various players in the AI lifecycle and it could be a sensible starting point for AI regulation. The expectation is that it will evolve as lessons are learnt from its implementation.” 
