The European Commission has published a new Code of Practice designed to make it easier to comply with the EU AI Act.
Unveiled on 10 July, the Code is the product of a year-long development process, led by 13 independent experts, that gathered input from more than 1,400 stakeholders. Those contributors include industry figures, academics, civil society groups, rightsholders and member states represented on the EU AI Board.
Formally titled the General-Purpose AI (GPAI) Code of Practice, the voluntary framework is split into three chapters:
- Transparency: offers providers of general-purpose artificial intelligence (AI) tools a Model Documentation Form, enabling them to easily compile the information about their products needed to meet the Act's transparency obligations.
- Copyright: sets out how signatories should draw up, implement and maintain a policy for complying with EU laws on copyright and related rights, covering all general-purpose AI models they place on the European market.
- Safety and security: outlines state-of-the-art practices and procedures for managing systemic risks linked to more advanced general-purpose AI models. Given that focus, this chapter is relevant mainly to a small subset of AI providers.
Streamlined process
Commenting on that third chapter, the Commission noted in a press release: “Some general-purpose AI models could carry systemic risks, such as risks to fundamental rights and safety, including lowering barriers for the development of chemical or biological weapons, or risks related to loss of control over the model. The AI Act mandates that model providers assess and mitigate these systemic risks.”
In an accompanying FAQ, the Commission explains that signing up to and following the Code will offer AI providers “a simple and transparent way to demonstrate compliance” with the Act. Because enforcement will focus on monitoring providers’ adherence to the Code, the Commission says, this streamlined compliance process will result in greater predictability and a reduced administrative burden.
Crucially, the FAQ points out, the Code does not contain any further requirements for providers to observe. Rather, it notes: “It serves as guidance to help providers meet their existing obligations under the AI Act without creating new ones, extending existing ones, or imposing additional burdens. It serves as a voluntary tool that helps providers demonstrate compliance with the AI Act’s binding provisions, without exceeding its scope.”
Further guidelines
In terms of next steps, the Commission and member states will assess the adequacy of the Code ahead of making formal endorsements. Meanwhile, all providers of general-purpose AI products with current or planned EU operations will be invited to adhere to the Code, with the AI Office poised to announce details “soon” on how to sign up.
As a complementary measure, the Commission will issue guidelines on general-purpose AI, to be published before the relevant obligations in the EU AI Act enter into force. The guidelines will clarify which providers fall within, and outside, the scope of the general-purpose rules.
Turning to near-term compliance, the Commission stressed that providers placing general-purpose AI models on the EU market must meet their respective AI Act obligations from 2 August this year. Any provider planning to release a model that may pose systemic risks must notify the AI Office without delay.
Important step
In the first year of the new regime, from 2 August onwards, the AI Office will work closely with providers – particularly Code signatories – to ensure they can continue to bring models to market in a timely fashion.
Providers that do not fully implement every Code commitment immediately after signing up will not be considered in breach by the AI Office. Instead, it will treat them as acting in good faith and stand ready to collaborate with them on finding ways to ensure full compliance. From 2 August 2026, however, the Commission will enforce full compliance with the general-purpose obligations, with violations subject to fines.
Providers of AI models placed on the market prior to 2 August this year must comply with their obligations under the Act by 2 August 2027.
In a statement, Henna Virkkunen, Commission Executive Vice-President for Tech Sovereignty, Security and Democracy, said that the publication of the Code “marks an important step in making the most advanced AI models available in Europe not only innovative but also safe and transparent”.
She added: “Co-designed by AI stakeholders, the Code is aligned with their needs. Therefore, I invite all general-purpose AI model providers to adhere to the Code. Doing so will secure them a clear, collaborative route to compliance with the EU’s AI Act.”
ICAEW Head of Tech Policy Esther Mallowah says: “Many organisations implementing generative AI rely heavily on providers of GPAI models – not just for supplying the models, but for managing model development risks and providing information to help manage downstream risks. However, some risks outside the control of consumer organisations may impact them.”
As such, Mallowah adds: “Adherence to the Code could be a helpful way for developers to demonstrate their consideration of risks and could provide an accessible way for organisations to gain some comfort over the GPAI models they implement or use.”