EU AI Act High-Risk Deadline Extended to December 2027
The high-risk provisions deadline moved from August 2, 2026 to December 2, 2027 following the Digital Omnibus political agreement on May 7, 2026. Here is exactly what changed, what did not, and what your organization should do now.
May 10, 2026 · 9 min read · Belto AI

Key facts at a glance
Agreement reached: May 7, 2026
Annex III high-risk systems: December 2, 2027 (was August 2, 2026)
Annex I embedded products: August 2, 2028 (was August 2, 2027)
Article 50(2) watermarking: December 2, 2026
Article 5 prohibitions: Active since February 2, 2025 — unchanged
Status: Provisional political agreement. Formal adoption expected before August 2026.
On May 7, 2026, the Council of the EU and the European Parliament reached a provisional political agreement to extend the enforcement deadline for high-risk AI system provisions under Annex III of the EU AI Act. The deadline moved from August 2, 2026 to December 2, 2027 — a 16-month extension. For AI systems embedded in regulated products under Annex I, the new deadline is August 2, 2028.
This is the most significant update to the EU AI Act enforcement timeline since the regulation entered into force in August 2024. If your organization builds, deploys, or distributes AI systems in the EU market, this change directly affects your compliance planning.
Why the deadline was extended
The extension was not a change of political will. It was a recognition of a practical problem: the compliance infrastructure needed to implement the high-risk AI provisions was not ready in time.
The EU AI Act requires organizations operating high-risk AI systems to meet obligations that depend on technical standards that were not finalized. National competent authorities across EU member states had not yet been fully designated. Conformity assessment bodies were not operational in most member states. The harmonized standards that providers need in order to demonstrate compliance were still in development.
In November 2025, the European Commission put forward the Digital Omnibus on AI — a targeted set of amendments designed to link the application timeline for high-risk AI rules to the actual availability of these compliance tools. The agreement acknowledges that AI Act compliance requires significant preparation and the introduction of supporting technical standards. The delay is intended to provide businesses with additional time to achieve compliance, while underscoring the expectation that implementation efforts should already be underway.
The process was not straightforward. The second political trilogue on April 28, 2026 ended without agreement after twelve hours of negotiations. The key sticking point was whether AI systems embedded in products already governed by existing EU sectoral safety legislation should be fully exempted from the AI Act or governed solely by sectoral rules — a demand driven primarily by German industry and supported by Chancellor Merz. A compromise was eventually reached at the third trilogue on May 7, 2026.
What the agreement changes
The Annex III high-risk deadline
Obligations on Annex III high-risk AI systems apply from December 2, 2027 — covering use cases in biometrics, critical infrastructure, education, employment, law enforcement, essential services, migration, and the administration of justice.
This covers the full set of Annex III categories: biometric identification and categorization systems, critical infrastructure safety components, educational and vocational training systems, employment and HR management systems, systems involved in access to essential services including credit scoring, law enforcement tools, migration and border control systems, and systems used in judicial proceedings.
The Annex I embedded products deadline
AI systems embedded in regulated products have until August 2, 2028. This covers AI functioning as a safety component in products governed by EU sectoral safety legislation — medical devices, machinery, toys, lifts, pressure equipment, radio equipment, and others listed in Annex I.
Article 50(2) watermarking obligations
The grace period for Article 50(2) transparency obligations was compressed from six months to three. The obligation to mark AI-generated content in machine-readable format now applies from December 2, 2026 — not February 2027 as originally proposed. This applies to providers of AI systems that generate synthetic audio, image, video, or text content. If your organization deploys generative AI, this obligation is not covered by the Annex III extension.
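Article 50(2) requires a machine-readable marking but leaves the format to technical standards still being developed (industry work such as C2PA-style provenance metadata points in the likely direction). As a minimal illustration only — the field names and sidecar approach below are assumptions, not a mandated schema — a generation pipeline could emit a machine-readable provenance record alongside each output:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def write_provenance_sidecar(content_path: str, generator: str) -> Path:
    """Write a machine-readable AI-generation marker next to a generated file.

    Illustrative only: Article 50(2) mandates machine-readable marking but
    does not prescribe this schema. Field names here are assumptions.
    """
    sidecar = Path(str(content_path) + ".provenance.json")
    record = {
        "ai_generated": True,                        # the core disclosure
        "generator": generator,                      # system that produced the content
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "marking_basis": "EU AI Act Article 50(2)",  # why the marker exists
    }
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar
```

A sidecar file is the simplest possible sketch; real implementations will more likely embed the marking in the content itself (e.g. image metadata or signed manifests) once harmonized standards specify how.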
A new prohibition added
The co-legislators added a new provision prohibiting AI systems used to generate non-consensual sexual or intimate content and child sexual abuse material. This prohibition joins the existing Article 5 banned practices and applies upon formal adoption of the agreement.
What the agreement does not change
This is the more important list for most organizations.
Article 5 prohibited practices remain fully active. The eight categories of banned AI have been enforceable since February 2, 2025. These include subliminal manipulation, exploitation of vulnerable groups, real-time biometric identification in public spaces for law enforcement, social scoring by public authorities, predictive policing based solely on profiling, emotion recognition in workplaces and education, biometric categorization based on sensitive attributes, and facial recognition database scraping. The agreement does not affect any of these.
Article 4 AI literacy obligations remain active. All providers and deployers have been required to take measures to ensure sufficient AI literacy for their staff since February 2, 2025.
GPAI model obligations remain active. General-purpose AI model obligations under Articles 51 to 56 have applied since August 2, 2025.
All other Article 50 transparency obligations remain on schedule. Deployers using AI systems for emotion recognition, biometric categorization, or conversational AI interfaces have transparency obligations that apply from August 2, 2026. Only the Article 50(2) machine-readable content marking obligation moved.
The obligations themselves did not change. Risk management requirements, data governance obligations, technical documentation, human oversight mechanisms, conformity assessment procedures, EU database registration — none of these changed. Only the date by which they must be met changed for Annex III and Annex I systems.
The full updated enforcement timeline
| Date | What applies | Status |
|---|---|---|
| August 2, 2024 | EU AI Act enters into force | Active |
| February 2, 2025 | Article 5 prohibited practices, Article 4 AI Literacy | Active |
| August 2, 2025 | GPAI model obligations (Articles 51-56) | Active |
| August 2, 2026 | All other Article 50 transparency obligations for new systems | Upcoming |
| December 2, 2026 | Article 50(2) watermarking for AI-generated content | Upcoming |
| December 2, 2027 | Annex III high-risk systems — full obligations | Extended deadline |
| August 2, 2028 | Annex I embedded product high-risk systems | Extended deadline |
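Because the Act applies in stages, a quick way to sanity-check which obligations are in force on a given date is to encode the table above as a lookup. This is a sketch using the dates from the revised timeline; the obligation labels are shorthand, not official terminology:

```python
from datetime import date

# Application dates from the revised enforcement timeline
# (provisional political agreement of May 7, 2026).
TIMELINE = {
    date(2025, 2, 2): "Article 5 prohibitions + Article 4 AI literacy",
    date(2025, 8, 2): "GPAI model obligations (Articles 51-56)",
    date(2026, 8, 2): "Other Article 50 transparency obligations",
    date(2026, 12, 2): "Article 50(2) machine-readable content marking",
    date(2027, 12, 2): "Annex III high-risk obligations",
    date(2028, 8, 2): "Annex I embedded-product high-risk obligations",
}

def obligations_in_force(on: date) -> list[str]:
    """Return the obligations whose application date has passed by `on`."""
    return [label for start, label in sorted(TIMELINE.items()) if on >= start]
```

For example, on January 1, 2027 the Article 50(2) marking obligation is already in force while the Annex III obligations are not — the gap that makes the December 2026 date easy to miss.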
Will there be another extension?
Almost certainly not. The political argument for delay has been exhausted. The Cypriot Presidency framed this agreement as a flagship deliverable. The European Parliament secured the nudifier ban it had made a condition of agreement. Industry secured the timeline it needed for compliance standards to be finalized. All parties got something from this negotiation. There is no obvious constituency for a second extension, and significant political cost to pursuing one.
The revised provisions do not fundamentally alter the AI Act's core architecture. The regulation is intact. The obligations are intact. The fines are intact — up to EUR 35 million or 7% of global annual turnover for prohibited practice violations. The only thing that moved was the date.
What your organization should do now
The extension creates time. How that time is used will determine whether organizations reach December 2027 in a defensible compliance position or scramble under deadline pressure.
First: determine whether the extension applies to your system. Not all provisions were extended. If your organization uses AI systems in ways that engage Article 5 prohibitions, Article 4 literacy requirements, GPAI model obligations, or Article 50 transparency obligations, those rules are already in force. The December 2027 date applies only to Annex III high-risk system obligations.
Second: do not use the extension to delay your scoping work. Determining whether your AI system falls within Annex III, understanding which obligations apply to your specific deployment context, and identifying your entity type as provider or deployer can and should be completed now. None of this work depends on harmonized standards being finalized.
Third: use the time to build compliance in rather than retrofit it. Organizations that begin compliance work now will integrate risk management, data governance, human oversight mechanisms, and technical documentation into how they build. Organizations that wait will retrofit compliance onto systems not designed with these requirements in mind. Retrofitting is more expensive and operationally disruptive.
Fourth: note the December 2026 watermarking deadline if you use generative AI. If your organization deploys AI systems that generate synthetic audio, image, video, or text content, the Article 50(2) machine-readable marking obligation applies from December 2, 2026 — seven months from now. This is not covered by the Annex III extension.
Fifth: prepare for formal adoption. The May 7 agreement is provisional. Formal adoption by both Parliament and Council is expected before August 2026. Until then, the original August 2026 deadline technically remains in force. Plan on the basis that the extension will be confirmed, but do not treat it as legally certain until the Official Journal text is published.
The extension is not a reprieve. It is a better runway.
The EU AI Act is not going away. The obligations it places on organizations deploying AI in EU markets are not going away. The fines are not going away. What changed is that the runway to build a defensible compliance posture got 16 months longer.
Organizations that treat December 2027 as a real constraint and begin structured compliance work now will be in a significantly stronger position than those that wait. The standards will be finalized. The conformity assessment bodies will be operational. The national competent authorities will be designated. When December 2027 arrives, enforcement will begin.
The regulation did not change. The obligations did not change. The deadline did.
Based on the provisional political agreement announced May 7, 2026. Subject to formal adoption. This article does not constitute legal advice.
Understand your specific obligations
The timeline above applies to the regulation broadly. Where your AI system stands depends on what it does, where it is deployed, and how your organization is involved in its development.
Belto AI is a compliance intelligence platform that maps regulatory obligations to your specific AI system and keeps your compliance posture current as regulation evolves. Organizations in hiring, financial services, healthcare, education, and other regulated sectors can use Belto AI to understand their Annex III classification, their obligations as a provider or deployer, and what they need to have in place before December 2027.
Get a compliance scan
ABOUT BELTO
Belto monitors global AI regulatory frameworks in real time, maps every change to your specific AI system, and produces structured compliance intelligence your legal and engineering teams can act on. No system integration required.
Request early access →