
EU Delays Key AI Act Guidance: What the 2025 Postponement Means for AI Compliance

Imagine you’re building the next ChatGPT or Gemini in Europe—or you’re a compliance lead at a company racing to meet the EU’s AI Act deadlines. You’ve been waiting for clear guidance from Brussels so you don’t step on regulatory landmines, risk multi-million euro fines, or halt innovation. Suddenly, word drops: the European Commission might delay its essential AI Act guidance—specifically, the Code of Practice for general-purpose AI—by six months, pushing publication to late 2025. Now, the roadmap for compliance just got a lot blurrier.

If you’re wondering what this means for your business, your product pipeline, or the future of AI in Europe, you’re not alone. Let’s break down the implications, the revised timeline, and what AI leaders, policy teams, and innovators need to know to navigate the uncertainty.


Why the EU AI Act Guidance Delay Is Making Headlines

The EU AI Act is the world’s most ambitious attempt to regulate artificial intelligence, aiming to set a global gold standard for responsible development and use. Its phased rollout—from August 2024 to August 2027—affects every company deploying or building AI in the EU. But much of the Act’s practical impact hinges on the European Commission’s guidance documents, especially the Code of Practice for general-purpose AI models (GPAI) that power advanced systems like ChatGPT, Gemini, and more.

Originally slated for May 2025, the guidance now faces a delay of up to six months, meaning the final text may not arrive until after the August 2, 2025 compliance deadline for GPAI providers has already taken effect. This is more than a scheduling hiccup; it’s a regulatory curveball for AI businesses across the continent and beyond.

Let’s explore why this delay matters, how it fits into the broader AI Act implementation, and what you can—and should—do next.


The EU AI Act Timeline: What’s Happening, When, and Why It’s Complicated

Before we dive into the implications of the delay, let’s set the stage.

The Phased Rollout: EU AI Act Timeline at a Glance

Here’s how the EU AI Act is being implemented:

  • August 1, 2024: The AI Act officially enters into force.
  • February 2, 2025: Prohibitions on “unacceptable risk” AI systems and AI literacy obligations become effective.
  • August 2, 2025: Requirements for general-purpose AI providers (technical documentation, copyright policies, etc.), governance, and penalties come into effect.
  • August 2, 2026: High-risk AI system rules kick in—this is the end of the main 24-month transition period.
  • August 2, 2027: Final provisions for high-risk systems as safety components reach full effect.

These staggered milestones give organizations time to adapt, but they also create a regulatory relay race—one that just got trickier thanks to the delayed guidance.


The Importance of the Code of Practice for General-Purpose AI (GPAI)

If you’re building or deploying general-purpose AI (think language models, vision models, or multi-modal AI), the upcoming Code of Practice is your playbook. It’s meant to answer crucial questions:

  • What constitutes “sufficient” technical documentation?
  • How should providers inform downstream users about model capabilities and risks?
  • What copyright compliance steps are required, especially around opt-out provisions?
  • What does adequate risk management look like for models with systemic impact?

Without this guidance, companies face a compliance guessing game, where the stakes include fines of up to €15 million or 3% of global annual turnover for GPAI violations, rising to €35 million or 7% for the Act’s most serious breaches.


Industry Pushback: Why AI Companies Are Sounding the Alarm

AI industry leaders have been vocal about the tight deadlines and the lack of granular guidance. Here’s why:

  • Technical adaptation is hard: Rolling out robust risk management, incident reporting, and cybersecurity for complex AI models takes time and resources.
  • Financial risk is real: The EU isn’t pulling any punches—non-compliance can mean catastrophic fines.
  • Regulatory ambiguity: Definitions of what counts as an “AI system” remain fuzzy, fueling fears of overreach or regulatory “AI washing.”
  • Extended transition periods are needed: Many in the AI sector argue that compliance deadlines should be aligned with the publication of practical guidance—not precede it.

This sentiment was echoed in recent reports from the European Commission’s Digital Strategy site and from ModelOp, both of which highlight industry concerns about readiness and legal certainty.


What the Delay Means for GPAI Compliance: Risks and Realities

1. Legal Uncertainty for Providers and Developers

With guidance arriving late, providers must interpret key obligations themselves or risk non-compliance. Uncertainty persists around:

  • Required detail for technical documentation
  • What constitutes effective copyright compliance
  • Risk management frameworks for systemic AI models

2. Compliance Preparation Gets Harder

Most companies start preparing well before deadlines. A delay in detailed guidance means organizations must:

  • Rely on best guesses and external legal advice
  • Duplicate work if the final guidance diverges from early efforts
  • Possibly rework compliance programs, burning time and budget

3. Potential Chilling Effect on Innovation

Smaller AI players and open-source developers may hesitate to release new models or features for fear of inadvertent violations—slowing the pace of responsible AI innovation.

4. Risk of Regulatory “AI Washing”

As definitions stay murky, some vendors may “rebrand” products as non-AI to duck compliance, while others over-label to signal compliance—confusing buyers and enforcement agencies alike.


Major Compliance Requirements for GPAI Providers: What’s at Stake?

Let’s get specific. The EU AI Act imposes a new set of obligations for GPAI providers:

Core Obligations (Effective August 2, 2025)

  • Technical Documentation: Providers must detail model architecture, data sources, and training methodologies (a minimal documentation sketch follows this list).
  • Information Sharing: Obligatory disclosures to downstream users and developers about model limitations, intended use, and risks.
  • Copyright Compliance: Robust policies to honor opt-out requests—especially for datasets scraped from the open web.
  • Training Content Summaries: Providers must publish summaries of datasets used to train the model.
  • Cooperation with Regulators: Providers are expected to respond promptly to inquiries from the AI Office or national authorities.
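
The Act does not prescribe a format for this documentation, but many teams keep a machine-readable record they can regenerate with each release. Below is a minimal sketch in Python; the field names are entirely hypothetical, and the final Code of Practice may require different or additional content:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class GPAIModelRecord:
    """Illustrative record of the documentation a GPAI provider
    might maintain; fields are hypothetical, not mandated by the Act."""
    model_name: str
    version: str
    architecture_summary: str           # e.g. "decoder-only transformer, 7B params"
    training_data_sources: list[str]    # high-level provenance of training data
    training_content_summary_url: str   # link to the published training-content summary
    intended_uses: list[str]
    known_limitations: list[str]        # disclosed to downstream users
    copyright_optout_policy_url: str    # where opt-out handling is documented
    downstream_disclosures: dict[str, str] = field(default_factory=dict)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

record = GPAIModelRecord(
    model_name="example-lm",
    version="1.2.0",
    architecture_summary="decoder-only transformer, 7B parameters",
    training_data_sources=["licensed corpora", "public web crawl (opt-outs honored)"],
    training_content_summary_url="https://example.com/training-summary",
    intended_uses=["text generation", "summarization"],
    known_limitations=["may produce inaccurate output", "English-centric training data"],
    copyright_optout_policy_url="https://example.com/optout-policy",
)
print(record.to_json())
```

Keeping a single structured record like this makes it easier to regenerate documentation per release and to map fields onto whatever format the final guidance eventually requires.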

Extra Requirements for Systemic-Risk Models

If a GPAI model is classified as posing “systemic risk” (e.g., an advanced, widely deployed foundation model), extra obligations apply:

  • Risk Management: Proactively identify, assess, and mitigate risks from deployment and misuse.
  • Incident Reporting: Track and report serious incidents (e.g., large-scale failures, misuse) to authorities (a rough triage sketch follows this list).
  • Cybersecurity: Demonstrate robust protections against vulnerabilities and attacks.
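
Neither the Act’s text nor the draft Code fixes a reporting schema yet, but providers of systemic-risk models will need some internal pipeline for capturing and escalating serious incidents. Here is a rough sketch in Python; the severity thresholds and field names are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum

class Severity(Enum):
    MINOR = 1
    SERIOUS = 2    # hypothetical threshold for regulator notification
    CRITICAL = 3

@dataclass
class Incident:
    model_name: str
    description: str
    severity: Severity
    detected_at: datetime
    affected_users_estimate: int

def requires_regulator_report(incident: Incident) -> bool:
    # Hypothetical escalation rule: serious or critical incidents,
    # or unusually wide impact, enter the authority-notification workflow.
    return (incident.severity.value >= Severity.SERIOUS.value
            or incident.affected_users_estimate > 10_000)

incident = Incident(
    model_name="example-lm",
    description="Large-scale jailbreak enabling disallowed output",
    severity=Severity.SERIOUS,
    detected_at=datetime.now(timezone.utc),
    affected_users_estimate=25_000,
)
if requires_regulator_report(incident):
    print("Escalate: prepare incident report for authorities.")
```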

Penalties

Non-compliance by GPAI providers can trigger fines of up to €15 million or 3% of global annual turnover, whichever is higher; the Act’s steepest tier, reserved for prohibited practices, reaches €35 million or 7%. Either way, it is one of the toughest penalty regimes worldwide.
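
For illustration, using hypothetical figures: a provider with €2 billion in global annual turnover would face a ceiling of €60 million under the 3% tier (since 3% of €2 billion exceeds the €15 million floor), while a firm with €100 million in turnover would face the €15 million fixed maximum instead, because 3% of its turnover is only €3 million.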


How Should AI Companies Respond to the Delay?

This setback is frustrating, but not a reason to stand still. Here’s how smart organizations are staying proactive:

1. Continue Preparatory Work

Build out technical documentation templates, copyright management policies, and risk management frameworks based on existing drafts and industry best practices. If you’re already working with frameworks from groups like OECD or NIST, keep iterating.
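
As one concrete, purely illustrative example of preparatory work, a team drafting a copyright opt-out policy might start with a pre-ingestion check that honors robots.txt-style reservations before web content enters a training corpus. A minimal sketch in Python using only the standard library; real opt-out handling (e.g., parsing machine-readable text-and-data-mining rights reservations) would be considerably more involved:

```python
from urllib import robotparser
from urllib.parse import urlparse

def may_ingest_for_training(url: str, crawler_name: str = "ExampleTrainingBot") -> bool:
    """Illustrative pre-ingestion check: treat robots.txt as a proxy
    for an opt-out signal. A real policy would also handle machine-
    readable TDM rights reservations and takedown lists."""
    parts = urlparse(url)
    robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"
    parser = robotparser.RobotFileParser()
    parser.set_url(robots_url)
    try:
        parser.read()  # fetches and parses robots.txt
    except OSError:
        return False   # conservative default: skip if robots.txt is unreachable
    return parser.can_fetch(crawler_name, url)

if may_ingest_for_training("https://example.com/article"):
    print("OK to include in training corpus (per robots.txt).")
else:
    print("Opt-out signal detected or unknown; exclude from corpus.")
```

The conservative default (exclude when the signal cannot be checked) mirrors the good-faith posture discussed below: when guidance is unsettled, err toward the interpretation that is easiest to defend later.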

2. Monitor Official Communications

Stay glued to updates from the European Commission’s AI Office and reputable trade associations. Expect draft guidance, Q&A documents, and webinars over the coming year.

3. Engage in Industry Advocacy

If the delay is causing hardship or confusion in your sector, consider joining industry groups to push for more realistic transition periods and clearer interim guidance. The more feedback regulators get, the likelier they are to address pain points.

4. Document Your Reasonable Efforts

If enforcement bodies review your compliance efforts, demonstrating good-faith steps—even in the absence of final guidance—can make a difference. Track your policy development, training, and risk assessment work.


Navigating the “AI Washing” Phenomenon: Why Definitions Matter

“AI washing” happens when companies stretch the “AI” label to benefit from hype or, conversely, to dodge regulations. The AI Act’s definitions are under scrutiny, and until guidance lands, this could create:

  • Enforcement headaches: Regulators may struggle to distinguish between genuine AI systems and “AI-washed” products.
  • Market confusion: Buyers may not know if a product is truly regulated or just riding the AI wave.
  • Inconsistent compliance: Some firms might over-comply to play it safe, while others under-comply and risk future fines.

Until definitions are clarified, companies should err on the side of transparency—clearly documenting what their technology does (and doesn’t) do, and making risk statements public.


What About Open-Source AI? The Special Challenge

Open-source AI models present a unique compliance puzzle:

  • Who is responsible? The model’s original developer? The distributor? Each downstream user?
  • How to manage risk? Open-source models can be fine-tuned or integrated in unpredictable ways.
  • Balancing innovation and safety: The open-source community fears that heavy-handed regulation could stifle beneficial innovation.

The delayed Code of Practice is especially crucial here, as it’s expected to offer tailored guidance for open-source developers and users. Until then, the best approach is to follow transparency and responsible AI development practices as outlined by bodies like the European AI Alliance.


Enforcement: The Role of the EU AI Office

A new “AI Office” within the European Commission will oversee enforcement—evaluating models, investigating systemic risks, and coordinating with national authorities. Their mandate includes:

  • Reviewing technical documentation and risk management plans
  • Investigating incidents and breaches
  • Coordinating cross-border investigations

The delayed guidance could hamper their ability to apply uniform standards, meaning early enforcement may be uneven or subject to legal challenge.


Looking Ahead: What Could Happen Next?

Here’s what to watch for over the next 12-18 months:

  • Interim guidance: The Commission may publish Q&A sheets, draft codes, or sector-specific advice before the final Code of Practice drops.
  • Industry self-regulation: Expect more voluntary codes, frameworks, and best-practice sharing from AI industry groups.
  • Enforcement learning curve: Early enforcement actions may serve as precedent, clarifying gray areas for the rest of the market.
  • Potential further delays: If industry pushback grows, or technical challenges persist, more timeline shifts may follow.

Key Takeaways: How to Stay Ahead in an Age of Regulatory Uncertainty

  • Don’t wait for Brussels: Begin compliance prep now, drawing on drafts, international best practices, and legal counsel.
  • Prioritize transparency: Document your models’ capabilities, data sources, and intended uses—especially if you’re operating in the EU.
  • Engage and advocate: Participate in industry working groups and public consultations to shape the final guidance.
  • Monitor updates: Subscribe to Commission updates and industry newsletters for real-time news.

Ultimately, this delay is frustrating—but it’s also an opportunity to get compliance right, create industry-leading best practices, and help shape the future of AI in Europe.


FAQ: EU AI Act Guidance Delay and AI Compliance

Q1. What is the EU AI Act, and why is it important?
The EU AI Act is the first comprehensive regulatory framework for artificial intelligence, aiming to ensure safety, transparency, and accountability in AI systems across the European Union. It sets out phased obligations for developers and deployers of AI, especially those building general-purpose and high-risk models.

Q2. When will the key Code of Practice for GPAI be published?
As of now, the European Commission is considering a delay, with publication expected in late 2025 instead of the original May 2025 deadline.

Q3. Does the delay mean AI companies don’t have to comply by August 2025?
No. Legal obligations for GPAI providers still take effect on August 2, 2025. Companies are expected to make reasonable efforts to comply even if final guidance is late.

Q4. What are the penalties for non-compliance under the EU AI Act?
Fines for GPAI providers can reach up to €15 million or 3% of a company’s global annual turnover, whichever is higher. The Act’s top tier, reserved for prohibited practices, reaches €35 million or 7%.

Q5. How should organizations prepare in the absence of final guidance?
Start compliance work now, using draft documents, international standards, and legal advice. Document your efforts and stay alert for Commission updates.

Q6. Where can I find authoritative EU AI Act updates and resources?
Check the European Commission’s official AI policy page, ModelOp’s industry insights, and reputable trade groups like the European AI Alliance.


The bottom line: The EU AI Act’s delayed guidance is a headache—but proactive, transparent, and well-documented compliance will help your organization weather the uncertainty and build lasting trust. Stay ahead of the curve by preparing now and engaging with the evolving regulatory landscape.

Want more updates on AI regulation, compliance tips, and industry news? Subscribe to our newsletter or explore our latest insights on EU tech policy. The AI future is fast-moving—let’s navigate it together.

Discover more at InnoVirtuoso.com

I would love some feedback on my writing, so if you have any, please don’t hesitate to leave a comment here or on whichever platform is most convenient for you.

For more on tech and other topics, explore InnoVirtuoso.com anytime. Subscribe to my newsletter and join our growing community—we’ll create something magical together. I promise, it’ll never be boring! 

Thank you all—wishing you an amazing day ahead!
