EU’s Next Move in AI Governance: Decoding ‘AI Act 2.0’

Compliance officers and investors must track a dispersed but influential package of measures.
Moreover, strict penalties of up to seven percent of global turnover already loom for prohibited systems.
Meanwhile, rights-holder coalitions argue that generative model transparency remains inadequate.
In contrast, major developers welcome phased guidance that promises legal certainty and market stability.
Furthermore, venture capitalists scrutinise Brussels’ decisions, fearing over-regulation could divert the continent’s modest USD 10.8 billion in private AI investment abroad.
This article unpacks the moving parts, assesses remaining gaps, and outlines practical steps for corporate strategy.
Therefore, professionals can navigate Europe’s next regulatory wave with confidence and foresight.
Shifting EU Policy Landscape
Initially, the AI Act entered into force in August 2024, but only its first obligations became applicable in February 2025.
Subsequently, the Commission published timelines that push general-purpose model duties toward August 2025 and beyond.
Additionally, national authorities are setting up AI offices that will supervise market surveillance and coordinate cross-border investigations.
Nevertheless, no additional statute exists; instead, the executive relies on delegated acts and soft-law instruments.
Therefore, practitioners describe this incremental approach as phase two of EU AI governance, even though no branded “AI Act 2.0” has been tabled.
These milestones demonstrate rapid regulatory layering without a fresh bill.
Consequently, model obligations deserve closer inspection next.
Generative Model Obligations Explained
Generative models sit at the heart of the Commission’s new guidance, which classifies them as general-purpose AI, or GPAI.
Moreover, providers must publish model cards, disclose training data summaries, and assess systemic risks before deployment.
Notably, the guidance offers a “presumption of conformity” comparable to harmonised standards under product legislation.
In practice, many compliance officers call this layered approach “living AI Governance” because updates arrive quarterly.
Consequently, non-compliance carries steep penalties: breaches of the GPAI obligations can draw fines of up to €15 million or three percent of global turnover, while prohibited practices top out at €35 million or seven percent, whichever is higher in each case.
- Risk assessment covering misuse and emergent behaviours
- Training data transparency summaries for rights holders
- Energy efficiency reporting for sustainability metrics
- Robust red-team testing before market launch
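To make these duties auditable, compliance teams increasingly encode the checklist as data. Below is a minimal illustrative sketch in Python of an internal disclosure record; the field names and structure are assumptions for this article and do not reproduce the Commission’s official training data summary template.

```python
from dataclasses import dataclass, field

@dataclass
class GPAIDisclosureRecord:
    """Illustrative internal record for tracking GPAI checklist items.

    Field names are hypothetical, not the official EU template.
    """
    model_name: str
    provider: str
    training_data_summary: str                 # link to the public summary document
    systemic_risk_assessed: bool = False       # misuse and emergent-behaviour review done
    energy_report_kwh: float = 0.0             # training energy estimate for sustainability reporting
    red_team_reports: list = field(default_factory=list)  # references to pre-launch tests

    def outstanding_items(self) -> list:
        """Return checklist items that still block market launch."""
        gaps = []
        if not self.training_data_summary:
            gaps.append("training data summary missing")
        if not self.systemic_risk_assessed:
            gaps.append("systemic risk assessment pending")
        if not self.red_team_reports:
            gaps.append("no red-team report on file")
        return gaps

record = GPAIDisclosureRecord("example-model-v1", "ExampleCorp", training_data_summary="")
print(record.outstanding_items())
# ['training data summary missing', 'systemic risk assessment pending', 'no red-team report on file']
```

Encoding obligations this way lets audit tooling flag gaps automatically rather than leaving them buried in spreadsheets.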
The obligations transform vague ethical talk into concrete checklists.
However, their legal weight still depends on future delegated acts.
Evolving AI Liability Frameworks
While the AI Liability Directive vanished from the 2025 work programme, liability debates refuse to fade.
Moreover, the revised Product Liability Directive now treats software and AI systems as products, imposing strict liability for defects that cause harm.
Experts note that coherent AI governance also requires evidentiary disclosure tools for claimants.
Academic analyses stress that generative models complicate causation because emergent behaviours obscure fault attribution.
Therefore, lawyers predict more reliance on probabilistic evidence, such as anomaly logs, to satisfy burden-of-proof relaxations.
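As a concrete illustration of such evidence, here is a minimal sketch, assuming a JSON-lines log format, of how a provider might record anomalous outputs for later disclosure; every field name here is hypothetical.

```python
import json
import time
import uuid

def log_anomaly(model_id: str, prompt_hash: str, score: float,
                threshold: float = 0.9, path: str = "anomaly.log") -> None:
    """Append a structured anomaly record when an output exceeds a
    hypothetical anomaly score; timestamps support causation timelines."""
    if score <= threshold:
        return  # nothing unusual to record
    entry = {
        "id": str(uuid.uuid4()),        # stable record id for discovery requests
        "ts": time.time(),              # when the anomalous output occurred
        "model_id": model_id,
        "prompt_sha256": prompt_hash,   # hash rather than raw prompt, limiting data exposure
        "anomaly_score": score,
        "threshold": threshold,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

Timestamped, structured records of this kind are exactly what courts could weigh when applying burden-of-proof relaxations.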
Nevertheless, AI governance goals demand workable remedies, prompting calls to revive harmonised fault-based rules.
Cambridge research outlines four archetypal harm scenarios, ranging from copyright infringements to catastrophic misinformation in healthcare chatbots.
Europe strengthened strict product liability yet left fault rules in limbo.
Therefore, creators have amplified their criticism, as the next section shows.
Creators Voice Ongoing Concerns
Authors, musicians, and film guilds argue that training data disclosures remain opaque, undermining fair compensation.
Axel Voss warns of a “legal gap” that he believes favours large platforms over European talent.
Meanwhile, coalitions representing seven percent of EU GDP demand legislation forcing generative model providers to disclose full training datasets.
Moreover, campaigners urge the Commission to mandate dataset registries that list copyrighted works used during model training.
Professionals can enhance their policy expertise with the AI+ Ethics™ certification, which covers transparency tooling and governance audits.
Consequently, these stakeholder pressures may define the cultural dimension of upcoming AI governance debates.
Creative industries want stronger leverage over dataset access and consent.
In contrast, industry champions focus on practical compliance paths, reviewed next.
Industry Compliance Playbook
Large model providers publicly support the GPAI Code of Practice, seeing it as a shield against fragmented enforcement.
Furthermore, legal teams map each checklist item to internal controls to demonstrate AI governance maturity during audits. The investment stakes frame that effort:
- Global AI investment in 2024: USD 124.9 billion
- EU private AI spend: USD 10.8 billion
- Cultural sector share of EU GDP: 7%
Subsequently, several cloud vendors launched turnkey compliance dashboards that map internal telemetry to each GPAI requirement.
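No vendor’s schema is public in detail, so here is a minimal sketch of how such a dashboard might map telemetry to requirements; the requirement identifiers and check functions are hypothetical conventions, not official GPAI article numbers.

```python
# Hypothetical requirement IDs mapped to telemetry checks; the real GPAI
# obligations are not labelled this way -- this is an internal naming sketch.
CHECKS = {
    "gpai.transparency.model_card": lambda t: t.get("model_card_published", False),
    "gpai.transparency.data_summary": lambda t: t.get("data_summary_published", False),
    "gpai.safety.red_team": lambda t: t.get("red_team_runs", 0) >= 1,
    "gpai.sustainability.energy": lambda t: "energy_kwh" in t,
}

def compliance_status(telemetry: dict) -> dict:
    """Evaluate every mapped requirement against current telemetry."""
    return {req: check(telemetry) for req, check in CHECKS.items()}

telemetry = {"model_card_published": True, "red_team_runs": 3}
print(compliance_status(telemetry))
# {'gpai.transparency.model_card': True, 'gpai.transparency.data_summary': False,
#  'gpai.safety.red_team': True, 'gpai.sustainability.energy': False}
```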
Moreover, providers argue that clearer liability safe harbours are essential for open-source generative models to flourish in Europe.
Insurance firms have started pricing model risk, offering premium discounts when companies adopt documented mitigation strategies.
Therefore, aligning business risk management with AI governance metrics becomes a board-level imperative.
Industry sees compliance as a competitive differentiator and an investment magnet.
Nevertheless, future regulatory tweaks could reshape that playbook, as the final section explores.
Future Regulatory Scenarios
Policy watchers anticipate another legislative package after the 2026 Commission mandate begins, possibly reviving fault-based liability legislation.
Meanwhile, technical standards for watermarking and machine-readable labels will decide how enforcement scales across borders.
Additionally, any new AI governance draft will likely tighten dataset disclosure duties for generative models in critical sectors.
Consequently, companies should monitor pilot court cases that interpret the revised Product Liability Directive alongside emerging standards.
Researchers from standards bodies are also exploring cryptographic attestation to prove model provenance across the supply chain.
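A minimal sketch of that idea follows, using only Python’s standard library and a shared secret as a stand-in for the asymmetric signatures a real attestation scheme would use.

```python
import hashlib
import hmac
import json

SECRET = b"demo-key"  # placeholder; production schemes would use asymmetric keys

def attest_model(weights_path: str, model_id: str) -> dict:
    """Produce a toy provenance manifest: hash the weights file, then
    sign the manifest so downstream parties can detect tampering."""
    digest = hashlib.sha256(open(weights_path, "rb").read()).hexdigest()
    manifest = {"model_id": model_id, "weights_sha256": digest}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["attestation"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(manifest: dict) -> bool:
    """Recompute the attestation before trusting supplied weights."""
    claim = {k: v for k, v in manifest.items() if k != "attestation"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["attestation"])
```

Because the manifest travels with the weights, any party in the supply chain can verify provenance before deployment.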
National courts will likely reference early jurisprudence from consumer protection cases involving autonomous vehicles to interpret algorithmic damages.
In contrast, some policymakers propose regulatory sandboxes that grant temporary waivers, encouraging experimentation under supervisory oversight.
Regulatory direction remains dynamic but signals tougher transparency and accountability.
Therefore, forward planning today prevents costly surprises tomorrow.
Conclusion And Next Steps
Europe’s second wave of AI rule-making may feel fragmented, yet its trajectory is unmistakable.
Moreover, phased obligations, strict product liability, and voluntary codes already shape procurement decisions.
Consequently, compliance teams must synchronise legal, engineering, and policy functions before August deadlines materialise.
Additionally, creators’ advocacy will probably drive tougher dataset disclosure standards in upcoming legislative cycles.
Meanwhile, courts will clarify liability thresholds through early test cases, offering valuable precedent.
Nevertheless, firms that embed transparency tooling today will enjoy first-mover trust advantages.
Leaders should allocate budget for continuous monitoring, external audits, and staff upskilling.
Professionals eager to deepen their policy insight can secure competitive edge by earning the AI+ Ethics™ credential.
Finally, subscribe to our newsletter for fresh analysis, and position your organisation ahead of Europe’s rapidly evolving AI market.