EU AI Act Amendments: New Compliance Timelines and Strategic Shifts
A Readiness Trigger Pushes Enforcement Beyond 2026
The European Commission’s newly proposed amendments to the EU AI Act introduce a significant shift in how—and when—organizations will be expected to comply with high-risk AI obligations.
While the Act was originally set to apply to high-risk systems as early as August 2026, the Commission now proposes a “readiness trigger” mechanism: obligations will only begin once Brussels confirms that harmonized standards, technical specifications, and support tools are actually in place. That confirmation is expected sometime before December 2027, but the exact timing remains unknown.
What This Means in Practice
1. Compliance timelines are delayed, not diminished
Obligations for high-risk AI under Annex III (employment, credit scoring, law enforcement, and similar use cases) would apply six months after the Commission’s formal readiness decision; Annex I product-based high-risk systems would follow twelve months after. If readiness is delayed, application could slip as late as the longstop dates of December 2027 (Annex III) and August 2028 (Annex I).
This creates a moving compliance target—one that organizations should treat as an opportunity, not a pause button.
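To make the mechanics concrete, here is a minimal Python sketch of the readiness-trigger arithmetic as described above. The function name, the exact longstop days, and the simplified month arithmetic are illustrative assumptions, not part of the proposal text.

```python
from datetime import date
from typing import Optional

# Illustrative sketch of the proposed "readiness trigger" described above:
# obligations apply a fixed grace period after the Commission's readiness
# decision, but no later than a longstop date. Names and exact longstop
# days are assumptions for illustration only.

def application_date(readiness_decision: Optional[date],
                     grace_months: int,
                     longstop: date) -> date:
    """Earlier of (readiness decision + grace period) and the longstop date."""
    if readiness_decision is None:
        return longstop  # readiness never confirmed: the longstop applies
    # Simplified month arithmetic; assumes the day exists in the target month.
    months = readiness_decision.month - 1 + grace_months
    triggered = date(readiness_decision.year + months // 12,
                     months % 12 + 1,
                     readiness_decision.day)
    return min(triggered, longstop)

# Annex III: six-month grace period, December 2027 longstop (illustrative day)
print(application_date(date(2026, 9, 1), 6, date(2027, 12, 31)))  # 2027-03-01
# Annex I: twelve-month grace period, August 2028 longstop (illustrative day)
print(application_date(None, 12, date(2028, 8, 31)))              # 2028-08-31
```

The key point is the `min()`: a readiness decision can pull the dates forward, but the longstops cap how far they can slip.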
2. Expanded obligations highlight the centrality of data governance
The amendments reinforce what practitioners already know: trustworthy AI is fundamentally a data problem. Changes include:
- Simplified obligations for SMEs, broadened to cover mid-caps
- Centralized oversight of systems built on general-purpose models
- Expanded regulatory sandboxes (including an EU-level sandbox from 2028)
- Permission to process special-category data for bias detection and correction under safeguards
- Reduced registration obligations for low-risk AI
The direction is clear: data quality, lineage, labeling, transparency, and ongoing monitoring remain core pillars of AI compliance.
3. The Omnibus reforms bring GDPR and AI governance even closer together
With updates to the definition of personal data, pseudonymization standards, DPIA expectations, breach reporting, cookie rules, and legitimate interest guidance, the Omnibus package further tightens the connection between privacy, security, and AI governance.
Organizations now need unified visibility into (see the sketch after this list):
- What data they have
- How it’s used in AI systems
- Its lawful basis
- Its risks
- And how to prove that across jurisdictions
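As a rough illustration of what that unified view might look like in practice, here is a hypothetical record sketched in Python. The field names (lawful_basis, jurisdictions, evidence, and so on) are assumptions for illustration, not a BigID schema or a requirement set out in the Act or GDPR.

```python
from dataclasses import dataclass, field

# Hypothetical data-asset record capturing the five visibility questions above.
# All field names are illustrative assumptions, not a prescribed schema.

@dataclass
class DataAssetRecord:
    name: str                 # what data the organization holds
    categories: list[str]     # e.g. ["personal", "special-category"]
    ai_systems: list[str]     # which AI systems consume this data
    lawful_basis: str         # lawful basis for the processing
    risks: list[str]          # identified privacy, bias, or security risks
    jurisdictions: list[str]  # where obligations attach (EU, UK, US, ...)
    evidence: dict[str, str] = field(default_factory=dict)  # links to DPIAs, audits, model cards

# Example entry for a hypothetical hiring dataset feeding an Annex III use case
cv_corpus = DataAssetRecord(
    name="applicant_cv_corpus",
    categories=["personal"],
    ai_systems=["candidate-screening-model"],
    lawful_basis="legitimate interest",
    risks=["bias in employment decisions"],
    jurisdictions=["EU"],
    evidence={"dpia": "https://intranet.example/dpia/cv-corpus"},
)
```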
BigID’s Take: Start Building the Foundation Now
Why Delay Shouldn’t Mean Inaction
Even with delayed timelines, the organizations that will be ready for high-risk AI obligations are the ones investing today in:
- Comprehensive data discovery and classification across all environments
- Data quality and integrity controls
- Governance frameworks for training data, model inputs, and output monitoring
- Clear documentation and auditability
- Automated risk assessments tied to both GDPR and AI Act requirements
BigID customers are already using unified data intelligence to get ahead of emerging AI governance frameworks from regulators across the EU, UK, US, and beyond.
Key Takeaway: Trustworthy Data is the Foundation for Compliant AI
The AI Act amendments signal something important:
- Regulators may be adjusting timing—but not expectations.
- Trustworthy, transparent, high-quality data remains the foundation of compliant AI.
Now is the time to build that foundation.
