Summary

The EU AI Act, in force since 2024, enters its decisive implementation phase in 2026. High-risk AI systems must be registered, and transparency mandates for Foundation Models took effect in August 2025. The European Data Protection Board and the European Commission are at odds over regulatory sandboxes and over replacing regulatory oversight with company self-assessments. Fines of up to 35 million euros or 7% of annual turnover are intended to enforce compliance. The central question remains: do self-regulation and technological innovation promote growth, or do they open the door to loopholes?

People

  • Anu Talis – Chair of the European Data Protection Board

Topics

  • EU AI Act implementation
  • High-risk AI systems
  • Foundation Models and transparency
  • Regulatory sandboxes
  • Fines and compliance

Detailed Summary

The EU AI Act stands at a critical turning point in 2026. Since February 2025, real-time biometric categorization has been prohibited. Foundation Models – massive AI models powering everything from chatbots to image generators – have been required since August 2025 to disclose training data summaries and risk assessments. By January 2026, 15 such models had already been reported to regulators.

On 20 January 2026, the European Data Protection Board and the European Data Protection Supervisor issued a joint statement on the European Commission's Digital Omnibus Proposal. They support regulatory sandboxes at the EU level to promote SME innovation. However, they draw red lines: the registration requirement for high-risk AI systems must not be eliminated, as this would undermine accountability.

Chair Anu Talis emphasizes that administrative changes must not jeopardize fundamental rights protection, especially where data protection authorities are expected to participate actively in sandboxes. A central conflict: the EU prefers nationally defined risk categories, while Big Tech lobbies for companies to assess themselves.

In Q1 2026, member states recorded 50 violations, with fines of up to 250 million euros, due to deficiencies in Foundation Models. The Irish Data Protection Commission handles 60% of these cases, as many tech companies are based in Dublin. Italy leads by example with National Law 132-2025 and sets implementation targets for sanctions and training by October.

The European Commission postpones some high-risk obligations from August 2026 to December 2027 – a victory for Big Tech lobbyists. A critical paradigm shift: instead of regulatory classification, companies now decide their own risk category. Critics like Nick Kerenos call this a "real game changer," as organizations are now fully responsible for their own compliance.

For the most serious violations, fines of up to 35 million euros or 7% of annual turnover apply. Even e-commerce platforms with chatbots or dynamic pricing must now conduct audits. Transparency mandates take effect on 2 August.


Core Messages

  • High-risk registration remains mandatory – data protection authorities defend this key requirement against lobbying pressure
  • Self-assessment instead of regulatory oversight – Companies increasingly classify their own AI systems
  • Foundation Models must be transparent – Training data and risks must be disclosed
  • Penalties reach record heights – Up to 7% annual turnover for serious violations
  • Regulatory sandboxes as compromise – Intended to promote innovation, but under data protection supervision
  • Implementation deadline postponed – Complex requirements delayed until December 2027
  • SMEs must build AI literacy – Training for HR tools, lending algorithms and other high-risk systems is mandatory

Stakeholders & Affected Parties

| Stakeholder | Position | Interest |
| --- | --- | --- |
| Tech companies (Big Tech in Dublin) | Lobby for self-assessment | Less regulatory oversight |
| European Data Protection Board | Advocates clear registration requirements | Fundamental rights protection and accountability |
| SMEs and startups | Demand sandboxes and less bureaucracy | Faster innovation with safety guarantees |
| National regulators | Vary between strict and liberal | Fragmented implementation vs. EU harmonization |
| Italy | Pioneer with national law | National control over AI risks |
| E-commerce platforms | Must perform new compliance audits | Rising operating costs |

Opportunities & Risks

| Opportunities | Risks |
| --- | --- |
| European expertise in responsible AI | Companies circumvent rules through self-assessment |
| Regulatory sandboxes promote safe innovation | Fragmented national implementation hampers competition |
| Global standard for AI governance | Brain drain to less regulated markets |
| Fundamental rights protection through transparency | Compliance costs burden SMEs in particular |
| Clear liability rules for AI damage | Delayed implementation until 2027 weakens credibility |

Action Relevance

For corporate management:

  • Create a compliance roadmap by Q4 2026 (high-risk registration, transparency mandates)
  • Build AI literacy programs for staff working with HR, lending, or scoring systems
  • Document audit processes for Foundation Models

For regulators:

  • Harmonize national classifications to avoid fragmentation
  • Equip data protection authorities with resources for sandbox oversight
  • Enforce rigorous audits on self-assessments

For policymakers:

  • Balance innovation and protection – do not let Big Tech lobbying dominate
  • Enforce sanctions credibly (150+ million euros in fines to date)

Quality Assurance & Fact-Checking

  • [x] Key milestones (February 2025 biometric ban, August 2025 transparency, January 2026 notifications) verified
  • [x] Unconfirmed data marked – ⚠️ Exact figures on 50 violations and 250 million euros require validation by independent sources
  • [x] Roles of Anu Talis and Nick Kerenos verified – statements extracted from primary source
  • [x] December 2027 postponement and fine amounts documented as factual from transcript
  • [ ] ⚠️ Detailed case numbers per member state and specific tech company violations require external research

Additional Research

  1. Official EDPB communications – Statement of 20 January 2026 on the Digital Omnibus Proposal
  2. European Commission – AI Act Progress Reports – Implementation status Q1 2026, member state compliance
  3. Irish Data Protection Commission (DPC) Annual Report 2025–2026 – Foundation Model violations, Dublin companies

References

Primary source:
Cabinet Podcast – Episode 01/26/2026 (Audio transcript on EU AI Act implementation)
https://dts.podtrac.com/redirect.mp3/api.spreaker.com/download/episode/69589368/cabinet_01_26_2026.mp3

Supplementary sources:

  1. European Data Protection Board (EDPB) – Statement on the Digital Omnibus Proposal, January 2026
  2. European Commission – EU AI Act: Implementation Guide & Compliance Timeline
  3. Irish Data Protection Commission – Enforcement & Fines Report 2025–2026

Verification status:
✓ Factual milestones from transcript verified on 26 January 2026
⚠️ Fine amounts and case figures require external validation


Footer (Transparency Notice)

This text was created with assistance from Claude.
Editorial responsibility: clarus.news
Fact-checking: 26 January 2026
Transcript ID: 172 | Transcript length: 3,320 characters