Executive Summary
The European AI Act relies on risk assessment rather than blanket bans – an intelligent regulatory model with one central weakness: legal enforcement. In a discussion between journalist Simin Sadegi, AI scientist Aljoscha Brüchert, and lawyer Ulf Burmeier, it becomes clear that the problem is not missing laws but the lack of practical enforcement against global tech corporations. Europe needs fewer new regulations and more enforcement power.
People
- Ulf Burmeier (Co-host Lage der Nation, former judge)
- Simin Sadegi (Journalist, SR presenter)
- Aljoscha Brüchert (Scientist, DFKI)
Topics
- AI Act and risk assessment
- Legal enforcement vs. regulatory gaps
- Facial recognition in public spaces
- Copyright and AI training
- General Data Protection Regulation as spillover model
Clarus Lead
The EU has chosen a pragmatic path with the AI Act: instead of banning everything, it obliges companies to assess risks and ensure transparency. The problem lies not in the rules themselves but in their enforcement. When Chinese or American corporations ignore the regulations, Europe often remains toothless – unless their business model targets the EU market. For Ulf Burmeier, former judge at the Berlin Regional Court and now a journalist, it is clear: cross-border legal enforcement is the central challenge, not missing legislation.
Detailed Summary
The AI Act: Intelligent Regulation with Gaps
The European approach differs fundamentally from the "Wild West" (USA) or the "Wild East" (China). The AI Act does not work with blanket bans but with a risk-based approach: high-risk applications (medicine, education, critical infrastructure) are subject to strict requirements, while most other systems operate under lighter due diligence obligations. Sanctions can run into the billions – enforcement thus ultimately works through damages.
The catch: military applications and facial recognition for internal security are largely exempt. Burmeier criticizes precisely this as a central weakness – comprehensive facial recognition in public spaces drastically changes social coexistence, yet remains too loosely regulated in the AI Act.
The Enforcement Problem
Here Europe faces its real dilemma. A German court can rule against a British or American company – but enforcing the judgment in Russia or China is practically impossible. Conversely, if the EU obliges Chinese or American firms to comply but cannot enforce compliance, the regulations become meaningless. The only real leverage is then market access and damages lawsuits.
This works because large tech corporations want to do business with EU customers. Like the General Data Protection Regulation (GDPR), Europe's standards radiate globally – not out of love, but out of profit calculation.
Concrete Pitfalls: What Protects Citizens?
The AI Act does not protect through strict bans but through forced action. A company that takes a US language model, retrains it, and then offers it as a tax advice tool becomes a "provider" itself – and must suddenly disclose what training data it used. The catch: it does not know this itself, because its product is built on OpenAI's models. This creates legal uncertainty and forces companies to act – exactly as intended.
Yet tragic cases such as teenage suicides following chatbot interactions show how thin liability becomes. The responsibility was placed on the child, the corporation hid behind liability exclusions in its terms of service, and in the end the parents could claim little more than funeral costs.
Copyright: The Train Has Left the Station
A particularly sharp point: large language models were trained on millions of copyrighted works – probably illegally. Yet nobody wants to "pull the plug" on OpenAI, because competitiveness is treated as the higher good. Burmeier is blunt: the law has already been violated, but it is not being enforced. Labeling requirements for AI-generated content could help, but they do not answer the core question of why artists are not compensated.
Key Statements
Regulation alone is not enough: The AI Act is well designed, but worthless without enforcement. Europe needs not more regulations, but more bite.
Market power instead of morality: Companies comply with EU rules because they want to make money with EU customers – not for ethical reasons.
Facial recognition is the central risk: Mass surveillance in public spaces is a totalitarian power technique. The AI Act regulates this too weakly.
Autonomous weapons systems are a red line: Autonomous military AI decisions must not exist – this requires human control.
Copyright was violated and remains unpunished: AI training on stolen works was and is illegal. Lack of enforcement leads to cultural and economic damage for creatives.
Critical Questions
1. Evidence/Data Quality: How reliable is the statistic that 90% of AI systems are "completely unproblematic"? Who determined this figure and by what criteria?
2. Conflicts of Interest: The EU promotes technological competitiveness on one hand, regulates on the other. How can it simultaneously protect OpenAI and hold it accountable?
3. Causality/Alternatives: Do strict regulations actually lead to startups fleeing, or are financing and ecosystem the real brakes – as Burmeier argues?
4. Feasibility/Risks: If Chinese and US firms ignore EU rules without facing sanctions, why should European startups comply?
5. Edge Cases: When does an IT system with "a little bit of glitter" through a language model itself become an AI system with all regulatory consequences – and who decides that?
6. Enforcement in Multilateral Context: How can the EU sanction Chinese facial recognition or Russian disinformation AI if these countries ignore European judgments?
7. Cultural Damage: Does Europe lose its creative industries if AI training remains legal without compensation – and should that be grounds for stricter laws or more enforcement of existing copyright?
8. Military AI: Why does the AI Act exempt autonomous weapons systems outright rather than regulating them – wouldn't a ban be more honest and more enforceable?
Further News
- Legal History Shows Flexibility: The Federal Constitutional Court invented the fundamental right to "informational self-determination" in 1983 – not present in the Basic Law, but derived from Articles 1 and 2. This shows that courts can develop law further without new legislation being necessary.
Source Directory
Primary Source: [AI and Now – Episode with Ulf Burmeier] – https://storage01.sr.de/podcast/SR3_KUJ_P/SR3_KUJ_P_41533_20260213_164103.MP3
Organizations Involved:
- Saarland Broadcasting (SR)
- German Research Center for Artificial Intelligence (DFKI)
- Society for Civil Rights (GFF)
Verification Status: ✓ 2026-02-20
This text was created with the support of an AI model. Editorial Responsibility: clarus.news | Fact-checking: 2026-02-20