Summary
A 75-year-old Swiss man used Elon Musk's AI chatbot Grok to attack Federal Councillor Karin Keller-Sutter with vulgar and sexist insults, and distributed the generated texts on X. Keller-Sutter filed a criminal complaint for defamation and libel. The case reveals legal and regulatory gaps: it remains unclear who is liable – the prompter, the platform, or the AI provider. Switzerland is currently discussing three regulatory models for artificial intelligence, ranging from full adoption of the EU AI Act to sectoral self-regulation.
Persons
- Karin Keller-Sutter (FDP Federal Councillor, Finance Minister)
- Monika Simmler (Criminal Law Professor, University of St. Gallen)
Topics
- AI Regulation in Switzerland
- Liability and Responsibility for AI-Generated Content
- EU AI Act and European Regulation
- Digital Sovereignty and Tech Dependency
Clarus Lead
The Grok-Keller-Sutter affair presents Swiss regulators with a timing dilemma: while the Federal Government is supposed to develop an AI regulatory model by the end of 2026, today's abuse cases can barely be prosecuted. In parallel, global regulatory competition is escalating: the EU punishes X's violations of the Digital Services Act with heavy fines, while the Trump Administration pursues a deregulation course. For Switzerland as a small state, this forces a choice between European harmonization and an independent technology path.
Detailed Summary
On the Legal Situation and Liability: Criminal law professor Monika Simmler emphasizes that the person prompting an AI can be criminally liable without committing the insult "with their own hands": the AI system is a tool, not an exoneration. Keller-Sutter filed a complaint against persons unknown, a step that explicitly also puts X's management in focus. A central regulatory problem: international platforms argue they are merely a marketplace, not a publisher; under stricter regulation, that excuse no longer holds.
European Responses: In December 2025, the EU Commission imposed a 120 million euro fine on X for violations of the Digital Services Act (lack of transparency in its advertising archive, refusal of data access for researchers). The EU's AI Act distinguishes three risk tiers: unacceptable risks (e.g. digital surveillance), high risks (e.g. automated applicant screening), and lower-risk applications with lighter obligations. The EU uses its market power of 450 million consumers as leverage: whoever wants to sell in Europe must comply with EU standards.
Swiss Uncertainty: The Federal Government has developed three options: (1) close alignment with the EU AI Act, (2) minimal implementation of the Council of Europe convention on human rights protection, (3) continuation of sectoral regulation (transportation, healthcare, etc.). At a working meeting, terms like "measured regulation," "transparency," and "fairness" set the tone, formulations that suggest softness rather than substance. In reality, Switzerland's dependency on Silicon Valley is massive: all major AI models come from the USA; Europe's great hope Mistral (France) remains a niche project.
Atomic Weapons Comparison: Dario Amodei, co-founder of Anthropic, deliberately referenced "The Making of the Atomic Bomb" to underscore the existential dimension. Like nuclear technology, AI is dual-use: medicine and energy research, but also military applications. The critical difference from nuclear weapons: states control atomic weapons directly, while AI is developed by private corporations that do not share their know-how with the state. A global arms race between China and the USA is already underway; Europe has no independent chip capacity and no technological counterweight.
Positive Application Examples: AlphaFold (protein structure prediction) has catalogued structures for virtually all known proteins; an Australian woman developed a cancer therapy for her dog with AI assistance. At the same time, there is a risk of a market bubble: gigantic sums are flowing into AI research, and if expectations fail to materialize, a financial crash looms.
Key Statements
- Liability Gap: Who is liable for AI abuse: the prompter, the platform, or the provider? Swiss law currently gives no clear answer.
- EU Dominance: Europe regulates aggressively and uses market power; Switzerland must decide whether to follow or go its own way.
- Technological Dependency: All major AI systems come from the USA; European alternatives (Mistral) are underfunded.
- Regulatory Speed Problem: AI develops rapidly; legislation structurally lags behind.
- Dual-Use Risk: AI has medical and military potential; a global arms race is underway.
Critical Questions
(a) Evidence/Data Quality: What empirical evidence shows that Grok generates vulgar texts without having been explicitly prompted or trained for vulgarity? Is this a design feature or a training flaw?
(a) Source Validity: How independent is the Federal Government's working meeting on AI regulation? Were tech corporations or their lobbyists involved who have an interest in weak regulation?
(b) Conflict of Interest: Does Switzerland have an interest in weakly regulating AI because it benefits as a financial center from big tech companies (taxes, jobs)? Does this contradict regulatory goals?
(b) Independence X/Elon Musk: To what extent does Elon Musk's political influence in the USA (proximity to Trump) affect the regulatory discussion in Switzerland?
(c) Causality: Does stronger regulation actually lead to less abuse, or do users simply switch to unregulated platforms?
(c) Counter-Hypothesis: Could the tech industry itself (user reporting, content moderation) be more effective than state regulation?
(d) Feasibility: How is Switzerland supposed to implement the EU AI Act when EU institutions themselves are still defining what "high risk" concretely means?
(d) Side Effect: If Switzerland regulates more strictly than the USA, could this drive away start-ups and hinder innovation – is that an acceptable sacrifice?
Additional News Items
- EU Fine Against X: EU Commission imposed 120 million euro penalty for Digital Services Act violations (advertising archive opacity, research data refusal).
- Swiss AI Initiative: Parliamentary initiative calls for constitutional provision on fundamental rights protection (self-image, freedom of opinion) and platform liability for AI abuse.
Source Directory
Primary Source: Politbüro Podcast – "Artificial Intelligence and Swiss Regulation" – injector.simplecastaudio.com
Verification Status: ✓ 2026-04-21
This text was created with the support of an AI model. Editorial Responsibility: clarus.news | Fact-checking: 2026-04-21