Summary

Angela Müller, executive director of AlgorithmWatch Switzerland, argues in an SRF radio interview that deepfakes are not the actual problem, but rather a symptom of societal ills such as sexism and misogyny. The technology enables sexualized violence, identity abuse, and fraud at the push of a button. The Federal Council has initiated regulation of communication platforms, but according to Müller, it is too mild and does not adequately address generative AI applications.

Persons

  • Angela Müller (Executive Director AlgorithmWatch Switzerland)
  • David Karasek (SRF Moderator)

Topics

  • Deepfake technology
  • Sexualized violence online
  • Platform regulation
  • Artificial intelligence and society
  • Cybercrime

Clarus Lead

The central tension lies in the political response: although Federal Councillor Karin Keller-Sutter and several Swiss influencers have themselves become victims of deepfake pornography, Müller criticizes the planned regulation as insufficient. It primarily targets content distribution through platforms but does not capture integrated AI tools with which users can directly create and distribute deepfakes. This reveals a regulatory gap that grows with increasing technological convergence.

Detailed Summary

Müller emphasizes that sexualized deepfakes are not a new phenomenon: sexualized violence has existed for a long time. However, the technology radically lowers the threshold. An AI program can turn a harmless photo into a pornographic image within seconds, and the result spreads through global networks at millisecond speed. Particularly brutal for victims is their helplessness when it comes to deletion; once uploaded, the material takes on a life of its own that is practically impossible to stop.

Müller describes a multi-level system of responsibility: perpetrators use specialized apps (some advertised with the slogan "With us you can undress anyone"), app stores distribute them, and social media platforms as well as search engines amplify the spread. Recently, the line between platforms and tool providers has blurred, as platforms build AI generators directly into their services. The planned Swiss regulation focuses on distribution platforms, not on generative tools, which is strategically insufficient given increasingly integrated ecosystems.

Müller rejects deterministic arguments: the borderless nature of the technology does not make regulation impossible. As with organized crime, enforcement hurdles do not justify forgoing legislation altogether. One concrete approach is to require app stores not to offer such applications. EU regulation could serve as a benchmark for Switzerland.

Key Statements

  • Deepfakes are a catalyst for existing societal ills (sexism, sexualized violence), not their cause
  • The planned Swiss platform regulation fails to capture integrated AI generators and risks failure as a result
  • Multi-level responsibility is distributed across app developers, app stores, platforms, and users – all must be held accountable

Critical Questions

  1. Evidence: Müller claims deepfake pornography is a "massive problem" – are there quantitative data on prevalence in Switzerland, or are these qualitative observations?

  2. Conflict of Interest: To what extent might AlgorithmWatch as an NGO with an advocacy mandate have an interest in maximizing the threat posed by AI to justify regulatory measures?

  3. Causality: Müller separates technology from sexism. But could technology, by lowering thresholds, also activate new perpetrators who would not act without it (an activation effect rather than a mere catalyst)?

  4. Feasibility: How realistic is it that app stores in Switzerland will block such apps when users can circumvent them through web access or sideloading? Where is the practical enforcement limit?

  5. Alternative Regulation: Müller criticizes Swiss regulation as too weak. But could a focus on strengthening criminal law (e.g., a new article on non-consensual deepfakes) not be more effective than platform compliance, which is scarcely enforceable internationally?

  6. Societal Response: Müller calls for prevention and "normalization reduction" – how should these be concretely operationalized? Curricula? Campaigns? Who bears the costs and responsibility?


Source Index

Primary Source: Tagesgespraech_radio_AUDI20260428_NR_0022 – SRF Audio (download-media.srf.ch)

Verification Status: ✓ 2026-04-28


This text was created with the support of an AI model. Editorial responsibility: clarus.news | Fact-checking: 2026-04-28