Summary
The Swiss media industry presents a new AI Code at the Swiss Media Forum in Lucerne to establish uniform rules for artificial intelligence in journalism. The code rests on four principles: user knowledge, protection of democratic processes, data protection, and transparency. All AI-generated content must be labeled, and media companies remain responsible for their publications regardless of how the content was produced. Leading organizations such as the Swiss Media Publishers Association (VSM), SRG, Keystone-SDA, and Ringier support the initiative. An independent ombudsman office that publishes annual transparency reports will provide oversight.
Persons
- Andreas Masüger (VSM President)
- Susanne Wille (SRG Director General)
- Hanspeter Kellermüller (CEO Keystone-SDA)
Topics
- Artificial Intelligence in Media
- Journalistic Ethics and Standards
- Media Regulation and Self-Regulation
- Transparency and Consumer Trust
- Deepfakes and Media Literacy
Clarus Lead
The initiative signals that the Swiss media industry is regulating AI proactively while global debates over algorithms and synthetic content intensify. The code addresses a central trust issue: users must be able to tell whether content comes from humans or from machines, particularly in light of deepfakes and manipulated visual material. With the ombudsman office and the planned WEMF quality label, Switzerland positions itself as a pioneer in responsible AI use while other countries are still drafting their regulatory frameworks.
Detailed Summary
The code requires media companies to enforce strict labeling rules. AI-generated audio or visual content that falsely presents persons or objects as authentic, so-called deepfakes, must be labeled and may not be published if unlawful. All content created or edited with AI requires editorial review before publication. Fully automated, unreviewed AI-generated content must also be labeled, and chatbots and other interactive AI systems that users could mistake for humans require clear marking.
The regulation is supported by a broad alliance: alongside VSM, it is backed by Médias Suisses, Media Svizzeri, the Association of Swiss Private Radio Stations (VSP), the Association of Swiss Private Television Stations (VSPF), Telesuisse, and the MAZ institute. The two-tier control system consists of internal company AI reporting offices as the first point of contact for complaints and an independent ombudsman office that publishes annual transparency reports. The code is designed as a living document and will be updated regularly. In parallel, WEMF is developing a "Responsible AI Audit" and a quality label for certifying responsible AI use. Individual media companies such as Ringier (which owns Blick) have had strict AI guidelines in place since 2023 that, for example, prohibit photorealistic AI images in news sections.
Key Messages
- The AI Code establishes uniform standards for all Swiss media companies; responsibility for content remains with publishers.
- AI-generated and manipulated content must be clearly labeled; deepfakes are generally prohibited.
- A two-tier control system with internal company reporting offices and an independent ombudsman office ensures compliance.
- The regulation is based on the Council of Europe Convention on AI and positions Switzerland as a regulatory pioneer.
Critical Questions
Enforcement & Sanctions: The code defines labeling requirements and ombudsman office procedures, but what sanctions apply to violations? Does the ombudsman office have enforcement powers or only an advisory role?
Technical Detectability: How do editorial offices ensure that subtle AI manipulations (e.g., minimal image editing) are detected? Are there training requirements or technical standards for verification?
Conflicts of Interest in Self-Regulation: The initiative is backed by large publishers such as Ringier. Can smaller, independent media outlets adopt different practices without fearing competitive disadvantages?
International Coherence: Swiss media compete with global platforms and international publishers that apply different standards. How does the code prevent competitive distortion?
Definitional Boundaries: What exactly counts as "AI-generated"? If a journalist uses a text tool (e.g., spell-check, auto-complete), does that require labeling?
Audit Credibility: WEMF is developing the quality label itself. How is external, independent review ensured so that conflicts of interest are excluded?
Source References
Primary Source: Trust Through Responsibility: Swiss Media Industry Agrees on AI Code – https://www.blick.ch/digital/vertrauen-durch-verantwortung-schweizer-medienbranche-einigt-sich-auf-ki-kodex-id21928701.html
Verification Status: ✓ 2024
This text was created with the assistance of an AI model. Editorial Responsibility: clarus.news | Fact-Check: 2024