Executive Summary
The Swiss media industry presented a binding code of conduct for handling artificial intelligence on Thursday at the Swiss Media Forum in Lucerne. The code was developed by the Swiss Publishers Association (VSM), SRG SSR, and news agency Keystone-SDA and is to be implemented by year-end. Central elements include transparency obligations, employee training requirements, labeling requirements for AI-generated content, and the establishment of independent ombudsman offices. The Advertising Media Research Institute (WEMF) is additionally introducing a "Responsible AI" audit and certification.
People
- Andrea Masüger (VSM President)
- Hanspeter Kellermüller (Keystone CEO)
Topics
- Artificial Intelligence in Media
- Media Regulation and Self-Regulation
- Journalistic Ethics
- Transparency and Labeling Requirements
Clarus Lead
The Swiss code signals a pioneering role in proactive self-regulation of AI in the media industry, whereas Germany acted only after a scandal at ZDF. The Swiss initiative sets a European benchmark before the EU AI Act makes labeling requirements legally binding in August 2026. For media professionals and regulators, the key point is that Switzerland demonstrates industry consensus is possible without state coercion, a model that could come under pressure if individual companies undercut its standards.
Detailed Summary
The Swiss code rests on four principles: user knowledge, protection of democratic processes, data protection, and transparency. Operationally, media companies commit to training employees who use AI systems and to specially protecting editorial content and confidential data. The public is to be informed on company websites about which AI systems are in use.
Labeling requirements are central: fully AI-generated content, as well as published content that has not been verified (texts, images, audio), must be clearly marked. Content created or edited with AI requires fact-checking and, where necessary, labeling. Interactive AI systems (chatbots) that users might mistake for humans are also subject to this requirement. In parallel, companies are establishing internal AI reporting offices and an independent ombudsman office that publishes annual reports.
Germany, by contrast, does not yet have an industry-wide code. ARD, ZDF, and Deutschlandradio agreed in 2025 on a "human in the loop" approach that guarantees journalistic responsibility through human involvement. This self-commitment followed in part from the debacle at ZDF, where an AI-generated film clip in a heute-journal piece led to the recall of the US correspondent. The scandal underscores implementation gaps between code and practice. Internationally, the German Journalists Association (2023) and the European Federation of Journalists (2025) have also adopted AI principles. As of August 2, 2026, the EU AI Act will make AI labeling legally binding across Europe.
Key Statements
- Swiss Alliance: VSM, SRG SSR, and Keystone-SDA establish an industry-wide self-regulation instrument ahead of state regulation
- Four Pillars: User knowledge, democracy protection, data protection, transparency
- Implementation Pressure: Control through internal reporting offices, independent ombudsman office, and WEMF certification
- Labeling Requirement: AI-generated and AI-edited content must be declared
- European Context: Swiss code creates de-facto standard before EU AI Act 2026
Critical Questions
Source Validity: What concrete sanction mechanisms does the code provide for violations beyond the ombudsman office, and how are they enforced?
Conflicts of Interest: To what extent can self-regulation shaped by major publishers (VSM) also guard against the overuse of AI for cost-cutting?
Causality: Has the German "human in the loop" code actually failed, or was the ZDF scandal an isolated case of poor editorial management?
Feasibility: What will oversight via annual ombudsman reports look like in practice if no fines are provided, only reputational risk?
Data Protection Reality: How concretely will "editorial content and confidential data" be protected when AI is used via cloud services, given that the code names no technical specifications?
Timing Effect: Does the Swiss code come early enough to actually influence the EU, or will it be rendered obsolete by the EU AI Act in 2026?
Industry Consensus Limits: Which Swiss media companies have NOT signed the code, and why?
Sources
Primary Source: Swiss Media Industry Adopts AI Code – heise.de https://www.heise.de/news/Schweizer-Medienbranche-gibt-sich-einen-KI-Kodex-11288213.html
Supplementary Sources (cited in text):
- Council of Europe: Convention on Artificial Intelligence
- EU AI Act (Effective: August 2, 2026)
- German Journalists Association: Paris Charter on AI and Journalism (November 2023)
- European Federation of Journalists: AI Principles (November 2025)
Verification Status: ✓ 2025
This text was created with the assistance of an AI model.
Editorial Responsibility: clarus.news | Author (Original): Tom Sperlich