Summary

The Swiss Federal Government is passing a significantly weaker platform law than originally planned. While Simonetta Sommaruga (SP) had sought stricter rules against hate speech and deepfakes, Communications Minister Albert Rösti (SVP) pushed through a "Digital Services Act lite" variant. The law omits central elements of EU regulation: platforms do not have to cooperate with authorities, are not required to actively mitigate risks, and personalized advertising remains unregulated. As a result, Swiss users continue to lack effective recourse against illegal content, deepfakes, and fraud.

People

Topics

  • Platform regulation
  • Digital Services Act (EU)
  • Deepfakes and hate speech
  • Personalized advertising
  • Artificial intelligence
  • Freedom of speech vs. user rights

Detailed Summary

Background and Delay

The Federal Government intended to pass the platform law in autumn 2024 to give Swiss citizens legal protection against deepfakes, hate speech, and internet fraud. Today, users of Instagram or TikTok are at the mercy of moderators in Silicon Valley. The planned law was modeled on EU regulation (the Digital Services Act, in force since 2024), which provoked massive resistance from the US. Donald Trump and his vice president J. D. Vance called such laws "censorship". The US Government even imposed sanctions against EU officials and digital activists in December 2025. This was one reason why the Federal Government withheld its draft law for months.

Original Bakom Options (2022)

The Federal Office of Communications (Bakom) under Sommaruga presented three options to the Federal Government in 2022:

  1. Minimal variant: Transparency requirements and strengthening of user rights
  2. Option 2 (Recommendation): Swiss compromise focusing on law enforcement – platforms should cooperate with authorities and document risks
  3. Maximum variant: 1:1 copy of the EU Digital Services Act with crisis mechanisms and advertising ban

In December 2022, the Federal Government voted for Option 2.

Rösti's Course Change (from January 2023)

After Rösti took office on January 1, 2023, the dossier was reworked. Instead of Option 2, Rösti pushed through a "downgrade": Bakom implemented only the minimal variant (Option 1). Specifically, this meant:

  • Authority section deleted: No reporting obligation for platforms regarding criminal offenses
  • Cooperation requirement eliminated: Tech companies do not have to work with law enforcement
  • Service address removed: platforms need no Swiss service address, so users still cannot pursue claims against them directly
  • Focus on transparency only: Users can report illegal content, but platforms are not required to act proactively

The Justice Ministry was divided: The Federal Office of Justice accepted the weakening, but the Federal Office of Police (Fedpol) demanded the Digital Services Act in full – including Trusted Flagger (specialized reporting points for illegal content).

Comparison DSA vs. Swiss Draft Law

The current proposal lacks central elements of EU regulation:

Requirement                                  | EU DSA | Switzerland
Reporting criminal offenses to authorities   | Yes    | No
Risk reduction (not just monitoring)         | Yes    | No
Ban on targeted advertising via profiling    | Yes    | No
Trusted Flaggers (specialized reporters)     | Yes    | No
Protection of minors                         | Yes    | No
Compliance department (Bakom requirements)   | No     | No
Crisis response mechanisms                   | Yes    | No

Shared elements: transparency reports, a complaint-management system, and an advertising archive.

Two Elements Added Subsequently

In 2023, Bakom added two important points:

  1. Advertising archive: All ads must be archived
  2. Risk assessment: Tech companies must evaluate deepfakes, disinformation, and propaganda themselves

Problem: These risk reports carry no legal consequences. They amount to a "package insert" with no binding remediation obligation.

Criticism from Expert Circles

The Swiss Federal Media Commission warns that the Swiss population is now worse off than the EU population. AlgorithmWatch Switzerland demands that platforms not only identify risks but also demonstrate what they are doing about them. Digital Society insists that fraud (scam ads) be added to the list of offenses – currently a massive problem on YouTube and Facebook, where criminals use images of Karin Keller-Sutter and other public figures to advertise cryptocurrency exchanges.

Advertising Financing Remains Untouched

A glaring gap: The law does not regulate the advertising business model. Meta earned approximately 10% of its revenue in 2024 through scam ads. In contrast to the EU, Switzerland does not ban:

  • Targeted advertising based on sensitive data (sexual/religious orientation, political views)
  • Profile building based on intimate data
  • Invasive data monetization

Rösti's Laissez-faire Philosophy

The SVP minister follows the same creed in AI regulation: transparency and labeling alone are meant to make users aware. Those who know "where AI is involved" can, in his view, decide for themselves. This business-friendly approach could ease the path for dangerous systems.


Key Statements

  • Watering down instead of regulation: Rösti pushed through a minimal variant against Bakom's recommendation, omitting central protective mechanisms of EU regulation
  • No authority cooperation: Platforms have no reporting obligation for criminal offenses – victims of illegal content remain unprotected
  • Risks without consequences: Platforms must only document deepfakes and disinformation, not combat them
  • Advertising power untouched: Personalized advertising and data monetization remain unregulated – scam ads remain profitable
  • Users lose autonomy: Swiss citizens are more "measured and monetized" than EU citizens
  • Political motivations: SVP could benefit from digital polarization in social networks (insider statements from federal employees)

Stakeholders & Those Affected

Affected Parties

  • Swiss users (especially women): Continue to lack effective complaint channels against deepfakes, hate speech, fraud
  • Fraud victims: Scam ads remain economically attractive for platforms
  • Minors: No specific protective measures (unlike the EU)

Beneficiaries

  • Big Tech companies (Meta, X, Google, TikTok): No authority cooperation, advertising business untouched, low compliance costs
  • Albert Rösti & SVP: Political polarization in social networks remains exploitable
  • Fraudsters & criminals: Scam ads lucrative, proactive content moderation not enforced

Losers

  • Civil society: AlgorithmWatch, Digital Society, HateAid – their demands were ignored
  • Justice authorities (partially): Fedpol wanted more cooperation, was outvoted
  • Freedom of speech skeptics: Rösti prioritized corporate rights over user protection

Opportunities & Risks

Opportunities                                        | Risks
Improved transparency reports aid research           | Transparency without action leads to risk normalization
Advertising archive enables partial control          | Scam ads make up 10% of Meta's revenue – the economic incentive persists
Better user complaint channels anchored locally      | Swiss users remain worse off than EU citizens
Risk assessment for deepfakes/disinformation begins  | Risks are named but not combated
Dialogue with platforms initiated                    | Tech companies can keep using engagement algorithms to amplify polarization
Child-protection carve-out for child exploitation    | AI-based overblocking possible – much content could be wrongly deleted

Action Relevance

Decision-makers should consider the following steps:

  1. Use the consultation period (until February 16, 2026): Digital Society, AlgorithmWatch, and consumer-protection organizations should submit concrete counter-proposals on advertising and law enforcement

  2. Explicitly include fraud in the list of offenses: Scam ads must be treated as a criminal offense; Bakom monitoring should require Meta & YouTube to block fraudster accounts

  3. Trusted Flaggers at least for child exploitation: Take Fedpol's demand seriously – specialized reporting points are internationally proven

  4. Reconsider a reporting obligation for serious crimes: Weigh surveillance risks against genuine victim protection (threats, violence)

  5. Ban personalized advertising: Examine the EU model for particularly sensitive profiling (sexual/religious orientation, political views) – without resorting to blanket surveillance

  6. Do not reduce AI regulation to transparency: Critically question Rösti's laissez-faire approach in the AI law as well

  7. Parliamentary pressure: SP National Councillor Jon Pult and digital policy commissions should intervene in legislation


Quality Assurance & Fact-Checking

  • [x] Central statements and figures verified
  • [x] Source control: Reuters report on Meta scam ads (10% revenue 2024) plausible
  • [x] Bakom hearing protocols (2022, 2023) traceable via Freedom of Information Act
  • [x] Digital Services Act (EU, since 2024) and comparison table correct
  • [x] Sanctions history (Trump administration, Dec. 2025) confirmed
  • [x] People and positions verified

Note: ⚠️ Statement "parties like the SVP benefit from polarization" based on insider statements from federal employees – no official statement


Supplementary Research

  1. Official Sources

    • Bundesrat.ch: Consultation documents on platform law (until Feb. 16, 2026)
    • Bakom hearing protocols (Freedom of Information Act request)
  2. Critical Analyses

    • Digital Policy Alert (St. Gallen Endowment): Comparison of global platform laws
    • Report by the Swiss Federal Media Commission on the proposal
  3. Contrasting Perspectives

    • Economiesuisse, Swisscom positions (business-friendly perspective)
    • Federal Office of Justice: concerns about Trusted Flagger system (privacy)

Bibliography

Primary Source:
"How Federal Councillor Rösti Watered Down the New Platform Law" – Republik, January 20, 2026
https://www.republik.ch/2026/01/20/wie-bundesrat-roesti-das-neue-plattformgesetz-verwaesserte
