Executive Summary
The FDPIC takes a position on the risks of generative AI on the occasion of International Data Protection Day 2026. Core thesis: the current Data Protection Act is technology-neutral and applies directly to AI applications, provided that personal data is processed. The focus is on transparency as a prerequisite for informational self-determination in the fields of health, education, and work. Recent case closures show the practical application: Coop's video surveillance was found data protection compliant, and X/Grok must offer users a right of objection.
Persons
- Joël de Montmollin (Head of Health/Work/Education Department, FDPIC)
Topics
- Generative Artificial Intelligence
- Data Protection Act (DSG)
- Informational Self-Determination
- Transparency Requirements
- AI in Video Surveillance
- AI Training with Personal Data
Clarus Lead
Switzerland regulates the use of generative AI not through new laws, but through direct application of the Data Protection Act in force since September 2023. This is the central message of the FDPIC for International Data Protection Day 2026. The critical question is: Is personal data being processed? If so, the DSG applies immediately, regardless of whether a conventional algorithm or a generative AI model is involved. This creates legal certainty, but presents organizations and authorities with practical implementation challenges.
Clarus Performance
Clarus Research: The FDPIC closed three central cases on generative AI between March and October 2025 (Coop video surveillance, X/Grok training, DSG principle update). These demonstrate that the authority already subsumes AI applications under existing law – without waiting for new regulations.
Classification: The technology-neutral approach of the DSG benefits legal certainty but burdens implementation practice. Organizations must independently assess which AI scenarios fall under data protection law and which transparency measures are required.
Consequence for Decision-Makers: Organizations in health, education, and work must immediately review their AI governance. Critical areas are: (1) transparency regarding AI use toward affected persons, (2) documentation of data flows in AI pipelines, (3) objection mechanisms for AI training.
Detailed Summary
Legal Framework: DSG Directly Applies to AI
The Federal Data Protection Act (DSG) does not distinguish between "traditional" data processing and AI-based processing. This equal treatment is deliberately chosen: the legislator did not want to write new rules with every technological cycle. As long as no personal data is involved, AI research or basic AI development does not fall under the DSG – and is not within the FDPIC's jurisdiction. However, as soon as personal data is processed, the DSG applies immediately.
This principle has an immediate practical effect: there is no "AI grace period" and no "experimentation space" outside data protection. Every organization that feeds customer data, employee data, or health data into an AI system – for analysis, training, or decision-making – must comply with DSG rules.
Transparency as Core Obligation
The FDPIC emphasizes transparency as a central principle. This is not merely a technical requirement but a prerequisite for informational self-determination: people must know that they are interacting with an AI and must be able to understand how their data is used. This is particularly critical for prompts entered into language models, where users may disclose sensitive information whose subsequent storage and use remain unclear.
The transparency obligation means concretely:
- Disclosure of AI use (e.g., "Your diagnosis was created with the support of an AI system")
- Explanation of which data flows into the AI
- Clarity about possible storage or further processing (e.g., for AI training)
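The disclosure items above can be captured in a machine-readable record that renders a plain-language notice for affected persons. The following is a minimal sketch; the field names, class, and example values are illustrative assumptions, not official DSG terminology or an FDPIC requirement.

```python
from dataclasses import dataclass


@dataclass
class AIDisclosure:
    """Illustrative transparency record; field names are assumptions."""
    system_name: str        # which AI system is involved
    purpose: str            # why personal data flows into it
    data_categories: list   # which data is processed
    used_for_training: bool # possible further processing for AI training
    objection_channel: str  # how affected persons can object

    def notice(self) -> str:
        """Render a plain-language notice for affected persons."""
        if self.used_for_training:
            training = (
                "Your data may be used for AI training; "
                f"you can object via {self.objection_channel}."
            )
        else:
            training = "Your data is not used for AI training."
        return (
            f"This service uses the AI system '{self.system_name}' "
            f"for {self.purpose}. Data processed: "
            f"{', '.join(self.data_categories)}. {training}"
        )


# Hypothetical self-checkout deployment (values are made up)
record = AIDisclosure(
    system_name="checkout-vision",
    purpose="theft detection at self-checkout registers",
    data_categories=["video footage"],
    used_for_training=False,
    objection_channel="privacy@example.ch",
)
print(record.notice())
```

Keeping the disclosure as structured data rather than free text makes it easier to document data flows consistently across AI pipelines, as the governance checklist in this briefing recommends.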
Case Examples: Coop and X Show the Practice
The FDPIC has resolved two prominent cases:
Coop Video Surveillance (October 2025): The cooperative deployed intelligent video cameras at self-checkout registers to detect shoplifting. The FDPIC examined this application and found it data protection compliant. This does not mean that such surveillance is permissible across the board, but that Coop implemented the necessary safeguards (notification, purpose limitation, retention period).
X/Grok Training (March 2025): The platform X (formerly Twitter) wanted to use users' public posts to train its AI model Grok. The FDPIC required that users be able to object. X accepted this and implemented an opt-out right. For Swiss users, the use of their data is therefore not automatic but tied to informed consent or at least to a right of objection.
These cases show that the FDPIC examines AI applications pragmatically but sets high standards for transparency and user rights.
Focus Fields: Health, Education, Work
The conference at the University of Lausanne concentrates on three sensitive sectors:
- Health: AI diagnosis, treatment recommendations, medical image analysis – all of this requires special care, as health data is among the most sensitive.
- Education: Adaptive learning platforms, automated grading systems, student profiling – risk of discrimination and false predictions.
- Work: Automated application screening, AI-based performance evaluation, work time monitoring – questions of fairness and control.
In each of these areas, transparency regarding AI use is essential.
Core Messages
- The current Data Protection Act is formulated in a technology-neutral manner and applies directly to AI applications once personal data is processed.
- Transparency is the central protective measure: affected persons must know that and how an AI uses their data.
- The FDPIC sets practical standards through individual case reviews (Coop, X) rather than waiting for new specialized laws.
- Rights of objection and opt-out options are an important counterbalance to automated data processing.
- The fields of health, education, and work require increased attention when deploying AI.
Stakeholders & Affected Parties
| Stakeholder | Role |
|---|---|
| FDPIC | Supervisory and advisory function; sets standards through case practice |
| Organizations (Coop, X, etc.) | Must demonstrate DSG compliance in AI deployment |
| Users & Patients | Have the right to transparency and objection |
| Educational & Health Institutions | Bear high responsibility for AI-supported decisions |
| AI Developers & Researchers | Must integrate data protection into development |
Opportunities & Risks
| Opportunities | Risks |
|---|---|
| Legal Certainty Through Technology-Neutral Rules: Organizations do not have to wait for new specialized laws but can rely on existing law. | High Implementation Burden: Many organizations still lack sufficient understanding of DSG requirements and need support. |
| Pragmatic Case Review: The FDPIC signals openness to AI innovation when data protection is maintained. | Opaque Black Boxes: AI models are often not explainable; the transparency obligation cannot always be fully met. |
| User Rights (Objection, Opt-out): The DSG gives affected persons concrete objection options. | Data Security of Prompts: When users enter sensitive data into AI systems, there is a risk of misuse or leakage. |
| Competitive Advantage for Responsible Actors: Organizations with high transparency standards gain trust. | International Fragmentation: Different national regulations (EU AI Act vs. Swiss DSG) complicate global AI use. |
Action Relevance
For Organizations in Health, Education, Work:
Immediate DSG Audit for AI Projects:
- Which personal data flows into the AI system?
- Is there a legal basis (contract, consent, obligation)?
- How long is data stored?
Implement Transparency Measures:
- Affected persons must learn that AI is being used
- Document how AI decisions are made
- Offer objection options
Risk Mitigation:
- Test AI models for bias and discrimination
- Establish control processes for high-risk decisions (e.g., diagnoses, application screening)
- Train employees in AI governance
Observation Indicators:
- Has the FDPIC taken a position on your sector/application?
- Are there new preliminary clarifications or investigations?
- How do competitors regulate transparency and user rights?
- What international standards (EU AI Act) could influence Swiss practice in the future?
Quality Assurance & Fact-Checking
- [x] Central statements verified: DSG has been in force since 1.9.2023 and is technology-neutral (Source: FDPIC website)
- [x] Case closures verified: Coop (2.10.2025), X/Grok (20.3.2025), DSG Update (8.5.2025)
- [x] Speaker identified: Joël de Montmollin, Head of Health/Work/Education Department at FDPIC
- [x] Historical context correct: "Convention 108" of 28.1.1981, International Data Protection Day since 2007
- [x] No unverified marks – all statements taken from source
Supplementary Research
⚠️ No additional sources provided in metadata. Recommended external research:
- Official FDPIC Report on AI Activities 2025: Detailed analysis of Coop and X cases
- European AI Act (2024): Comparison with Swiss approach; legal implications for Swiss organizations
- Sector-Specific Data Protection Guidelines: Especially for health and education (e.g., from professional organizations)
Source List
Primary Source:
Data Protection Day 2026: Use of Generative AI and Challenges for Data Protection – Federal Data Protection and Information Commissioner (FDPIC), January 28, 2026
Supplementary Sources (cited in primary source):
- Closure of Preliminary Clarification Coop: Use of Intelligent Video Surveillance Cameras – FDPIC, October 2, 2025
- Closure of Preliminary Clarification X (formerly Twitter): Use of Personal Data for AI Training Grok – FDPIC, March 20, 2025
- Update – Current Data Protection Act is Directly Applicable to AI – FDPIC, May 8, 2025
- Council of Europe Convention on Data Protection (Convention 108) – Council of Europe, January 28, 1981