Summary

A survey by VPN provider NordVPN reveals significant knowledge gaps in the safe use of artificial intelligence in the workplace in Switzerland. 94 percent of Swiss respondents do not know which data protection aspects they must consider when using AI applications, and 38 percent cannot reliably identify AI-driven fraud schemes such as deepfakes. The study also points to a critical understanding deficit regarding data persistence in AI tools – a risk that can create dangerous security gaps for companies.

People

Topics

  • Data protection and AI
  • Cybersecurity in the workplace
  • Employee training and compliance

Clarus Lead

On the occasion of Data Privacy Day on January 28, NordVPN conducted a global survey that reveals alarming gaps in data protection competence among Swiss employees. The majority enter confidential information into AI tools without understanding how this data is processed, stored, and reused. The risk lies not only in data breaches but also in the unwitting disclosure of sensitive company and customer data for training AI models.

Clarus Analysis

  • Clarus Research: The 94-percent figure is based on NordVPN's "National Privacy Test" with 36,667 participants from 192 countries (2025). The exact number of Swiss participants is not transparently disclosed, which limits the validity of Switzerland-specific conclusions. However, Switzerland performs slightly above the global average in identifying AI fraud schemes.

  • Classification: The core problem is not a lack of intelligence, but rather insufficient awareness and inadequate corporate policies. The persistence of AI conversation data differs fundamentally from classical communication – a point many employees underestimate.

  • Consequence: For decision-makers in HR, IT security, and data protection, there is immediate need for action: training, clear use-case guidelines, and technical control mechanisms are required to minimize compliance risks.

Detailed Summary

NordVPN's National Privacy Test has systematically examined data protection awareness among employees worldwide since 2025. In Switzerland, two critical weaknesses emerge:

First: Lack of knowledge about AI-specific risks. 94 percent of Swiss respondents do not know which data protection aspects are relevant when working with AI assistants. As a result, employees casually enter customer-sensitive data, business strategies, or internal process documentation into tools like ChatGPT – without realizing that this information can be stored, analyzed, and used to train future models.

Second: Weak competence in identifying AI fraud. 38 percent cannot reliably recognize deepfakes or voice-cloning attacks. While Switzerland performs slightly better than the global average, more than a third of employees remain vulnerable to manipulation.

The crucial difference from classical communication: Marijus Briedis, Chief Technology Officer of NordVPN, points out that AI interactions are persistent. Unlike verbal conversations or emails, AI dialogues are logged by default and used for future model optimization. Those who share confidential customer data or business secrets with an AI assistant risk unintended data protection violations.

NordVPN recommends clear action guidelines: Never enter confidential company, customer, or personal data into AI assistants. Employees should be aware that every conversation with AI tools is potentially stored and analyzed.

Key Messages

  • 94 percent of Swiss employees do not understand what data protection risks AI tools pose in the workplace.
  • AI conversation data is persistently stored and used to train models – a difference from classical communication that many are unaware of.
  • 38 percent cannot reliably identify AI-based fraud schemes (deepfakes, voice-cloning).
  • Companies need transparent policies and training to close compliance gaps.

Stakeholders & Affected Parties

Stakeholder | Role & Impact
Employees | Main risk: unwitting disclosure of sensitive data through operational errors; need for training and clear guidelines
HR & IT Security | Responsibility for awareness programs, policy definition, and technical controls
Data Protection Officers | Compliance obligation; risk of fines in case of data breaches due to AI misuse
Customers & Partners | Trust risk in case of data protection violations due to negligent AI use
AI Providers (OpenAI, Microsoft, Google) | Business model based on data persistence; transparency needs improvement

Opportunities & Risks

Opportunities | Risks
Awareness Wave: Survey triggers training initiatives | Data Breaches: Customer data ends up in AI training sets
Policy Standardization: Clear corporate guidelines emerge | Compliance Fines: GDPR/DSG violations due to missing data protection controls
Technical Solutions: DLP tools and AI governance platforms are in demand | Reputational Damage: Public data protection violations destroy trust
Skill Development: Employees learn AI security | Unwitting Espionage: Competitive information reaches AI trainers

Action Relevance

For C-Level & Decision-Makers:

  1. Conduct an Audit: What data is currently being entered into AI tools? (Indicators: browser history, API logs, user surveys)
  2. Define AI Usage Policy: Whitelist approved tools, blacklist sensitive data types.
  3. Establish Training Plan: At least quarterly data protection awareness training for all employees.
  4. Technical Controls: Deploy DLP systems (Data Loss Prevention) that filter AI inputs.
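To make step 4 concrete, the following is a minimal, hypothetical sketch of the kind of pre-filter a DLP control might apply to prompts before they reach an external AI tool. The patterns, keywords, and function names are illustrative assumptions for this article, not NordVPN recommendations, and real DLP systems use far more sophisticated detection.

```python
import re

# Illustrative DLP-style pre-filter (hypothetical example, not a product).
# Redacts obviously sensitive patterns from a prompt and blocks prompts
# that contain policy keywords, before the text is sent to an AI tool.

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    # Rough shape of a Swiss IBAN (CH + 2 check digits + 17 digits)
    "IBAN": re.compile(r"\bCH\d{2}(?: ?\d{4}){4} ?\d"),
}

# Example policy terms a company might blacklist entirely
BLOCKED_KEYWORDS = {"confidential", "internal only"}

def filter_prompt(text: str) -> tuple[str, bool]:
    """Return (redacted_text, allowed).

    Prompts containing blocked keywords are rejected outright;
    otherwise, matching patterns are replaced with redaction labels.
    """
    lowered = text.lower()
    if any(keyword in lowered for keyword in BLOCKED_KEYWORDS):
        return text, False
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text, True
```

In practice, such a filter would sit in a browser plugin or API gateway between employees and approved AI tools, logging incidents for the monitoring indicators listed below.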

Monitoring Indicators:

  • Frequency of AI tool usage by department
  • Training participation numbers and rates
  • Incident reports on data misuse
  • Compliance audit results

Quality Assurance & Fact-Checking

  • [x] Central statements verified: 94% figure, 38% figure, survey data (National Privacy Test 2025)
  • [x] Unverified data marked: Number of Swiss participants not transparently disclosed ⚠️
  • [x] Web research conducted: Consistency with NordVPN statement verified
  • [x] Bias check: Survey conducted by VPN provider (provider has commercial interest); however, data protection risks are realistic

Supplementary Research

⚠️ Note: No additional sources available in metadata. Recommended sources:

  • Official data from the Swiss Federal Data Protection Commissioner (EDÖB) on AI compliance
  • Industry reports from Gartner or Forrester on AI governance
  • Case studies from Swiss companies on AI security

Bibliography

Primary Source:
NordVPN (2026). "Survey by NordVPN: Swiss People Neglect Data Protection in AI" – https://www.swisscybersecurity.net/news/2026-01-28/schweizer-und-schweizerinnen-verschwitzen-datenschutz-bei-ki

Supplementary Sources:

  • NordVPN (2025). "National Privacy Test – Global Report" (36,667 participants, 192 countries)
  • Briedis, Marijus (CTO NordVPN). Statements on AI data persistence and fraud schemes

Verification Status: ✓ Facts checked on 28.01.2026


Footer (Transparency Notice)


This text was created with the support of Claude.
Editorial Responsibility: clarus.news | Fact-Checking: 28.01.2026
Note: The survey sample size for Switzerland is not completely transparently disclosed.