Summary

Shadow AI – the uncontrolled use of AI tools like ChatGPT and DeepSeek by employees – is spreading through companies faster than official IT policies. While IT departments are still developing compliance frameworks, marketers and developers are already adopting unapproved AI solutions on their own. The phenomenon extends classic shadow IT into a new dimension of data outflow and presents companies with significant security and governance challenges.

Topics

  • Artificial Intelligence
  • Shadow IT
  • IT Governance
  • Data Security
  • Corporate Compliance

Clarus Lead

The uncontrolled proliferation of AI tools in the workplace – known as Shadow AI – poses fundamental governance problems for companies. Employees independently use ChatGPT, DeepSeek, and similar systems to accelerate tasks that previously took hours. The phenomenon differs from classic shadow IT in the scale of its uncontrolled data flows and in the low barrier to entry of tools that require no training to use.

Detailed Summary

Shadow AI refers to the use of AI systems in a business context that have not been approved, reviewed, or monitored by the IT department. This form of shadow IT is a continuation of the known practice of uncontrolled software use – but with qualitative differences. The speed of proliferation, the user-friendliness of these tools, and the volume of data that flows daily into external systems create new risks.

Employees adopt these tools primarily for pragmatic reasons: the efficiency gains are significant. Tasks that previously required several hours are solved in seconds. Marketing teams write copy with ChatGPT, and developers use DeepSeek for code generation. The low barrier to adoption – no installation, no training required – makes governance difficult. Companies must weigh increased productivity against loss of control.

Key Findings

  • Shadow AI is the continuation of shadow IT with a new data flow dimension
  • AI tools are used independently by employees, not officially approved
  • Primary drivers are efficiency gains: tasks that previously took hours are completed in seconds
  • Companies must actively bring light to the shadows – through transparency and clear governance frameworks

Critical Questions

  1. Data Quality/Source Validity: What empirical data exists on the extent of Shadow AI in German companies? Is the analysis based on studies or assumptions?

  2. Conflicts of Interest: To what extent do AI providers influence public perception of Shadow AI? Are there incentives to normalize this usage as inevitable?

  3. Causality: Is uncontrolled AI usage a consequence of poor IT governance or a symptom of a legitimate productivity gap that companies cannot officially close?

  4. Alternatives: What governance models exist that combine security and control with innovation speed? Should companies regulate or integrate AI tools more strongly?

  5. Risks/Side Effects: What specific data security and compliance risks arise? How do data losses affect regulatory requirements (GDPR, NIS2)?

  6. Feasibility: How can companies draw the line between protecting employees and surveilling them? Where is the balance between the need for control and a culture of trust?


Sources

Primary Source: Shadow AI: Why AI Makes the Problem Even Bigger – and How Companies Should Handle It – t3n.de, Karsten Zunke, 19.02.2026

Verification Status: ✓ 19.02.2026


This text was created with the support of an AI model. Editorial Responsibility: clarus.news | Fact-Check: 19.02.2026