Summary

The AI company Anthropic accidentally released the source code of its AI agent Claude Code at the end of March 2026. The cause was a chain of two operational errors: an internal configuration file referencing the source code server was included in a public update, and the server itself had no password protection. Despite copyright lawsuits, the code spread virally on the internet, and over 100,000 developers have already created their own versions. Claude Code is Anthropic's core product for automated programming and is expected to generate approximately 2.5 billion dollars in revenue in 2026.

People

  • Dario Amodei (CEO, Anthropic)
  • Florian Tramèr (ETH Zurich)
  • Peter Steinberger (formerly Anthropic, now OpenAI)

Topics

  • AI safety and data protection
  • Corporate security culture
  • Source code leaks and copyright
  • Geopolitical tensions (USA-China, Pentagon-Anthropic)

Clarus Lead

The leak hits Anthropic at the worst possible time: the company positions itself as a pioneer of responsible AI security and uses this argument against competitors like OpenAI. At the same time, the conflict with the Trump administration over autonomous weapons is intensifying – a government decision classified Anthropic as a supply chain risk. A series of security breaches (including 3,000 unprotected documents found by Fortune researchers) undermines the credibility of this security message and could jeopardize long-term business partnerships, although user dependence on Claude Code initially remains stable.


Detailed Summary

The error resulted from inadequate release processes: during an update on the night of March 31, the publicly distributed file package contained a configuration file that should have remained internal. That file pointed to the second critical error: a server storage location without password protection that held the complete Claude Code source code. The leaked material revealed the so-called "Harness" of Claude Code: the orchestration software that controls the AI models, manages files, and distributes subtasks.
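What such a failure chain looks like in practice can be made concrete with a small example. The following sketch shows a hypothetical pre-release check that scans a file package for internal configuration files and for embedded URLs that answer an anonymous request; the naming conventions, paths, and function names are assumptions for illustration, not Anthropic's actual release tooling.

```python
# Hypothetical pre-release check (illustrative, not Anthropic's tooling):
# flag internal config files in the bundle and embedded URLs that respond
# to an anonymous request, i.e. endpoints without password protection.
import pathlib
import re
import urllib.error
import urllib.request

# Assumption: internal files follow a naming convention such as "*.internal.yaml".
INTERNAL_PATTERNS = ("*.internal.yaml", "*.internal.json", ".env")
URL_RE = re.compile(r"https?://[^\s\"']+")


def find_internal_files(bundle_dir: str) -> list[pathlib.Path]:
    """Return files that should never ship in a public release."""
    root = pathlib.Path(bundle_dir)
    return [p for pattern in INTERNAL_PATTERNS for p in root.rglob(pattern)]


def find_open_endpoints(bundle_dir: str) -> list[str]:
    """Return embedded URLs that return HTTP 200 without any credentials."""
    exposed = []
    for path in pathlib.Path(bundle_dir).rglob("*"):
        if not path.is_file():
            continue
        for url in URL_RE.findall(path.read_text(errors="ignore")):
            try:
                # Anonymous GET: a 200 means no password or token is required.
                with urllib.request.urlopen(url, timeout=5) as resp:
                    if resp.status == 200:
                        exposed.append(url)
            except (urllib.error.URLError, ValueError):
                pass  # unreachable or properly protected endpoints are fine
    return exposed


if __name__ == "__main__":
    internal = find_internal_files("./release_bundle")
    open_urls = find_open_endpoints("./release_bundle")
    if internal or open_urls:
        raise SystemExit(f"Release blocked: internal files {internal}, open endpoints {open_urls}")
```

A check of this kind would have to catch both failure points before a package is published: the internal file itself and the unprotected server it points to.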

Florian Tramèr from ETH Zurich diagnoses a "fragmented security culture" typical of fast-growing Silicon Valley startups. The leak also revealed new, not yet released features: more autonomous AI agents with automatic background routines and expanded access permissions to user data. These resemble technologies developed by Peter Steinberger (formerly Anthropic, now OpenAI) and increase the risk of data breaches. Claude Code already collects substantial amounts of data: private subscriptions store data for 30 days, and for up to five years with training consent.
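Why expanded access permissions structurally increase this risk can be illustrated with a minimal sketch of a permission-scoped file tool; the class and directory names are assumptions for illustration and do not reflect Claude Code's actual interfaces.

```python
# Minimal sketch (illustrative, not Claude Code's actual API): a file-read
# tool whose exposure is bounded by an explicit allowlist of directories.
import pathlib


class ScopedFileReader:
    """Agent tool that refuses to read paths outside the granted directories."""

    def __init__(self, allowed_dirs: list[str]):
        self.allowed = [pathlib.Path(d).resolve() for d in allowed_dirs]

    def read(self, path: str) -> str:
        target = pathlib.Path(path).resolve()
        if not any(target.is_relative_to(d) for d in self.allowed):
            raise PermissionError(f"{target} is outside the granted scope")
        return target.read_text()


# Narrow grant: an interactive session limited to a single project directory.
project_reader = ScopedFileReader(["./my_project"])

# Broad grant: a background routine given the whole home directory -- every
# additional readable path is additional data that a bug, a prompt injection,
# or a leak of the orchestration code can expose.
background_reader = ScopedFileReader([str(pathlib.Path.home())])
```

The broader the default grant handed to automatic background routines, the larger the amount of user data at stake when something in the orchestration layer goes wrong.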

The viral spread occurred through a legal trick: a developer used OpenAI's Codex to rewrite the leaked version in a different programming language, thereby circumventing U.S. copyright law, and the result collected 50,000 stars on GitHub within two hours. Over 100,000 users have created forks. Anthropic cannot prevent the spread.
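Distribution figures of this kind can be read from GitHub's public REST API; the snippet below is illustrative only, and the owner and repository names are placeholders rather than the actual leaked project.

```python
# Illustrative: read public distribution metrics (stars, forks) for a
# repository from GitHub's REST API. Owner and repo names are placeholders.
import json
import urllib.request


def repo_metrics(owner: str, repo: str) -> dict:
    """Fetch star and fork counts for a public repository."""
    url = f"https://api.github.com/repos/{owner}/{repo}"
    req = urllib.request.Request(url, headers={"Accept": "application/vnd.github+json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        data = json.load(resp)
    return {"stars": data["stargazers_count"], "forks": data["forks_count"]}


if __name__ == "__main__":
    print(repo_metrics("example-owner", "example-repo"))  # placeholder names
```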


Key Points

  • Anthropic CEO Dario Amodei lost Pentagon cooperation by rejecting autonomous weapons; the new leaks undermine security as the company's key differentiator
  • The data breach resulted from faulty release processes, not hacker attacks – nonetheless causing reputation damage
  • Claude Code is strategically critical (2.5 billion dollar revenue forecast), but user dependence could provide short-term revenue protection
  • More autonomous AI agents with expanded permissions structurally increase data loss risk

Critical Questions

  1. Data Quality: How did Anthropic verify that only the "Harness" was leaked and that no training data or model parameters were included? Will independent security audits of the leaked content be published?

  2. Conflicts of Interest: Who in Anthropic's management bears responsibility for the lack of password protection policy? Are there differences between publicly communicated and actual security standards?

  3. Causality: Was the release failure isolated or a symptom of a fundamentally overwhelmed DevOps culture at Anthropic? Which other file package versions might contain similar errors?

  4. Feasibility & Risks: Can competitors or malicious actors exploit model weaknesses from the "Harness" code alone, or do they need additional access to training data? How quickly could more autonomous functions (with expanded permissions) lead to uncontrolled data access?

  5. Regulatory Consequences: Will the repeated security breaches trigger government investigations (FTC, CISA)? Could the Pentagon's classification of Anthropic as a supply chain risk be compounded by further partners blocking the company?

  6. User Protection: What transparency does Anthropic provide about how long user data is actually retained and when it is deleted, and about whether automated AI background processes compromise data protection?


Source Directory

Primary Source: "Anthropic Accidentally Releases Source Code of Claude Code", NZZ, 03.04.2026 (Leonid Leiva Ariosa)

Supplementary Sources (mentioned):

  1. The Register (Report on Claude Code data collection)
  2. Fortune (3,000 unprotected Anthropic documents)
  3. GitHub (Distribution metrics of leaked versions)

Verification Status: ✓ 03.04.2026


This text was created with the assistance of an AI model.
Editorial responsibility: clarus.news | Fact-check: 03.04.2026