Summary

For the first time, the US military deployed Anthropic's AI model Claude directly during a combat operation against Venezuelan President Nicolás Maduro, in violation of the company's own terms of service. The operation resulted in bombings and dozens of deaths, breaching Anthropic's explicit prohibition on the use of its AI for violence. The Pentagon is now pressuring Anthropic to cooperate on autonomous weapons systems, while the company seeks to defend its moral principles.

Topics

  • Artificial Intelligence in Military Operations
  • Corporate Ethics vs. Government Contracts
  • AI Regulation and Control
  • Geopolitics and Technology

Clarus Lead

The US military used Claude, Anthropic's AI model, in the operation against Venezuelan President Maduro, and for the first time not only in preparation but also during active combat itself. Deployed forces dropped bombs and killed dozens of people. This directly violates Anthropic's terms of service, which explicitly prohibit use for violence and the "loss of human life". The Pentagon now threatens to terminate its contract with Anthropic unless the company also permits autonomous weapons systems and mass surveillance with Claude. This confronts Anthropic with a central dilemma between its moral claims and its economic dependence on government funding.

Detailed Summary

Anthropic was founded in 2021 to develop AI responsibly. CEO Dario Amodei positions himself as an advocate of stricter regulation and publicly warns of the technology's risks. Claude is considered one of the best AI systems and is favored by corporate clients and programmers. In 2024, the Pentagon signed a $200 million contract with Anthropic, which even extended to the highest classification levels.

The discovery of the Venezuela operation reveals a fundamental conflict: Anthropic is currently valued at $380 billion but is not yet profitable. The company is planning its IPO and depends on lucrative government contracts to justify its high valuation. At the same time, its moral principles, above all protection against the misuse of AI, build trust among corporate clients and AI talent.

The Pentagon under Secretary of Defense Pete Hegseth ultimately demands that Anthropic approve "all lawful applications" or lose the contract. Hegseth has already stressed that the Pentagon will not use AI models that stand in the way of warfighting. In parallel, the Pentagon has signed contracts with Elon Musk's xAI and uses customized versions of OpenAI's and Google's models. Anthropic has so far refused to compromise on autonomous weapons systems and mass surveillance, but the economic pressure is mounting.

Key Statements

  • First Active Deployment: Claude was deployed for the first time during active military operations (not just planning)
  • Contract Violation: The operation with bomb drops and deaths explicitly breaches Anthropic's terms of service
  • Pentagon Ultimatum: The US military threatens to terminate the contract if Anthropic does not permit autonomous weapons and surveillance
  • Economic Dilemma: Anthropic needs millions in government funding ahead of its planned IPO but risks betraying its founding ethos
  • Industry Model: The outcome could influence how other AI companies handle military requirements

Critical Questions

  1. Source Validity: The Wall Street Journal cites "unnamed sources" – who exactly shared this sensitive information about a strictly classified operation, and what interests might be behind it?

  2. Anthropic Cooperation: Why did Anthropic inquire with the Pentagon about details of the operation if the company wanted to protect its terms of service – was this a control measure or a de facto acceptance?

  3. "Lawfulness" vs. Ethics: The Pentagon defines the Venezuela operation as "lawful," while it contradicts Anthropic's ethical standards – which standard should take precedence, and who decides?

  4. Alternative Scenarios: Could Anthropic have technically prevented Claude from being deployed through Palantir Technologies without direct knowledge, or is this a control deficit throughout the entire defense-AI ecosystem?

  5. Pressure on Other Companies: If Anthropic capitulates, will OpenAI, Google, and other AI companies receive similar Pentagon ultimatums – are current government AI safety guidelines sufficient or outdated?

  6. Long-term Risk: Can an AI company credibly advocate for "human values" while accepting government contracts with military applications without damaging its brand?


Source Directory

Primary Source: Should Claude Go to War? – Neue Zürcher Zeitung, 15.02.2026

Supplementary Sources (cited in article):

  1. Wall Street Journal – Venezuela operation and Anthropic involvement
  2. Axios – Claude deployment during operation and Pentagon response
  3. Reuters – Dario Amodei and Nicolás Maduro (image materials)

Verification Status: ✓ 15.02.2026


This text was created with the support of an AI model. Editorial Responsibility: clarus.news | Fact-checking: 15.02.2026