Summary
The American military is using artificial intelligence extensively in the Iran War to accelerate decision-making. Anthropic's Claude AI model helped strike 2,000 targets in just four days, an unprecedented pace. Experts warn that AI could undermine human judgment in lethal decisions and is putting fundamental pressure on international humanitarian law. Internationally, negotiations on binding rules for autonomous weapons systems are failing because of resistance from major powers.
People
- Michael Horowitz (Military Expert, University of Pennsylvania)
- Jessica Dorsey (International Law Scholar, Utrecht University)
- Reto Wollenmann (Swiss Arms Control Expert)
Topics
- Autonomous weapons systems and military AI
- International humanitarian law and warfare
- International arms control negotiations
- Human control over weapons deployment
Clarus Lead
The American military is using Anthropic's Claude AI model in the ongoing Iran War for rapid target identification. In 24 hours, 1,000 targets were struck; within four days, 2,000 in total, a historic acceleration compared with previous conflicts. The pace raises a central question: can commanders adequately weigh legal and ethical considerations when AI systems decide life and death in milliseconds?
Detailed Summary
Autonomous weapons systems are no longer science fiction; they have been operationally deployed for decades. Guided missiles like the American Javelin find their targets independently using infrared or radar, without any AI. Kamikaze drones, deployed on a massive scale in the Ukraine War, loiter over territory until their sensors detect a target. This kind of autonomy is established and legal.
AI, however, fundamentally exacerbates the problem: it compresses the "kill chain", the path to a lethal decision, to the point where human reflection becomes practically impossible. International law scholar Jessica Dorsey warns of "automation bias", the tendency of people to accept machine suggestions unquestioningly. In the worst case, "cognitive capitulation" sets in: humans intellectually subordinate themselves to the computer. The core problem: international humanitarian law demands context-dependent, qualitative judgments, but AI replaces them with numerical assessments.
A concrete case illustrates the danger: an attack on a girls' school in Iran killed over 170 people, mostly children. The school had not been used as a military facility for decades. It remains unclear whether Claude suggested it as a target on the basis of outdated satellite data or whether human error was responsible; the Pentagon is investigating.
Diplomatic Paralysis
At the international level, the major powers are blocking binding rules. Since 2014, Switzerland and 42 other countries have demanded at the Convention on Certain Conventional Weapons (CCW) that humans be able to understand AI recommendations rather than merely confirm them at the push of a button. However, the USA, China, and Russia reject this: they are interested only in the legal compatibility of the result, not in the decision-making process. At the REAIM Summit in February 2026, the USA and China refused to sign even non-binding declarations.
Key Points
- AI radically shortens decision times: 2,000 targets struck in four days, a volume that took six months in the war against ISIS
- International humanitarian law demands qualitative, context-dependent judgments; AI replaces these with algorithms
- Human control becomes a fiction when speed makes reflection impossible
- Automation bias causes people to follow machine recommendations without reflection
- International negotiations fail because major powers demand autonomy without control
- Concrete errors such as the attack on the girls' school demonstrate the risks of implementation
Critical Questions
Evidence/Data Quality: How old was the satellite data on which Claude based its identification of the girls' school as a target? What error rate does the AI exhibit with outdated input data?
Conflicts of Interest: Anthropic earns $200 million from the Pentagon contract. How credible can its stated ethical concerns be if its business model depends on military contracts?
Causality: Is it proven that the increased number of attacks (2,000 in four days) leads to more civilian casualties than slower, manually coordinated operations?
Feasibility: If the Pentagon report concludes that it is "unclear" who is responsible for the school attack, how is accountability supposed to work at all?
Alternatives: Why are major powers not investing in systems that make AI recommendations transparent instead of accelerating them?
Counter-hypotheses: Could slower, AI-assisted systems (without automation pressure) better comply with humanitarian law than current manual procedures?
Source List
Primary Source: Artificial Intelligence is the Accelerant of the Iran War – Neue Zürcher Zeitung, Leonid Leiva Ariosa, 14.03.2026
Verification Status: ✓ 14.03.2026
This text was created with the support of an AI model. Editorial Responsibility: clarus.news | Fact-Checking: 14.03.2026