Summary
Anthropic is now blocking the use of OpenClaw – a third-party AI assistant tool – for customers of its Claude subscription plans. OpenClaw developer Peter Steinberger, now employed at OpenAI, was unable to get Anthropic to reverse the decision. As an alternative, subscribers can purchase paid add-on packages to keep using third-party tools. API customers remain unaffected, as do users in regions without government bans on AI agents.
People
- Peter Steinberger (OpenClaw developer, OpenAI employee)
- Dave Morin (OpenClaw colleague)
- Boris Cherny (Claude Code Lead, Anthropic)
Topics
- AI security and resource management
- Third-party integration in AI services
- Token costs and usage fee models
Clarus Lead
The blockade reflects a fundamental conflict in the AI market: While subscription models suggest unlimited usage for private users, autonomous tools like OpenClaw consume enormous resources – one day of usage cost over 100 US dollars in tokens during tests. Anthropic is signaling capacity constraints and prioritizing paying enterprise customers via the API, a pattern that other AI providers may adopt in the future. Pressure is heightened regionally by government restrictions: China has already banned AI agents on office computers.
Detailed Summary
The blockade was not technically planned from the start: OpenClaw users exploited a configuration gap to spend subscription tokens on the third-party tool. Anthropic closed the gap, citing exploding demand for Claude and the need to protect capacity.
The cost reality underscores Anthropic's position: a test by the Heise video magazine c't 3003 with Claude Opus showed token consumption of 109.55 US dollars after just 24 hours of OpenClaw usage. For context, Anthropic puts the average daily cost for professional Claude Code users at roughly 6 dollars (with 90 percent reportedly staying under 12 dollars daily). OpenClaw's consumption is therefore roughly 18 times the average. The measure primarily relieves Anthropic's infrastructure in favor of its API customers, most of whom are professional users, and reduces the load generated by private OpenClaw enthusiasts. In parallel, regulatory hurdles are narrowing the field: Chinese authorities are already banning AI agents on office computers, viewing autonomous data access as a security risk.
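The cost comparison above can be reproduced with a quick back-of-the-envelope calculation; the figures are the ones cited in the text, and "18 times" is the rounded ratio:

```python
# Figures cited in the article (a sketch, not official Anthropic pricing data)
openclaw_daily_usd = 109.55   # token cost measured in the c't 3003 test (24 h of OpenClaw)
claude_code_avg_usd = 6.00    # Anthropic's estimated average daily cost per Claude Code user

# Ratio of OpenClaw consumption to the average Claude Code user
ratio = openclaw_daily_usd / claude_code_avg_usd
print(f"OpenClaw consumes roughly {ratio:.1f}x the average daily token cost")
# ratio is about 18.3, which the article rounds to "18 times"
```

The exact quotient is about 18.26, so the article's "18 times" figure holds up as a rounded value.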
Key Takeaways
- Anthropic blocks OpenClaw integration for Claude subscriptions; subscribers can optionally purchase add-on packages
- Massive token consumption (100+ USD/day) justifies resource rationing in favor of API customers
- Government AI security bans (China) reduce the market for autonomous office tools anyway
Critical Questions
Evidence: Is Anthropic's capacity shortage driven by OpenClaw usage data, or is it structural, caused by rising overall demand for Claude? How transparently does Anthropic communicate this breakdown?
Conflicts of Interest: Does the blockade specifically favor higher-margin API customers over cheaper subscription users? Which business division benefits primarily?
Causality: How certain is it that the OpenClaw blockade actually stabilizes API performance? Could price increases for subscriptions also solve the problem without a blockade?
Feasibility: Do the new add-on packages mean a new fee model, and how transparently is their pricing communicated? Can users be protected from surprise costs?
Sources
Primary Source: Anthropic throws OpenClaw out of its Claude subscriptions – Heise News, Author: Niklas Jan Engelking – https://www.heise.de/news/Anthropic-wirft-OpenClaw-aus-seinen-Claude-Abos-11246063.html
Verification Status: ✓ 2025
This text was created with the support of an AI model. Editorial Responsibility: clarus.news | Fact-Check: 2025