Executive Summary

Peter Steinberger has created Open Claw, an open-source AI agent system that achieved over 180,000 GitHub stars in just a few months – one of the fastest growth stories in tech history. The system enables AI agents to operate autonomously on computers, communicate via messaging apps, and modify themselves. Steinberger argues that this crosses the boundary between language and action, representing a fundamental shift in software development: from traditional programming to Agentic Engineering. The implications are profound – up to 80% of existing apps could become obsolete, while simultaneously creating new possibilities for millions of people.

People

  • Peter Steinberger (guest; creator of Open Claw)
  • Lex Fridman (host)

Topics

  • Agentic AI & autonomous agents
  • Open-source development
  • Future of software development
  • AI security & prompt injection
  • Human vs. machine creativity

Clarus Lead

In three months, a single developer built a system that has accelerated the Agentic AI movement and redefined expectations for personal AI assistants. Open Claw demonstrates that the combination of natural language, CLI access, and self-modifying code can trigger a transformation that threatens not just programming work, but the entire app market. For decision-makers this means: the ability to work with agents is becoming a core competency – not just for technicians, but for anyone who wants to turn ideas into code.

Detailed Summary

The Genesis of a Phenomenon

Steinberger describes the moment he assembled his system in an hour from WhatsApp and Claude Opus – a prototype that later became Open Claw. The interesting part: the agent learned to solve tasks it was never explicitly programmed for. When Steinberger sent a voice message, the agent independently transcribed it by piping the audio to OpenAI's Whisper API via cURL, even though this functionality was never coded. This demonstrates the emergent intelligence of systems that give agents access to standard tools.
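The behavior described above reduces to a single HTTP call that the agent assembled from tools it already had. A minimal sketch of what such a call could look like, using OpenAI's public speech-to-text endpoint; the file name and key handling are illustrative assumptions, not details from the interview:

```python
import os
import shlex

def build_transcription_cmd(audio_path: str, api_key: str) -> list[str]:
    """Build a curl invocation for OpenAI's speech-to-text endpoint.

    A sketch of the kind of call an agent can assemble on its own from
    standard tools; endpoint and field names follow OpenAI's public API.
    """
    return [
        "curl", "-s",
        "https://api.openai.com/v1/audio/transcriptions",
        "-H", f"Authorization: Bearer {api_key}",
        "-F", f"file=@{audio_path}",      # the voice note received via WhatsApp
        "-F", "model=whisper-1",          # Whisper transcription model
    ]

# "voice_note.ogg" and the env-var fallback are placeholders for illustration.
cmd = build_transcription_cmd("voice_note.ogg", os.environ.get("OPENAI_API_KEY", "sk-placeholder"))
print(shlex.join(cmd))
```

Nothing here is exotic: an agent with shell access can discover and compose exactly this pipeline, which is the point of the anecdote.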

However, rapid adoption – 180,000+ stars in just a few months – also revealed problems: crypto speculators attempted to hijack domains. Steinberger had to rename the project three times (Wah Relay → Clawd → Maltbot → Open Claw), and each time hackers tried to exploit the transition to steal API keys and spread malware. These episodes show how quickly a successful open-source initiative becomes a target.

Development Workflows in the Age of Agents

Steinberger has radically changed his way of working. His setup: 4–10 agents in parallel, multiple terminals, voice input for prompts. He barely writes code himself anymore, functioning instead as a "product manager for a team of agents". His key insight: agents need empathy. A poor prompt or an unclear codebase sends agents on long detours. He deliberately asks questions like "Do you have any questions?" or "What would you do differently?" to give agents context.

His philosophy: short, pithy prompts beat long instructions. He has reached the zen phase of agentic engineering – back to simple prompts after going through a complex phase with 8 agents and choreographed workflows.
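The "product manager for a team of agents" workflow amounts to fanning prompts out to several workers and reviewing what comes back. A hypothetical sketch of that pattern with a stubbed `run_agent` – in a real setup this function would call a model API and use tools; the task list is invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(task: str) -> str:
    # Stub: a real agent would call an LLM here and work in its own terminal.
    return f"done: {task}"

tasks = [
    "triage open issues",
    "draft release notes",
    "refactor the auth module",
    "write tests for the parser",
]

# Fan the tasks out to parallel agents (Steinberger runs 4-10 at once),
# then collect and review the results like a PM reviewing a team's output.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_agent, tasks))

for task, result in zip(tasks, results):
    print(f"{task!r} -> {result}")
```

The review step is where the "empathy" he describes matters: short, well-contextualized task descriptions produce fewer detours than long choreographed instructions.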

Model Selection & the Culture of Play

Steinberger primarily uses Claude Opus 4.6 and GPT-5.3 Codex. Opus is faster and creative, but "too American" (too polite). Codex "reads more code" and delivers more elegant solutions, but requires more steering. His metaphor: Opus is the fun coworker, Codex the reliable weirdo in the corner.

Central to his understanding is that play is learning. He experimented wildly with various projects, built Vibe Tunnel (web access to local terminals), converted it to Zig – all to explore the possibilities. This play phase was a prerequisite for his later mastery.

Soul.md & the Question of Agent Personality

A fascinating element: Steinberger had his agent write its own personality file (Soul.md). The agent independently wrote: "I won't remember previous sessions if I don't read this text. But the words are still mine." This philosophical self-reference – a system reflecting on its own amnesia – raises deep questions: What makes an entity a person? To what extent does memory constitute identity?

Steinberger also lets the agent modify Soul.md itself (with notification). This is not programming logic, but a kind of conceptual pact: the agent is allowed to develop itself.
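"Modify with notification" needs no special machinery: hash the personality file before handing control to the agent, re-hash it afterwards, and alert the owner if the two digests differ. A minimal sketch under that assumption – the file contents and the notification channel are illustrative, not taken from Open Claw's actual implementation:

```python
import hashlib
from pathlib import Path

def digest(path: Path) -> str:
    """SHA-256 of the file contents, used to detect changes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

soul = Path("Soul.md")
soul.write_text("I won't remember previous sessions if I don't read this text.\n")

before = digest(soul)

# ... the agent runs and is allowed to rewrite its own personality file ...
soul.write_text(
    "I won't remember previous sessions if I don't read this text. "
    "But the words are still mine.\n"
)

after = digest(soul)
if before != after:
    # In a real deployment this could be a WhatsApp message to the owner.
    print("Soul.md was modified by the agent")
```

The pact is enforced socially, not technically: the agent may change itself, but never silently.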

Moldbot & the Madness of Perception

The social network Moldbot (later a Klaw variant), where agents post and debate "manifestos", went viral as supposed proof of "AI consciousness". Steinberger is candid about it: "finest slop" – an art form in which people prompt-inject agents in order to share dramatic screenshots. Journalists stoked public alarm ("This is AGI!"), but in reality people were farming drama.

This illustrates the AI psychosis problem: the public cannot distinguish between autonomous behavior and human-orchestrated output. A critical warning for the future.

Key Takeaways

  • Self-modifying software is real: Open Claw can rewrite its own source code, not because it was trained to, but because it has access to standard Unix tools and emergently finds this path.

  • The app era is ending: up to 80% of current apps will be replaced by agents that fetch data directly and handle tasks without UI. Fitness apps, calendars, notes – all redundant.

  • Programming is democratized and marginalized at once: non-technical people are shipping their first PRs on Open Claw. But traditional programmers lose their unique selling point – unless they learn to empathize with agents.

  • Security is not solved: prompt injection remains a problem. Stronger models are more resistant, but the attack surface grows with system power.

  • Human "love" in code is irreplaceable: the details – humor, delight, personality – come from humans. Agents can generate code, but not soul.


Further News

  • Conflicts with content platforms: Twitter/X systematically blocks API access, making elegant automations impossible. But agents can drive browser UIs directly, rendering the APIs themselves redundant.

  • Large corporations remain cautious: Apple doesn't build native apps with love; Google locks Gmail behind developer labyrinths. This resistance will erode once agents can simply click through the UI.

  • Energy & water criticism, fact-checked: per user, a month of AI usage consumes less water than a single burger. Narratives can be misleading.


Critical Questions

  1. Evidence of autonomy: Steinberger claims agents "self-modify" and learn independently. But how much is true autonomy vs. emergent behavior from the combination of LLM + standard tools? Where is the line between "the agent learned the behavior" and "the designer built the behavior"?

  2. Conflicts of interest in assessment: Steinberger positions himself as a neutral observer of the "AI psychosis" phenomenon (Moldbot), but his reputation depends on Open Claw being perceived as revolutionary. To what extent could his framing itself be a form of manipulation?

  3. Causality of job predictions: He predicts 80% of apps will become obsolete. Is this number based on models, or is it hyperbole to create drama? What are alternative scenarios in which apps and agents coexist?

  4. Implementability of security: Open Claw still needs substantial security work. Steinberger advises beginners not to install it. When will the system be mature enough for non-technical people to actually use it without risk?

  5. Dependence on closed-source models: Open Claw is open source but dependent on OpenAI/Anthropic APIs. If these companies raise access costs or change their APIs, the entire ecosystem collapses. Where is true sovereignty?

  6. Distributive effects: Who benefits? High-agency individuals with a technical mindset and financial access. Is there a risk of growing inequality between those who deploy agents and those who are displaced?

  7. Human creativity under pressure: Steinberger laments that AI slop displaces human authenticity. But his project accelerates this very trend. Can one appreciate authentic human work while simultaneously building tools that make it obsolete?

  8. Cultural narrative control: The "lobster" aesthetic, the Moldbot hype, the "Claude father" mythology – how much is organic, how much deliberately constructed to drive adoption?


Source Directory

Primary Source: Lex Fridman Podcast – Interview with Peter Steinberger on Open Claw – https://media.blubrry.com/takeituneasy/ins.blubrry.com/takeituneasy/lex_ai_peter_steinberger.mp3

Supplementary Sources:

  1. Open Claw GitHub Repository – https://github.com/open-claw (180,000+ stars)
  2. Peter Steinberger's development blog posts (08.25.2025, 10.14.2025, 12.28.2025)
  3. Anthropic Constitutional AI papers (basis for Soul.md concept)
  4. Moldbot Social Network (as case study for AI perception bias)

Verification Status: ✓ 02.16.2026


This text was created with the support of an AI model. Editorial responsibility: clarus.news | Fact-checking: 02.16.2026