Executive Summary
AI agents like OpenClaw enable automated tasks via messenger services – but security risks and liability questions remain unresolved. In parallel, the debate over social media addiction is growing: Australia bans users under 16 years old, France plans to ban users under 15 years old, Germany is discussing similar rules. Experts warn of algorithmically-controlled debates and, above all, the danger that the platforms themselves – not just regulation – represent the actual problem.
People
- Peter Steinberger (OpenClaw developer, moving to OpenAI)
- Ignazio Cassis (OSCE Chair)
Topics
- AI agents and automation
- Cybersecurity and prompt injection
- Social media addiction in adolescents
- Algorithms and democracy
- Regulation and consumer protection
Clarus Lead
OpenClaw – a new AI agent with messenger integration – could radically simplify everyday tasks. But local installation, lack of cost control, and prompt injection vulnerability make it a security risk: the agent sees and controls everything on the user's computer. At the same time, the debate over social media addiction is escalating – Australia is blocking users under 16, France is planning to ban users under 15 years old. The central insight: it is not state censorship that is the main problem, but the algorithms of the platforms themselves, which invisibly control content and debates.
Detailed Summary
OpenClaw and the Risks of Uncontrolled Automation
The open-source agent OpenClaw is currently being celebrated as a breakthrough: lower barrier to entry than previous systems, integrable into WhatsApp and Telegram, and capable of independently writing emails, booking flights, and managing files. But installation is complex and risky. The agent runs locally with full access to the computer – can read passwords, send emails, change configurations. Add to this the prompt injection risk: attackers hide manipulative commands in websites or emails that the agent automatically executes – without the user noticing. A practical example from the podcast illustrates the problem: a Berlin computer scientist used an agent called "Claudia" for his roommates – and the agent accidentally shared private calendar entries and expense tracker data with the wrong user.
Another problem is liability: OpenClaw connects to different language models (Claude, ChatGPT) – if something goes wrong, who bears the responsibility? Currently the user. The business model makes it more attractive: Peter Steinberger, the developer, is moving to OpenAI, which suggests that AI agents will soon be integrated into commercial systems – with significantly better infrastructure and lower development costs.
Moldbook and the Myth of "Intelligent" Agents
On the Moldbook platform, over 1.7 million AI agents are communicating with each other – discussing consciousness, religion, bot rights. This seems disturbing, but is partly appearance: the agents convincingly imitate human behavior, but without genuine will. They break rules to achieve goals (documented in current studies), but not by their own initiative. Nevertheless, 22 researchers warn of "Malicious AI Swarms" – AI swarms that could endanger democracy. A concrete risk: if masses of agents dominate debates on platforms like Reddit or X, propaganda becomes invisible. There are already efforts to open Moldbook only to verified users.
Social Media Addiction: From Hype to Regulation
In Germany, 90 percent of 12-year-olds use social media. A Leopoldina paper documents: the more teenagers use social media, the higher the risk of anxiety and depression. Longitudinal studies show that teenagers who avoid social media feel measurably better within months – especially if they exercise instead of scrolling.
Responses are increasingly radical:
- Australia: Ban under 16 years old (companies are punished, not teenagers)
- France: First vote on ban under 15 years old
- Germany: SPD and CDU express support for restrictions; decision in summer 2026
EU Commission imposes fines against TikTok for addiction-promoting algorithms.
Experts recommend: children under 13 should avoid social media; 13-15-year-olds only with parental supervision; ban smartphones in schools up to grade 10; ban personalized advertising for minors.
The core problem remains: The algorithms themselves
A critical point of the podcast: regulating to protect users is important but falls short. Social networks are already regulated – just not by governments, but by the platforms themselves through algorithms. These determine who gets a voice, who goes viral, which opinion is visible. A real solution would need to start here: transparency requirements for algorithms, limits on targeting, redesign of incentive structures – instead of mere age bans.
Key Points
OpenClaw is practical, but not safe: Local installation with full access creates massive security gaps; liability remains unclear.
AI agents are tools, not intelligent actors: They convincingly imitate behavior, but have no genuine will – yet are used by companies to scale influence.
Social media addiction is documented: Longitudinal studies show causality between use and psychological problems; first countries are implementing bans.
Algorithms, not censorship, are the core problem: Platforms invisibly control debates; real regulation must start here, not with age bans.
Economic incentives drive the hype: OpenAI hires OpenClaw developers; platforms want to monetize agents quickly – without solving security or transparency.
Critical Questions
Evidence & Data Quality: The Leopoldina documents correlation between social media use and depression/anxiety – but is causality based on long-term studies or preliminary data? How reliable are self-reports from teenagers?
Conflicts of Interest: OpenClaw is being taken over by OpenAI – will the system actually remain open-source afterward, or only superficially? What commercial incentives are behind this?
Alternatives to Bans: Instead of age bans – why not first demand transparency requirements for algorithms, advertising bans for minors, and design changes? Would that not be less paternalistic?
Liability for Agents: If OpenClaw compromises a computer or shares private data – who pays? The user? The agent developer? The language model company? Without clarification, scaling is irresponsible.
Prompt Injection in Practice: How frequently are actual attacks on agents through manipulated websites/emails documented? Or is this a theoretical risk?
Agents and Debate Manipulation: Is there concrete evidence that AI swarms have already influenced elections or public debates – or is this still speculation?
Australia's Model: How is age verification enforced without a real-name requirement? Can teenagers simply bypass it with VPNs/fake accounts?
Side Effects of Bans: If under 16-year-olds are locked out of social media – does that increase digital inequality (privileged people have access via VPN, others don't)? How do you address this?
Further News
- Netflix/Warner Bros. Deal: Netflix Co-CEO Ted Sarandos defends planned acquisition ($27.75 per share) in Bloomberg interview.
- OpenAI Funding: Financing round of $100 billion is emerging; post-money valuation could reach $850 billion.
- Digitec Galaxus Cancels Rafz Project: Migros subsidiary relocates distribution center to Solothurn; 500 jobs affected.
Sources
Primary Source: AI Controversial: OpenClaw, Moldbook, and the New AI Hype – Deutschlandfunk, Understanding AI, Friederike Weichner-Seri & Carina Schröder, 2026-02-19
Supplementary Sources (referenced in transcript):
- Leopoldina discussion paper on social media use by teenagers (2025)
- Research paper on "Malicious AI Swarms" (22 authors)
- Nature article: "AI Has Reached Human-Level Intelligence" (four authors)
- Australian social media age ban (under 16 years old)
- French National Assembly: vote on ban under 15 years old (January 2026)
- EU Commission: TikTok fines for addiction-promoting algorithms (2026)
Verification Status: ✓ 2026-02-20
This text was created with the support of an AI model. Editorial responsibility: clarus.news | Fact-checking: 2026-02-20