Summary

AI agents like OpenClaw mark a turning point in artificial intelligence development. Unlike conventional chatbots, these systems execute complex tasks autonomously – writing emails, making phone calls, controlling devices. A hands-on test by the Handelsblatt tech team shows that the technology already works impressively but carries significant security and cost risks. For individual users it remains experimental; for enterprises it raises new liability questions.

People

  • Stefan Scheuer – Handelsblatt tech team
Topics

  • AI agents and automation
  • Cybersecurity and data protection
  • Corporate responsibility
  • Technological disruption

Clarus Lead

AI agents like OpenClaw work fundamentally differently from previous language models. Instead of requiring manual steps, they execute complex command chains autonomously – researching, summarizing, sending emails, asking follow-up questions. The breakthrough came only once language models showed massively improved performance, starting in December 2025. For decision-makers this is critical: the more access and rights such a system receives, the greater both its utility and its risk. Enterprises must establish clear governance to settle liability questions.

Detailed Summary

The difference between chatbots and agents lies in autonomy. A chatbot answers questions; an agent performs operations – without asking for approval at every step. Stefan Scheuer tested OpenClaw on an isolated laptop. His practical example: he posed the problem "Is it raining exactly during my 15-minute drive window to daycare?" The agent automatically retrieved data from the German Weather Service, processed it for the correct timeframe, and delivered precise daily forecasts – a solution conventional weather apps do not offer.
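The core logic of such a check can be sketched in a few lines. This is an illustrative reconstruction, not the agent's actual code, and it assumes the weather-service forecast has already been fetched as hourly precipitation values:

```python
from datetime import datetime, timedelta

def rain_in_window(forecast, departure, drive_minutes=15):
    """Return True if any rain falls during the drive window.

    forecast: dict mapping hour-start datetimes to precipitation in mm
              (hypothetical shape; real DWD data would need parsing first).
    """
    arrival = departure + timedelta(minutes=drive_minutes)
    for hour_start, precip_mm in forecast.items():
        hour_end = hour_start + timedelta(hours=1)
        # Does the drive window [departure, arrival] overlap a rainy hour?
        if hour_start < arrival and departure < hour_end and precip_mm > 0:
            return True
    return False

# Example day: dry from 07:00, rain from 08:00.
forecast = {
    datetime(2026, 2, 13, 7): 0.0,   # 07:00-08:00 dry
    datetime(2026, 2, 13, 8): 1.2,   # 08:00-09:00 rain
}
print(rain_in_window(forecast, datetime(2026, 2, 13, 7, 30)))  # False: dry window
print(rain_in_window(forecast, datetime(2026, 2, 13, 7, 55)))  # True: overlaps rain
```

What makes the agent notable is not this overlap check itself but that it assembled the whole pipeline – finding the data source, parsing it, and running the check daily – from a plain-language request.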

This capability rests on massive data volumes: with each query, the agent sends book-sized amounts of data, which drives operating costs up significantly. Users report monthly compute costs in the five-figure dollar range. Security is the other critical issue: the agent constantly requests additional access – credit card data, camera access, microphone status. Each additional permission increases both its utility and its potential for misuse. Scheuer repeatedly declined, describing an "uneasy feeling."

For enterprises, deployment is more complex: they cannot run agents without controls but must define specific scenarios with guardrails. Deploying multiple agents without oversight risks antitrust lawsuits (for example, if prices are coordinated automatically) and raises massive liability questions.

Key Statements

  • Autonomy leap: Agents execute multi-step tasks independently rather than merely answering questions like a chatbot
  • Cost problem: Operating costs are drastically higher than for plain language models
  • Security governance required: Each additional permission compounds the risk of misuse
  • Enterprise liability central: Uncontrolled agents endanger legal compliance

Critical Questions

1. (a) Data Quality: How reliable are the data that AI agents independently obtain (e.g., from the German Weather Service)? Who validates whether the agent accesses the correct sources or makes errors in data interpretation?

2. (b) Conflicts of Interest: What economic incentives does the provider (e.g., OpenAI) have to improve cost efficiency when higher computing costs also mean higher revenues?

3. (c) Causality – Alternatives: Is OpenClaw really the first functioning agent, or were there earlier prototypes whose failures went undocumented? Are the December 2025 model improvements genuinely technical, or just a marketing narrative?

4. (d) Implementation Risks – Enterprises: If an agent in an enterprise autonomously coordinates prices with competitor systems, who bears antitrust liability – the enterprise, the agent provider, or both?

5. (a) Source Validity: The article cites "experience reports on the internet" regarding high costs without naming sources. How systematic is this data?

6. (b) Conflicts of Interest – User: Stefan Scheuer works at Handelsblatt. How neutral is his test when the article simultaneously promotes Handelsblatt subscriptions and markets AI as a cover story?

7. (c) Counter-Hypothesis: Could the previous restraint with AI agents (2024–2025) not be due to technological limitations, but rather deliberate regulatory policy to avoid uncontrolled automation?

8. (d) Side Effects – Labor Market: The article does not mention how many professional fields (administration, legal research, customer service) could be automated by such agents. How quickly could the labor market effect occur?


Source Directory

Primary Source: Handelsblatt Today – Podcast Episode from February 13, 2026 https://audio.podigee-cdn.net/2323985-m-2d536c585574bd14d0f5e2de1f2799ce.mp3?source=feed

Verification Status: ✓ 2026-02-13


This text was created with the support of an AI model. Editorial Responsibility: clarus.news | Fact-Checking: 2026-02-13