Author: heise.de / t3n.de
Source: heise.de
Publication Date: 28.11.2025
Summary Reading Time: 4 minutes
Executive Summary
While companies like HP are replacing thousands of jobs with AI, the very people training these systems are warning against their use – some even prohibit their own children from using ChatGPT & Co. The paradoxical finding: AI trainers work under time pressure, receive inadequate briefings, and make systematic errors that flow directly into the models. A Newsguard study finds that the error rate of chatbots has nearly doubled, from 18 to 35 percent, within one year – while at the same time the systems have become less willing to admit not knowing something. Decision-makers should recognize that the supposed "intelligence" rests on opaque, qualitatively questionable training processes – and carries significant liability and reputational risks.
Critical Key Questions
- Who bears responsibility when companies make wrong decisions based on faulty AI outputs – the platform operators, the clients, or the precariously employed trainers without adequate briefing?
- Where is the line between innovation and irresponsibility when systems are tuned to always produce an answer – even a false one – rather than admit that they do not know?
- What market opportunities arise for providers who focus on transparency, quality assurance, and ethical standards in AI training – instead of cheap mass production?
Scenario Analysis: Future Perspectives
Short-term (1 year):
Companies continue to invest heavily in AI-powered automation (see HP: 6,000 positions). At the same time, public cases of AI errors in critical areas accumulate (customer communication, legal advice, medicine). The first liability lawsuits are filed against platform operators and against employers who blindly trusted AI. Regulatory pressure increases (EU AI Act).
Medium-term (5 years):
Quality assurance in AI training becomes a competitive factor. Certification systems for "responsible AI" emerge. Companies currently relying on opaque low-cost providers lose market trust. In parallel: Consolidation among AI training platforms – providers with ethical standards gain market share. Trade unions and NGOs demand binding standards for working conditions in AI training.
Long-term (10–20 years):
AI systems are increasingly trained by other AI systems (synthetic data), creating new quality and bias problems ("model collapse"). A social backlash against uncritical AI use leads to stricter approval procedures (similar to the pharmaceutical industry). Companies that focused early on transparency and ethical standards dominate – while pioneers of the "move fast and break things" era lose relevance.
Main Summary
a) Core Topic & Context
The article exposes a fundamental paradox of the AI industry: While corporations like HP are replacing thousands of jobs with "intelligent" systems, the people training these systems warn against their use. The reason: AI training is based on precariously employed workers recruited through platforms like Amazon Mechanical Turk, who receive inadequate briefings and produce systematic errors under time pressure. Research by The Guardian shows: Even experienced AI trainers doubt the quality of their work and advise family and friends against using these systems.
b) Most Important Facts & Figures
- HP plans to cut 6,000 jobs through AI-powered automation
- Chatbot error rate increased from 18% (2024) to 35% (2025) – nearly doubling within one year (Newsguard study)
- Non-response rate dropped from 31% to 0% – the systems now give a false answer rather than no answer at all
- AI trainers work through platforms like Amazon Mechanical Turk – often without direct employment by tech corporations
- Google case example: A trainer found that a model systematically ignored questions about Palestinian history while reporting extensively on Israel – and those responsible showed no interest in the finding
c) Stakeholders & Affected Parties
- Companies relying on AI automation (HP and others) – liability risks with faulty systems
- Employees to be replaced by AI – 6,000 at HP alone
- AI trainers – precariously employed, inadequately trained, systematically under pressure
- End users (individuals, companies) – receiving increasingly erroneous information
- Tech corporations (OpenAI, Anthropic, Google, Meta) – under quality and transparency pressure
- Regulatory authorities (EU, national data protection agencies) – called upon to set standards
d) Opportunities & Risks
Risks:
- Liability trap for companies relying on faulty AI outputs (legal advice, HR, medicine)
- Reputational damage from publicly known AI errors (bias, misinformation)
- Systemic quality problem: if AI systems are increasingly trained on AI-generated data, self-reinforcing errors threaten "model collapse" (see the sketch after this list)
- Danger to democratic discourse: systematic bias (example: Palestine vs. Israel) reproduces one-sided narratives
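The "model collapse" risk named above can be made concrete with a minimal sketch. The following toy simulation is not from the article; the starting accuracy and the per-generation fidelity value are illustrative assumptions. It only shows the direction of the dynamic: a model retrained on its own outputs inherits its predecessor's errors and adds new ones, so accuracy can only decay without fresh, human-verified data.

```python
# Toy simulation of "model collapse": each generation of a model is
# retrained on the outputs of the previous one. Starting accuracy and
# the per-round fidelity are illustrative assumptions, not measured values.

def next_generation_accuracy(accuracy: float, fidelity: float = 0.97) -> float:
    """Accuracy after retraining on self-generated data.

    The new model can at best reproduce what the old model got right
    (accuracy), degraded by an imperfect learning step (fidelity < 1).
    Errors in the synthetic training data are inherited, never corrected.
    """
    return accuracy * fidelity

accuracy = 0.82  # assumed accuracy of the original, human-trained model
for generation in range(1, 11):
    accuracy = next_generation_accuracy(accuracy)
    print(f"generation {generation:2d}: accuracy ~ {accuracy:.1%}")
```

The geometric decay is a deliberate simplification, but the underlying point stands: synthetic training data recycles errors instead of correcting them, which is why the long-term scenario above flags it as a quality risk.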
Opportunities:
- Market opportunity for providers focusing on transparent, ethically sound training processes
- Certification business for "responsible AI" emerges
- A shift in awareness among decision-makers: quality over cheap automation
- Competitive advantage for companies that establish early internal quality assurance and critical review of AI outputs
e) Action Relevance
For Companies:
- Establish critical review of all AI outputs before decisions – no blind acceptance
- Clarify liability risks: Who is liable for wrong decisions based on faulty AI responses?
- Demand transparency: What quality assurance do AI providers offer? How are trainers briefed?
- Build internal competence: Train employees to critically question AI outputs
For Politics & Regulation:
- Binding standards for AI training processes (trainer briefing, quality control, working conditions)
- Transparency obligations for AI providers: disclosure of training data and methods
- Promote certification systems for "responsible AI"
For Users:
- Remain skeptical: Cross-check every AI output, especially for critical decisions
- Use alternative sources: Don't blindly rely on chatbots
Quality Assurance & Fact-Checking
- Newsguard Study: The figures (18% → 35% error rate) come from the cited study; the organization is known for media monitoring and fact-checking – the figures are considered plausible ✅
- HP Job Cuts: The figure of 6,000 jobs is mentioned in the article; external confirmation (e.g., HP press release) would be desirable [⚠️ To be verified through HP primary source]
- Guardian Research: Quotes and statements from AI trainers are journalistically vetted but anonymous [⚠️ anonymized sources – no independent verification possible]
Supplementary Research
- Newsguard – Newsguard on AI Error Rates – Organization that rates media reliability and monitors AI systems
- EU AI Act – Official EU AI Regulation – Binding transparency and quality standards for high-risk AI
- The Guardian: Original Research – The Guardian AI Trainers – Detailed background research on working conditions in AI training
Source Directory
Primary Source:
First-Hand Information: Why AI Trainers Advise Against Chatbots – heise.de (originally t3n.de)
Supplementary Sources:
- The Guardian – Original research on AI trainers (cited in article)
- Newsguard – Study on error rates of AI chatbots (cited in article)
- EU AI Act – Regulatory framework for AI systems
Verification Status: ✅ Core statements plausible; HP figures and specific Guardian quotes only partially verifiable without primary source access
💬 Journalistic Compass (Internal Self-Control)
- 🔍 Power critically questioned: ✅ – Tech corporations named as beneficiaries of opaque processes
- ⚖️ Freedom and personal responsibility: ✅ – Warning against uncritical delegation of decisions to AI
- 🕊️ Transparency: ✅ – Systematic lack of transparency in AI training identified as core problem
- 💡 Provoke thinking: ✅ – Paradox (trainers warn against their own systems) provokes critical reflection
Version: 1.0
Author: [email protected]
License: CC-BY 4.0
Last Update: 2025-01-XX (based on article without exact date)