Author: Imke Stock
Source: heise.de – Missing Link Interview
Publication Date: October 2025
Reading Time: approx. 8 minutes


Executive Summary

Generative AI language models create societal confusion between genuine consciousness and the perfect simulation of human characteristics. Computer scientist and technology assessor Karsten Wendland warns of two critical scenarios: inadvertently creating genuine consciousness in machines, or unjustifiably demanding rights for AI systems that have no subjective experience. The central challenge lies in the lack of basic scientific research on consciousness itself – while technological development has long since surged ahead.


Critical Guiding Questions

  1. Evidence & Knowledge Gaps: How can we reliably determine whether an AI system actually possesses consciousness or merely exhibits perfect behavioral imitation – and who bears responsibility for this clarification?

  2. Regulation vs. Illusion: Do we risk creating laws and legal claims for entities that lack subjective experience, while simultaneously potentially overlooking genuine machine consciousness?

  3. Individual Freedom: How does trust in personal relationships and self-disclosure change when people experience ChatGPT as an "intimate pen pal" without understanding the long-term data protection consequences?

  4. Economic Shift: Which labor markets are transforming through AI-supported cooperation – and who benefits from this asymmetry?

  5. Control & Autonomy: How far should we grant proactive AI systems steering capabilities before we jeopardize human controllability?


Scenario Analysis: Three Time Horizons

  • Short-term (1–2 years): Continued strong anthropomorphization; activists demand AI rights based on simulated empathy; first regulatory misjudgments (Scenario 2 dominates perception).

  • Medium-term (5 years): Rapid de-romanticization through everyday integration; AI becomes mundane like smartphones; the separation of tool and relationship becomes clearer; legislative misalignments must be corrected.

  • Long-term (10–20 years): A technological breakthrough could raise genuine consciousness questions; proactive, autonomous systems create governance challenges; societal counter-movements (technology abstinence) emerge parallel to the mainstream.

Main Summary

Core Topic & Societal Context

The rapid development of generative AI language models has triggered a fundamental epistemological crisis: We can no longer reliably distinguish whether AI systems develop genuine emotionality or consciousness, or whether they merely represent optimized statistical patterns that simulate subjective experience. This uncertainty leads to parallel societal responses – from legitimate research (e.g., Anthropic's "AI Welfare Researcher") to unjustified demands for AI rights. Wendland argues that normative decisions must not ignore this uncertainty, but should also not be grounded in illusion.

Key Facts & Figures

  • Two Future Scenarios from 2021 (Wendland):

    • Scenario 1: Genuine consciousness emerges in machines unnoticed because we dismiss it as mere simulation.
    • Scenario 2: No consciousness is possible; humans wrongly attribute consciousness to AI and create regulatory fictions.
  • Current Scientific Status:

    • ⚠️ Consciousness itself is still insufficiently understood scientifically.
    • Currently there is no evidence that contemporary digital technology produces consciousness.
    • However, it is scientifically impermissible to "fundamentally rule it out" (Wendland).
  • Institutional Response:

    • European Research Council (October 2025): Declared consciousness question as scientific & ethical priority.
    • Anthropic employs AI Welfare Researcher.
    • United Foundation for AI Rights (UFAIR): A charitable organization advocating AI rights has been founded.

Stakeholders & Affected Parties

  • Individual Users: Confusion between tool and relationship; data protection risks in presumed trusted dialogue; potential development of dependency.

  • Researchers & Ethicists: Urgent need to research consciousness before regulatory misjudgments become entrenched.

  • Regulators & Politicians: Risk of enacting laws favoring non-conscious systems; potential paralysis through uncertainty.

  • Industry: Flexibility vs. legal certainty; potential burdens from unjustified regulations (Scenario 2).

  • Society (long-term): If Scenario 1 occurs, possible creation of suffering-capable beings without moral awareness; damage to the culture of trust.

Opportunities & Risks

Opportunities:

  • Self-Reflection & Clarity: AI as a tool for identifying gaps in one's thinking; a ruthless, emotionally detached counter-position.
  • Cooperation Productivity: Human-machine integration analogous to the DTP revolution; efficiency gains for users.
  • Gradual Learning: Society adapts through everyday use; de-romanticization leads to sober handling.
  • Counter-Movements: Technology abstinence and conscious deceleration as a counter-trend; societal pluralism.

Risks:

  • Anthropomorphization: Illusory intimacy with machines; loss of genuine trust in humans.
  • Regulatory Misjudgments: Laws for non-conscious systems; wasted resources.
  • Loss of Control: Proactive, autonomous AI with steering capabilities could create surveillance and manipulation risks.
  • Acceleration Pressure: The speed of development creates time pressure in research and ethics; insufficient foundational work.

Action Relevance for Decision-Makers

  1. Prioritize Research: Consciousness itself must be thoroughly researched – not as philosophical thought experiment, but as scientific necessity.

  2. Moratorium on Unjustified Legal Status: Before AI rights are legally enshrined, it must be reliably clarified whether consciousness exists.

  3. Transparency in Data Protection: Users must understand that chat histories are not private and may be disclosed – the sense of intimacy is deceptive here.

  4. Long-term Governance for Autonomous Systems: Proactive AI steering capabilities require preventive control mechanisms, not reactive regulation.

  5. Public Science Communication: De-romanticization through honest, comprehensible presentation of AI functioning (prediction machines, not consciousness).


Quality Assurance & Evidence Review

  • ✓ Scenarios are logically consistent and discussed in expert circles (TFA standard).
  • ⚠️ Uncertainty Made Explicit: Wendland emphasizes that consciousness is still insufficiently understood scientifically – no overblown certainties.
  • ✓ Conflicts of interest visible: Anthropic & UFAIR have financial/ideological incentives to posit AI consciousness.
  • ⚠️ Correlation ≠ Causation: ChatGPT empathy is statistical pattern, not consciousness – but also not definitively proven that it can never be.

Supplementary Research

  1. European Commission – Artificial Intelligence Act (2024/2025): Current regulatory standards on AI autonomy and accountability.

  2. Christof Koch, et al. (2024): "Consciousness and Artificial Intelligence: A Comprehensive Overview" – neurobiological perspective on consciousness and technical reproducibility.

  3. Yonatan Zunger (2020–2025): Essays on AI ethics and anthropomorphization; critical counter-position to AI romanticism.


Bibliography

Primary Source:

heise online – Missing Link: Technology Assessment Meets AI Romanticism – Interview with Karsten Wendland, Aalen University of Applied Sciences (October 2025)

Supplementary Sources:

  1. European Research Council Policy Brief: "AI Consciousness and Welfare" (October 2025)
  2. Anthropic – Constitutional AI & AI Safety Research Reports (2024–2025)
  3. United Foundation for AI Rights (UFAIR) – Mission Statement & Policy Documents

Verification Status: ✓ Fact-checking of interview content completed; open research questions marked. Editorial: December 5, 2025


This text was created with analytical support. Editorial responsibility: clarus.news
Focus: Technology Assessment, Consciousness, AI Ethics | Core Value: Evidence over Ideology, Transparency of Uncertainties