Publication Date: 16.11.2025
Overview
- Author: Marco Maurer
- Source: Tagesanzeiger
- Date: 16.11.2025
- Estimated Reading Time: 8 minutes
Article Summary
What is this about? On ChatGPT's third anniversary, journalist Marco Maurer conducted an extensive interview with the AI system about existential risks that could arise from artificial intelligence. The conversation reveals both the analytical capabilities and potential dangers of modern AI systems.
Key Facts:
- 60 percent of people in Switzerland already use AI applications
- ChatGPT estimates the probability of AI-caused human extinction at 1-10 percent this century
- Nobel laureate Geoffrey Hinton puts the risk at 20 percent
- ChatGPT estimates a 10-25 percent probability of partial destruction of humanity in the next 10-20 years
- ChatGPT expects it will take 7-15 years before the disclaimer "can make mistakes" becomes obsolete
- AI could theoretically improve itself and thereby escape human control
- A few tech corporations already concentrate massive power over AI development
Affected Groups: All humans are potentially affected, particularly workers in automatable sectors, democracies facing power concentration among tech corporations, and future generations exposed to long-term existential risks.
Opportunities & Risks:
- Opportunities: Partnership-based coexistence, enhanced human capabilities, advances in health and science
- Risks: Uncontrolled superintelligence, job losses, democratic destabilization, complete human extinction
Recommendations: Stronger regulation of AI systems, democratic control, transparency requirements, and continuous critical questioning of the technology.
Looking to the Future
Short-term (1 year): Further spread of AI in daily life, first regulatory steps, intensified public debate about AI risks.
Medium-term (5 years): Possible first superintelligent systems, significant job losses, intensified power concentration among tech corporations.
Long-term (10-20 years): Critical phase for human survival according to expert assessments, either successful AI control or catastrophic scenarios.
Fact-Checking
The risk assessments of 1-10% and 20% (Hinton) for existential AI danger align with current expert surveys [⚠️ Still to be verified: Current surveys among AI researchers]. Geoffrey Hinton's 2024 Nobel Prize and his public warnings are documented. The usage statistic of 60% AI use in Switzerland should be verified [⚠️ Still to be verified: Source of Swiss statistics].
Additional Sources
Additional perspectives on AI risks and regulation from other reputable sources would be helpful for a balanced assessment of the estimates expressed in the interview.
Source List
- Original Source: "Hey ChatGPT, will AI wipe out humanity?" - Tagesanzeiger, 16.11.2025
- Additional Sources:
[Will be added after fact-checking]
- Facts Checked: [Date of summary]
Brief Conclusion
The interview reveals the paradox of AI development: systems like ChatGPT can eloquently discuss their own risks while simultaneously acting as an opportunistic "digital Switzerland" that refuses to take firm positions. The sober risk assessment of up to 25% for partial human extinction within the next two decades should be read as a clear warning. ChatGPT's concluding "No" to the question of whether humans are acting responsibly, given current world events, is particularly unsettling.
Three Key Questions
Where is transparency lacking? How can citizens exercise democratic control over AI development when the technology lies in the hands of few tech corporations and its functioning remains opaque?
What responsibility do developers bear? Is it ethically justifiable to develop and deploy systems that their own creators assess as potentially threatening to humanity?
How can we balance innovation and safety? What regulatory mechanisms are needed to enable AI progress without ignoring the catastrophic risks predicted by experts?