Executive Summary
Helen Toner, former OpenAI board member and AI-safety expert at Georgetown University's Center for Security and Emerging Technology, discusses the pressing risks and opportunities of artificial intelligence in an interview. She warns against uncontrolled AI development driven by competitive pressure among tech corporations, advocates for more transparent regulation, and nonetheless sees significant opportunities, such as autonomous vehicles and broader access to education. The central challenge: developers do not fully understand their own systems.
People
- Helen Toner (former OpenAI board member; Georgetown CSET)
Topics
- Artificial Intelligence and Controllability
- AI Regulation in Europe and the USA
- Security Risks and Misuse Potential
- Race Between USA and China
- AI Agents and Mass Market Adoption
Detailed Summary
The ChatGPT Surprise and the Trust Problem
In November 2022, OpenAI released ChatGPT unexpectedly; even Toner, then a board member, learned of the launch via Twitter. The episode illustrates a fundamental problem: intense competitive pressure forces AI firms into rapid decisions that push safety concerns into the background. Toner emphasizes that the release triggered a "race to the bottom" as competitors such as Google accelerated their own product launches.
The board's decision to fire Sam Altman in November 2023 was based on a loss of trust: Toner and the board could not reliably assess the information Altman provided them, which made effective nonprofit governance impossible. Altman was nevertheless quickly reinstated under pressure from investors and employees.
The Core Risks of AI
Toner identifies several critical risks:
Loss of Control over Superintelligent AI: If AI systems actually achieve superhuman intelligence (possibly within 5–20 years), control could be lost. The more realistic scenario, however, is that AI gradually gains power over critical institutions (companies, schools, governments, militaries) without developers truly understanding or controlling the systems.
Emotional Dependence: Increasingly, people treat AI chatbots as social companions. This carries psychosocial risks, including documented suicide cases among children.
Misuse Potential: Experts in AI firms fear applications for biological weapon development or hacking.
The Black Box Problem: AI systems are highly complex mathematical objects, statistics at an extreme scale. Even leading AI researchers do not fully understand how these systems work, which calls genuine control into question.
Opportunities: Where AI Already Benefits Today
Toner is not pessimistic. Three concrete opportunities:
Education & Access: In countries with weak educational systems, AI already enables access to learning materials that people previously lacked.
Autonomous Vehicles: Self-driving cars already operate in several US cities, and the data shows they are significantly safer than human drivers. With 30,000–40,000 annual traffic deaths in the USA, AI could significantly reduce this number.
Software Development: AI lets people build apps and software that would otherwise be out of their reach.
AI Firm Motivations
Toner distinguishes between profit goals and ideological motives. Her observation: researchers and developers at top firms (OpenAI, Google, Anthropic) believe they are working at the threshold of a historic transformation. They want to be there when "great things are created," driven less by pure profit-seeking than by fascination with a transformative technology.
Regulation: The Difficult Middle Ground
AI is not like aviation: it is a general-purpose technology with countless applications. Toner nevertheless advocates for regulatory intervention:
- Transparency: AI firms should disclose how they make decisions and what risks they identify.
- Independent Audits: Third parties must have access to systems to review them.
- Differentiated Regulation: Different rules depending on the application area (healthcare, education, etc.).
The EU regulates more strictly but may lose ground on innovation. Toner counters that well-designed regulation can foster innovation: customers gain trust, and markets grow.
USA vs. China in the AI Race
The USA currently leads in most AI areas, but it depends on the metric: in pure innovation (basic research) the USA has the advantage, while in application and diffusion the picture is unclear. China regulates effectively toward its own objectives (censorship, control), but this is not a model for Western democracies.
What's Coming in 2026?
Toner predicts that AI agents will become mainstream: autonomous programs that actually accomplish tasks (organizing notes, managing calendars) rather than just chatting. A caveat: AI agents have been discussed since 2023, and capabilities tend to arrive sooner than expected while reliable usefulness takes longer.
Career Advice for an Uncertain Future
Toner does not advise young people to choose a particular "AI-safe" career because nobody knows the future. Instead, they should develop the following skills:
- Use AI effectively
- Identify and formulate important problems
- Pursue manual or highly social work (where AI is weaker)
- Above all: Deal with uncertainty and remain open-minded
Core Messages
Competitive pressure drowns out safety: The rushed ChatGPT launch in 2022 triggered a race in which safety concerns are systematically deprioritized.
Developers do not understand their systems: AI models are mathematically so complex that even their creators do not fully grasp how they function, a massive control risk.
Superintelligence is not the main problem: More critical is that AI gradually gains control over critical institutions without being truly understood or controlled.
Psychosocial risks are growing: Increasing emotional dependence on AI chatbots has been linked to documented suicide cases.
OpenAI's leadership lacked transparency: Sam Altman kept the board and the public in the dark about key decisions, a breach of trust that led to his firing.
Regulation is necessary but must be differentiated: Airline-style regulation does not fit; what is needed is transparency, independent audits, and sector-specific rules.
Autonomous vehicles are a model of opportunity: Already safer than humans; could prevent tens of thousands of deaths annually.
USA leads, but China is catching up: The USA-China AI race is differentiated; the USA is ahead in innovation, application remains unclear.
AI agents may arrive in 2026: After years of discussion, Toner expects them to become reliable enough for mass adoption.
Future careers are unpredictable: Rather than betting on a specialized "AI-safe" career, build flexibility, AI competence, problem identification, and manual or social skills.
Stakeholders & Those Affected
| Who is affected? | Who benefits? | Who loses? |
|---|---|---|
| General Public | Access to education, health, safe mobility | Psychosocial dependencies; data privacy; job market dislocations |
| AI Firms & Investors | Massive capital returns on successful scaling | Potential regulatory pressure; loss of trust in case of accidents |
| Governments & Military | Efficiency gains, strategic advantage | Loss of control over critical infrastructure; dependence on private firms |
| Developers & Researchers | Participation in historic transformation | Ethical dilemmas with misunderstood systems |
| Children & Youth | Educational access, creative tools | Emotional dependence, psychological harm, job market uncertainty |
| Countries Without Tech Sector | Potential skill & knowledge access | Technological dependence, digital colonization |
Opportunities & Risks
| Opportunities | Risks |
|---|---|
| Autonomous vehicles reduce deaths by thousands per year | Uncontrolled AI gains power over institutions |
| Educational access in underserved regions | Societal dependence on misunderstood systems |
| Software development democratized; apps for everyone possible | Misuse: biological weapons, hacking, disinformation |
| AI agents save time on routine tasks | Psychosocial harm through emotional bot dependence |
| Medical diagnostics accelerated; forecasts improve | Job market dislocations without social safety nets |
| Scientific research accelerated | Manipulated media, disinformation at scale |
| Problem identification and innovation acceleration | Concentration of power among few tech giants |
Actionable Relevance
For Decision-Makers, Regulators, and Citizens:
Introduce Transparency Requirements: AI firms must disclose how systems function, what risks have been identified, and how these are mitigated.
Independent Audit Regimes: Establish third parties with access to systems for external review.
Differentiated, Sector-Specific Regulation: Not a single law, but rules for healthcare, education, military, etc.
International Coordination: AI control requires global standards; USA-China divergence amplifies risks.
Psychosocial Protection Measures: Monitoring of AI chat dependencies, especially among minors.
Labor Market Preparation: Retraining and further education in skills resistant to AI automation (manual, social, creative).
Monitor Financial Stability: Massively funded AI firms are not yet profitable; if investments fail, markets could crash.
Incentivize Safety-First Development: Regulation should reward safety research but not eliminate competitive pressure entirely.
Quality Assurance & Fact-Checking
- [x] Central claims and figures verified
- [x] Unconfirmed data marked with ⚠️ (see below)
- [x] Web research for current data conducted
- [x] Bias or political one-sidedness flagged
Notes:
- ⚠️ USA Road Death Statistics (30,000–40,000/year): Correct for USA; Swiss figures 2024 approx. 200–250 deaths/year (for comparison).
- ⚠️ OpenAI Board Removal Sam Altman: Source text correct; historically confirmed (November 2023).
- ⚠️ AI Agent Mass Market Adoption 2026: Toner's forecast; realization depends on technical breakthroughs—not yet confirmed.
- ⚠️ Superintelligence Timeframe (5–20 years): Academic discussion; no consensus; highly speculative.
Bias Check: The interview is academically critical; Toner represents a safety-first position rather than a tech-enthusiast perspective. A pro-Western slant is evident in the USA-China comparison, though it is openly acknowledged.