Summary
Zoë Hitzig, an economist with a PhD, has left OpenAI and is publicly campaigning against the introduction of advertising on ChatGPT. She warns of a "digital dystopia" in which corporations exploit users' private thoughts to optimize advertising. Hitzig sees parallels to the mistakes of the early social media era and calls for alternative business models and stricter AI regulation following the German example.
Persons
- Zoë Hitzig (Economist, former OpenAI researcher)
- Sam Altman (CEO OpenAI)
Topics
- AI ethics and corporate responsibility
- Data usage and user rights
- Business models in the AI industry
- Regulatory questions
Clarus Lead
An economist who joined OpenAI in 2024 hoping to help the company avoid the mistakes of Facebook has now left it. Her reason: OpenAI plans to introduce advertising on ChatGPT, a move Hitzig regards as an admission of conceptual bankruptcy. The conflict reveals fundamental tensions in the AI industry between growth pressure and ethical aspirations.
Detailed Summary
Hitzig emphasizes that chatbots differ fundamentally from classical social media platforms. Users do not merely share their actions but entrust the systems with their most intimate thoughts and fears, as if to a "confessional." When this is combined with advertising incentives that reward maximizing usage time, an unprecedented potential for manipulation emerges. She already observes early warning signs: addictive behavior, mental health disorders, even diagnosed "chatbot psychoses," as well as cases involving suicidal ideation.
Hitzig's criticism is directed not primarily at individual people but at flawed incentive structures. OpenAI has evolved from its safety-oriented founding vision into "the fastest-growing startup in history." ChatGPT's success, with 900 million users, has created "massive temptations." In addition, international competition, particularly with China, is putting pressure on the industry.
As alternatives, Hitzig proposes cross-subsidization: premium subscribers pay higher fees to enable free access for others. She also draws inspiration from Germany's codetermination model, which gives employees a say in corporate decisions. Users could be represented on supervisory boards and decide democratically on data protection policies. However, she concedes that with 900 million users and a planned IPO, OpenAI is unlikely to accept such models.
Hitzig sees regulation as essential: independent safety audits, age restrictions for minors (similar to new social media laws), and specialized AI models for users under 16. Her greatest concern remains the concentration of economic power among a few companies that have access to the private data of all people.
Key Statements
- Advertising in chatbots is qualitatively more dangerous than in social media because AI systems have access to thoughts and fears rather than just behavioral data
- Incentive structures push well-intentioned employees toward harmful decisions; individuals are not solely to blame
- Alternative business models are possible but require creative solutions and genuine user participation
- Regulation following the German example with independent audits and age restrictions is necessary
- Two years after ChatGPT's breakthrough, psychological harms are already evident; Hitzig fears them especially for children
Critical Questions
Evidence: What specific studies or data does Hitzig base her warning about "chatbot psychoses" and suicidal ideation on? Are these isolated cases or systematic findings?
Evidence: How representative are the "early signals" of addictive behavior? Are there comparable figures for social media addiction rates?
Conflicts of Interest: Could Hitzig's public criticism also serve as a repositioning strategy for her next career move, for example to increase her academic or consulting visibility?
Causality: Is there evidence that introducing advertising actually causes harm, or are the observed mental health problems correlations without a causal link?
Alternatives: Why have cross-subsidization models not been widely adopted in the broader social media industry—is it only due to unwillingness or also due to economic infeasibility?
Feasibility: How realistic is it, technically and administratively, to implement user supervisory boards for a globally distributed base of 900 million users?
Feasibility: Wouldn't age restrictions on AI systems simply be circumvented, without addressing the underlying risks?
Risks: Could overregulation of ChatGPT-like systems weaken innovation capacity and thereby give China a competitive advantage?
Source Directory
Primary Source: [Ex-employee of OpenAI: "Chatbots are something like a confessional"] – https://www.faz.net/aktuell/wirtschaft/unternehmen/zoe-hitzig-ehemals-openai-chatbots-sind-wie-ein-beichtstuhl-accg-110843252.html (FAZ, 27.02.2026, Roland Lindner)
Verification Status: ✓ 27.02.2026
This text was created with the support of an AI model. Editorial Responsibility: clarus.news | Fact-Checking: 27.02.2026