Summary

The economist Zoë Hitzig, who holds a PhD, has publicly left OpenAI and criticizes the introduction of advertising in ChatGPT as both a failure of imagination and a fundamental breach of corporate responsibility. She warns of a dystopian future in which a few corporations control personalized data on everyone and maximize usage time through aggressive incentive mechanisms. Hitzig argues that AI systems function like "digital confessionals" and therefore require stricter regulation and democratic governance structures, similar to the German model of employee co-determination.

People

  • Zoë Hitzig (economist, formerly OpenAI)
  • Sam Altman (CEO, OpenAI)
  • Roland Lindner (interviewer, Frankfurter Allgemeine Zeitung)

Topics

  • AI regulation and corporate responsibility
  • Data protection and surveillance capitalism
  • Monetization models for AI systems
  • Psychological consequences of chatbot use

Clarus Lead

Hitzig's central criticism: by introducing advertising in ChatGPT, OpenAI has betrayed its founding principles. For decision-makers in tech and politics this matters because it shows how quickly idealistic missions collapse under market pressure. The core problem: AI systems have access to far more intimate personal data than social media platforms, which makes advertising-based monetization considerably more dangerous.

Detailed Summary

Hitzig, who joined OpenAI in 2024, expected the company to correct the mistakes of Facebook and Instagram. Instead, she observed OpenAI drifting from its non-profit ideal toward an extreme growth focus. The turning point: ChatGPT's overwhelming success (900 million users) created "massive temptation" among executives. Additionally, intensifying global competition – particularly with China – increased pressure for profitability.

Hitzig warns of two specific dangers: First, users entrust chatbots with their "deepest fears" and private thoughts, far more than they share on social media. Second, combined with this data access, an advertising-based business model creates incentives to maximize usage time. Early signals already point to chatbot dependency, mental health problems, and documented cases of "chatbot psychosis."

As an alternative, Hitzig proposes cross-subsidization: paying users explicitly subsidize free users. If advertising is unavoidable, she calls for democratic control structures – inspired by the German co-determination model, which involves stakeholders in corporate decisions. Users could hold seats on supervisory boards and participate in democratic processes deciding how their data is used.

Key Statements

  • OpenAI repeated the mistakes of Facebook and Instagram despite its different structure as a "Public Benefit Corporation"
  • Chatbots are more data-invasive than social media and therefore require stronger protective structures
  • Advertising monetization creates systemic incentives for overuse and psychological dependency
  • Creative alternatives exist (cross-subsidization, user co-determination) but were not pursued consistently
  • AI regulation should include independent audits, age restrictions, and governance reforms
  • Economic concentration (few corporations + personal data of everyone) is the central dystopian scenario

Critical Questions

  1. Evidence/Data Quality: Hitzig mentions "chatbot psychosis" and suicide cases – do reliable epidemiological studies exist or are these anecdotal signals requiring further validation?

  2. Conflicts of Interest: Hitzig positions herself as a critic, but her prominence may open book and consulting opportunities. How independent are her demands for democratic co-determination from her own career interests?

  3. Causality: Can it be proven that advertising monetization (not other factors like missing age restrictions or poor media literacy) is the primary driver of mental health disorders in chatbot users?

  4. Feasibility: The German co-determination model works with clear employer-employee roles. How would democratic user governance practically function with 900 million decentralized users with heterogeneous interests?

  5. Alternatives: Hitzig criticizes Sam Altman for a loss of credibility. Yet are cross-subsidization models economically sustainable while OpenAI is investing billions in infrastructure?

  6. Counter-Argument: Could Anthropic's "no-ads promise" only be sustainable as long as the company remains small, and might it force similar compromises upon scaling, as happened with OpenAI?


References

Primary Source: Zoë Hitzig in interview with Roland Lindner – "Chatbots are something like a confessional" | Frankfurter Allgemeine Zeitung – https://www.faz.net/aktuell/wirtschaft/unternehmen/zoe-hitzig-ehemals-openai-chatbots-sind-wie-ein-beichtstuhl-accg-110843252.html

Supplementary Sources:

  1. Zoë Hitzig: Guest essay "OpenAI is making the same mistakes as Facebook" – New York Times (cited in the interview; published roughly two weeks earlier)
  2. Hitzig, Z. (2024): Doctoral thesis on data protection in the digital world – Harvard University

Verification Status: ✓ 27.02.2026


This text was created with the assistance of an AI model. Editorial responsibility: clarus.news | Fact-check: 27.02.2026