ChatGPT is not a search engine – AI expert warns against overestimation

Publication date: 20.11.2025

Article overview

Author: Katharina Zweig (Computer Science Professor, TU Kaiserslautern)
Interview conducted by: Patrick Schlereth
Published: FAZ, 20.11.2025, 09:01
Estimated reading time: 10 minutes
Format: FAZ+ Interview (paywall)

Note: The provided article text is incomplete – only the introduction and the first question are available

Summary – Key points

  • Core thesis: AI language models like ChatGPT should not be used as search engines
  • Main problem: AI cannot distinguish between what it really "knows" and what it doesn't
  • Warning: Zweig sees a major AI bubble emerging
  • Limitations: Current language models have fundamental constraints
  • Usage: Users handle AI tools too carelessly
  • Self-reflection: AI lacks the Socratic insight "I know that I know nothing" – it has no awareness of the limits of its own knowledge
  • Imitation: AI imitates human responses without genuine understanding

[⚠️ Complete analysis still to be verified due to incomplete article text]

Opportunities & Risks

Opportunities:

  • Raising awareness of responsible AI usage
  • Critical discussion about AI limitations can lead to better regulation
  • Research toward "more honest" AI systems

Risks:

  • Users overestimating current AI capabilities
  • Emerging investment bubble in the AI sector
  • Poor decisions due to misplaced trust in unreliable AI responses

Looking ahead

Short-term (1 year): Growing criticism of current language models, initial regulatory approaches

Medium-term (5 years): Possible correction of AI euphoria, development of more "self-reflective" AI systems

Long-term (10-20 years): According to the introduction, Zweig "has an idea of what comes next" – details in the full interview

Fact check

Well documented:

  • Zweig's expertise as computer science professor at TU Kaiserslautern
  • Language models' fundamental difficulty in distinguishing verified knowledge from speculation

To be verified:

  • [⚠️ Still to be verified] Concrete examples of AI bubble formation
  • [⚠️ Still to be verified] Zweig's specific future predictions
  • [⚠️ Still to be verified] Detailed arguments about language model limitations

Brief conclusion

Computer scientist Katharina Zweig warns against dangerous overestimation of current AI technology and careless handling of language models like ChatGPT. Her main criticism: These tools cannot distinguish between verified knowledge and speculation and should therefore not be used as reliable information sources. The warning about an AI bubble comes at a time when euphoria around generative AI appears to be reaching its peak.

Three critical questions

  1. Transparency problem: If even AI experts warn against careless use, why don't tech companies communicate the limitations of their systems more clearly?

  2. Responsibility for misinformation: Who bears responsibility when people make wrong decisions based on unreliable AI responses – especially in critical areas like health or finance?

  3. Innovation vs. hype: Does the current AI bubble hinder genuine technological innovation by channeling resources into overestimated technologies instead of fundamental research?