Summary
Language models such as Gemini, ChatGPT and Claude share a fundamental deficiency: they have no true memory and start every dialogue from scratch. Users must repeatedly re-enter context, preferences and project-related information, a problem that has been known for years. At the same time, Claude Code demonstrates that note-taking functionality can already provide a partial solution.
People
- Marcel Weiss (Author, FAZ)
Topics
- Artificial Intelligence – Memory and Context Understanding
- Language Models – Architecture and Limitations
- User Experience – Efficiency and Workflow
- AI Development – Technical Challenges
Clarus Lead
The memory problem is a fundamental deficit of today's leading language models and has been known for two years. Each new conversation begins without prior knowledge, a significant disadvantage for productive workflows. For decision-makers in companies, this means a lack of personalization and constant redundancy in conveying context. The industry is working on solutions, but Claude Code already shows that simple note-taking functions can serve as a practical interim measure.
Detailed Summary
The core problem is easy to state: users of Gemini, ChatGPT or Claude run into the same frustration daily. The AI systems have no memory of previous interactions. They do not know preferred programming languages, ongoing projects, or context from conversations two weeks ago. The result is repeated explanations, much like the film classic "Groundhog Day," in which the protagonist relives the same day over and over.
This deficiency is not new. As early as February 2024, the problem was being discussed prominently. While the AI industry works on fundamental architectural solutions, practical implementations such as the note-taking function in Claude Code show that incremental memory, at least at the project level, is technically feasible. This points to a possible transition path until true persistent memory is integrated into the models themselves.
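The note-taking workaround described above can be illustrated with a minimal sketch: notes are persisted to a plain file between sessions and prepended to the next prompt, so the model regains project context without any change to its architecture. The file name `project_notes.md` and the helper functions below are illustrative assumptions, not part of any vendor's actual API.

```python
from pathlib import Path

# Hypothetical per-project notes file; Claude Code's real mechanism may differ.
NOTES_FILE = Path("project_notes.md")


def save_note(note: str, path: Path = NOTES_FILE) -> None:
    """Append a note so it survives across chat sessions."""
    with path.open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")


def build_prompt(user_message: str, path: Path = NOTES_FILE) -> str:
    """Prepend stored notes to the user's message, restoring
    project-level context at the start of a fresh conversation."""
    notes = path.read_text(encoding="utf-8") if path.exists() else ""
    if notes:
        return (
            "Project notes from earlier sessions:\n"
            f"{notes}\n"
            f"User: {user_message}"
        )
    return f"User: {user_message}"
```

Under this sketch, `save_note("Preferred language: Python")` in one session makes that preference visible to the model in the next session via `build_prompt("Refactor the parser.")`, sidestepping the statelessness of the underlying model rather than solving it.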
Key Statements
- Persistent Memory Deficit: Current language models have no true memory and start each conversation without context knowledge.
- Known Problem: The limitation has been documented for at least two years and remains unresolved.
- Workflow Inefficiency: Users must constantly re-enter project details, preferences and context.
- Partial Solutions in Use: Claude Code demonstrates that note-taking functions can provide relief in the short to medium term.
Critical Questions
Data Quality: How does the industry define "true memory" in the context of language models – and what technical metrics distinguish note-taking workarounds from true context awareness?
Conflicts of Interest: What economic incentives exist for providers to implement persistent memory, and when do costs outweigh benefits from the provider's perspective?
Causality: Is missing memory a fundamental architectural limitation of today's transformer models or a deliberate design decision, and what alternatives exist?
Feasibility and Risks: What security and data protection challenges arise when AI systems store and retrieve personalized data over the long term?
Suitability of Partial Solutions: Are note-taking features sufficient as an interim solution, or do they create false expectations about true memory functionality?
Market Pressure: Will competition between OpenAI, Anthropic and Google address this limitation faster than pure R&D logic suggests?
Source Directory
Primary Source: AI Memory: The Biggest Problem with AI: Forgetfulness – https://www.faz.net/pro/digitalwirtschaft/kuenstliche-intelligenz/wie-ki-ihre-amnesie-ueberwinden-kann-accg-200503502.html
Verification Status: ✓ 09.02.2026
This text was created with the assistance of an AI model. Editorial Responsibility: clarus.news | Fact-Check: 09.02.2026