Executive Summary

In the AI Podcast, Marie Kilg and Gregor Schmalzried discuss the architecture of a personal AI system as a "second brain". The problem with current chatbots is that they don't learn continuously and forget context between sessions. The solution consists of three components: a powerful language model (e.g. Claude Opus, GPT-4.5), a system infrastructure with skills that store working methods, and a separate knowledge repository such as Notion or Obsidian. Users can thus permanently embed their individual requirements and workflows in the AI without having to re-enter context with each query.

People

  • Marie Kilg (Host, AI Podcast)
  • Gregor Schmalzried (Host, AI Podcast)

Topics

  • Language models and continuous learning
  • Prompt engineering and skill architecture
  • Personal knowledge management
  • Data privacy and local alternatives

Clarus Lead

The central insight: large language models are universally educated generalists, which is precisely what makes them powerful and, at the same time, unsuited to highly personalized tasks. For power users, a "second brain" is no longer optional but necessary to use AI productively at all. The key lies in deliberately providing context rather than re-teaching the AI with every question. This opens up practical applications, from automated podcast preparation to workflow optimization, that were previously impossible.


Detailed Summary

The Memory Problem and Existing Solutions

Language models are trained and then "frozen" – they don't update like human intelligence. Memory functions in commercial chatbots work like cheat sheets but are rigid and context-insensitive: they retrieve the wrong information at the wrong time. Custom instructions and predefined assistants (Custom GPTs, Gems) are too rigid for complex, adaptive tasks. The long-form chat model ("just leave everything in one chat") leads to context overload, higher costs, and worse performance.

The Three-Part Architecture

The foundation is a strong base model (Claude Opus, GPT-4.5, Gemini 3.1). These models offer universal competence; the specific choice is secondary. Around the model, a system infrastructure is built: Claude Projects, LangDoc, Microsoft Copilot, or Notion. Within this infrastructure, skills are defined: small, reusable instructions (e.g. "write in my style", "research the current status", "structure in podcast format"). Skills are modular and combinable; the model itself recognizes which skills are relevant to a task.
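The skill mechanism described above can be sketched in a few lines. This is a toy illustration, not the podcast's actual implementation: the skill names, keyword sets, and the simple keyword-matching heuristic are assumptions; real platforms let the model itself decide which skills apply.

```python
# Toy sketch of modular "skills": small, reusable instruction snippets
# that are combined per task. All names and the keyword heuristic are
# illustrative assumptions.

SKILLS = {
    "write_in_my_style": {
        "keywords": {"draft", "write", "text"},
        "instruction": "Write in short sentences, active voice, no jargon.",
    },
    "research_current_status": {
        "keywords": {"research", "trends", "news"},
        "instruction": "Summarize the current state of the topic with sources.",
    },
    "podcast_format": {
        "keywords": {"podcast", "episode", "structure"},
        "instruction": "Structure output as: hook, three segments, outro.",
    },
}

def select_skills(task: str) -> list[str]:
    """Pick the skills whose keywords appear in the task description."""
    words = set(task.lower().split())
    return [name for name, s in SKILLS.items() if s["keywords"] & words]

def build_prompt(task: str) -> str:
    """Compose a prompt from only the relevant skills plus the task."""
    parts = [SKILLS[name]["instruction"] for name in select_skills(task)]
    return "\n".join(parts + [f"Task: {task}"])
```

Because each skill is a self-contained snippet, adding or removing one never requires rewriting the others, which is the modularity the episode emphasizes.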

The third ingredient is a separate knowledge repository – Google Drive, Notion, Obsidian, or Markdown-based systems. This stores context data that doesn't belong in the AI system: personal notes, project documentation, historical data. The AI accesses this repository when needed, but doesn't permanently load everything into its context.
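The on-demand access pattern can be sketched as follows. This is a minimal illustration under stated assumptions: notes live as Markdown files on disk and a plain substring search stands in for retrieval; real systems would use the platform's built-in search or embeddings.

```python
# Minimal sketch of on-demand retrieval from a Markdown knowledge
# repository: only the few notes relevant to the query enter the
# model's context, never the whole repository. The substring search
# is a placeholder assumption.

from pathlib import Path

def retrieve_context(repo: Path, query: str, max_notes: int = 3) -> list[str]:
    """Return at most `max_notes` notes that mention the query term."""
    hits = []
    for note in sorted(repo.glob("**/*.md")):
        text = note.read_text(encoding="utf-8")
        if query.lower() in text.lower():
            hits.append(f"## {note.name}\n{text}")
        if len(hits) == max_notes:
            break
    return hits
```

The `max_notes` cap reflects the episode's warning against context overload: more context is not better context.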

Practical Implementation and Data Privacy

A concrete example: podcast preparation. The user defines skills for "research on current trends", "comparison with previous episodes", and "structuring according to podcast format". The AI automatically calls upon these skills, researches in a knowledge database, structures results – without the user having to explain what they're doing each time.

For data privacy, users can employ a "two-brain strategy": one half remains private (diaries, personal thoughts), the other is AI-accessible. Alternatively, local models can be run on personal hardware. Anyone holding sensitive address books or GDPR-relevant data should flag those files and exclude them from cloud-based AI.
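One way to enforce such a split is a per-note flag. The sketch below assumes a `private: true` field in YAML frontmatter as the marker; the flag name and frontmatter convention are illustrative assumptions, not a standard.

```python
# Hedged sketch of the "two-brain" split: notes carrying a
# `private: true` frontmatter flag never reach a cloud-based AI.
# The flag name and frontmatter format are assumptions.

from pathlib import Path

def is_private(text: str) -> bool:
    """Check for a simple YAML-frontmatter flag at the top of a note."""
    if not text.startswith("---"):
        return False
    header = text.split("---", 2)[1]
    return "private: true" in header

def shareable_notes(repo: Path) -> list[Path]:
    """Only notes without the private flag may be sent to the cloud."""
    return [p for p in sorted(repo.glob("**/*.md"))
            if not is_private(p.read_text(encoding="utf-8"))]
```

Keeping the filter at the file level means the decision is made once, when a note is written, rather than at every AI interaction.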

Growth Without Overwhelm

The system should be built modularly and iteratively: start with one recurring task, write a skill for it, and add more skills later. Too many skills or overly ambitious automation at the start lead to loss of motivation. The goal is to structure recurring, tedious tasks once and then let them run automatically, much like packing for a vacation: you think through once which essentials you need instead of packing everything each time.


Core Statements

  • Memory is not learning: Existing chatbot memory functions are static and context-blind; a true "second brain" requires modular skills and separate knowledge repositories.
  • Context deliberately, not broadly: The AI should only receive information it needs for the current task – not the user's complete dataset.
  • Modular architecture scales: A stable system with base model, system infrastructure, and knowledge repository enables portability between AI platforms and remains robust when better models emerge.

Further News

Aleph Alpha taken over by Cohere: German AI company Aleph Alpha is being acquired by Canadian company Cohere. The deal is presented as a "merger" but is effectively the acquisition of a struggling company by one ten times its size. Cohere thereby aims for better access to the German market and for digital sovereignty independent of the USA.

New image generation model from OpenAI: OpenAI has released a new image model that clearly outperforms Google Gemini and other competitors. In a test, a request for a "lo-fi band photo of a Swedish indie band" produced aesthetically authentic, pared-down results with OpenAI's model, while Gemini filled the image with visual elements and appeared artificially overloaded.


Critical Questions

  1. Data Quality & Validation: How does a user ensure that skills function correctly and the AI retrieves the right information from the knowledge repository – especially as data grows over time?

  2. Conflicts of Interest & Incentives: If a user centralizes their entire knowledge management in a commercial cloud platform (Claude Projects, Notion), dependency arises. Are local, decentralized alternatives (Obsidian, LangDoc) economically viable for average users?

  3. Causality & Alternative Hypotheses: Is the lack of continuous language model updates a technical limitation or deliberately imposed design? Would on-device learning be more privacy-friendly than central knowledge repositories?

  4. Feasibility & Risks: How much time and technical understanding does an average user need to build a functional second brain – and when does system maintenance itself become overhead?

  5. Security & Contamination: When multiple skills interact, what prevents incorrect information from one skill from "infecting" others, or the AI from misusing skills in combination?

  6. Portability & Vendor Lock-in: Can skills and knowledge repositories truly be transferred across platforms, or do hidden dependencies on proprietary formats emerge?


References

Primary Source: Kilg, Marie & Schmalzried, Gregor. "How to Build the Perfect AI System." The AI Podcast. ARD/Neuland. April 2026. https://media.neuland.br.de/file/2117682/c/feed/wie-baue-ich-mir-das-perfekte-ki-system.mp3

Supplementary Sources (mentioned):

  • Anthropic Claude Projects & Skills Documentation
  • LangDoc (German AI tool ecosystem)
  • Obsidian (local Markdown management)
  • Notion (cloud knowledge management)

Verification Status: ✓ 2026-04-28


This text was created with the support of an AI model. Editorial responsibility: clarus.news | Fact-checking: 2026-04-28