Summary

Sarah Rojewski, AI expert and education activist, warns of a generational opportunity gap between Europe and the USA in AI training. While US schools are already implementing AI-First models where AI tutors deliver instructional content, German schools have stalled in this transformation. Rojewski advocates for a realistic model of European digital sovereignty – not as radical decoupling, but as strategic supplementation of American platforms through European overlays and specializations. The central message: courage and individual engagement are necessary to overcome the technological lag.

Persons

Topics

  • AI in education (school models, personalization)
  • European digital sovereignty
  • AI security and data protection
  • Labor market and job anxiety
  • Geopolitical power shifts

Clarus Lead

The USA is already implementing AI-powered school models in which specialized AI tutors deliver instructional content – for example in Arizona, Florida and other states. These schools are achieving outstanding academic results and setting a standard that Europe has yet to reach. Sarah Rojewski argues that Germany is "letting an entire generation down" by still debating AI use in schools while American students are already benefiting from personalized learning systems that adapt to individual learning windows and strengths.

The central dilemma: European digital sovereignty does not require radical independence from US platforms, but rather intelligent hybrid models with European overlays. Rojewski sees realistic opportunities with startups like Langdoc and NNM – if education and political momentum follow.

Detailed Summary

Rojewski looks back on a ten-year AI career that began in 2016 with the first chatbot project at Telefónica Deutschland. Her insight at the time: too much content, too little targeted user interaction. The "Lisa" project showed that intelligent automation of customer services does not replace people – but extends their capabilities. This learning curve accelerated with the rise of Transformer models and ChatGPT, which made manual intent mapping obsolete.

In parallel with the technical development, Rojewski noticed something psychological: user behavior does not automatically adapt to new technologies. When Telefónica switched from text chatbots to voice assistants, customers did not speak in full sentences – years of button-based interaction had shaped different expectations. This observation led to her current mission: understanding AI as a bridge technology, not as an isolated tool.

On Instagram and YouTube, Rojewski conveys both practical tips and social context – a deliberate balance. She criticizes superficial "Prompt-Engineering-for-a-million-euros" videos and instead focuses on educating about AI security risks, geopolitical dependencies, and the psychological aspects of human-machine interaction.

Key Statements

  • Generational Risk: Without AI integration in European schools, an opportunity gap emerges compared to US students who are already benefiting from personalized AI tutors.

  • Realistic Sovereignty: Complete digital independence from the USA is unrealistic; instead, European special-layer and hybrid models are needed.

  • Security as Foundation: 90% of LLM security incidents result from lack of knowledge (free versions, unprotected data). Individual responsibility is essential.

  • Mindset over Technology: The greatest hurdle is not technical, but cultural – there is a lack of courage at leadership level to defend AI-First strategies.

  • Learning Enthusiasm Instead of Fear: The emotional revaluation of AI from threat to opportunity for shaping the future is central to individual and societal transformation.


Critical Questions

Evidence and Data Quality:

  1. Rojewski refers to US schools whose students belong to the "top 0.2 percent" of the country – is this comparison based on standardized tests, and are the schools representative of the US average, or are they selective?
  2. What long-term studies show that AI tutors in two-year implementations lead to better outcomes than hybrid models?

Conflicts of Interest and Incentives:

  3. Rojewski works as a self-employed AI strategist and earns from educational content – could this unconsciously lead to overemphasizing urgency and fear to increase her visibility?
  4. Who funds the US school models, and what commercial interests lie behind the integration of OpenAI into K-12 programs?

Causality and Alternatives:

  5. Rojewski argues that Germany's political leadership is "not progressive" – but is this really the cause of the lag, or are regulatory caution (GDPR), skills shortages and infrastructure problems equally relevant?
  6. If local open-source models are freely usable (LM Studio, Mistral), why is this scenario not positioned more strongly as a European alternative?

Feasibility and Side Effects:

  7. An AI-First school model could teach students in a personalized way, but risks isolation: how is it ensured that social class cohesion and peer-learning dynamics do not suffer?
  8. If AI optimizes individual learning paths, is there not a risk that students will be sorted into "intelligence bubbles," which in the long term could reinforce societal segregation?


Further News

  • AI Security in the Enterprise Context: 90% of LLM security incidents result from users relying on free versions without data protection agreements. Rojewski recommends: no financial data, no health data, no personal data in free tools.
  • Grok and Sexualized AI Content: Elon Musk's Grok platform temporarily enabled the generation of AI nude images, which triggered massive criticism and a paywall solution. The scandal shows: business models and societal risks are often deliberately decoupled from each other.
  • Women and AI Use: Statistics show that women use AI tools less than men, often out of the feeling that they need to "properly learn the tool" before using it – a psychological barrier issue.

Source Directory

Primary Source: [AI to the DNA Podcast – Episode with Sarah Rojewski] – https://media.transistor.fm/b6adbae0/d6ef5990.mp3

Supplementary Sources:

  1. NotebookLM (Google) – multiformat content processing
  2. LM Studio – local models, privacy-friendly
  3. Anthropic, OpenAI, Google – Model comparisons and documentation
  4. Arizona AI School Model – (Research obligation: name and link required)
  5. GDPR and European AI Act – regulatory context

Verification Status: ✓ 2026-02-14


This text was created with the support of an AI model. Editorial Responsibility: clarus.news | Fact-Check: 2026-02-14