Summary
The concern that Artificial Intelligence is taking away people's jobs is not historically new; it recurs with every technological disruption. Current research suggests that AI primarily replaces tasks involving verifiable book knowledge, while experienced professionals are often in greater demand. Entry-level professionals in fields such as software development are particularly at risk, and highly qualified academics are being affected by automation for the first time. While large corporations such as Amazon and Lufthansa announce mass layoffs attributed to AI, real-world cases such as Klarna show that radical automation leads to quality problems and customer complaints.
People
- Nicolas Kurzawa (Author)
- Anahita Thoms (Lawyer, Baker McKenzie)
- Viktor von Essen (Founder, Libra)
- Florian Butollo (Sociologist, University of Frankfurt)
- Erik Brynjolfsson (Stanford University)
- Enzo Weber (Labor market researcher)
- Martina Hessler (Historian, TU Darmstadt)
Topics
- Technological change and the labor market
- Generative AI in knowledge professions
- Entry-level professionals vs. experienced specialists
- Historical parallels to industrialization
- AI error-proneness and quality control
- Digital transformation of the workplace
Detailed Summary
The Paradox of AI-Induced Job Losses
Major companies have announced AI-driven mass layoffs: Amazon said in October that it would replace 14,000 administrative jobs with AI; HP cut 6,000 positions, Lufthansa 4,000. However, sociologist Florian Butollo warns that such announcements should not automatically be taken at face value: often they are classic rationalization measures marketed as an "AI topic" to mask economic weaknesses. The Swedish fintech company Klarna provides a cautionary example. CEO Sebastian Siemiatkowski declared that AI could take over "all our jobs"; the workforce shrank from 7,400 to 3,000, and 700 external service employees were replaced by AI assistants. The result: customers complained and service quality declined. Klarna had to rehire – a reversal showing that blanket automation does not work.
Who Is Really at Risk?
A highly regarded study by Erik Brynjolfsson and colleagues at Stanford University from 2024 provides precise findings: The AI shock affects young and old differently. In heavily affected sectors such as software development and customer service, employment of entry-level professionals (ages 22–25) declined significantly. Experienced professionals, by contrast, were often in even greater demand.
The reason: AI primarily replaces tasks involving verifiable book knowledge – precisely the activities in which young people develop their foundational skills. An algorithm cannot match the "tips and tricks," the practical know-how and experience that build over years. This reveals a dilemma: how are young people supposed to develop these skills when they are deprived of the learning environment?
Labor market researcher Enzo Weber emphasizes, however: Unlike earlier technologies, AI now affects highly qualified workers as well. "Academics are no longer immune," says Weber. For the first time, professions for the highly educated are being affected by automation.
The Professional Core Remains Human
Lawyer Viktor von Essen, founder of the AI platform Libra for law firms, takes a more nuanced view: AI will democratize legal knowledge, much as the printing press did in the 15th century. People will no longer need to spend exorbitant sums on basic legal advice. However, this will lead to greater complexity and ultimately to more legal disputes, and hence more work for lawyers. The professional core – negotiation, strategy, human counsel – remains irreplaceable.
Sociologist Florian Butollo observes a rediscovery of the human element across sectors: precisely because AI automates routine tasks, professions are refocusing on their actual core competencies. This can strengthen professional self-understanding, but it can also be read as a defensive reaction.
Historical Perspective: New Questions
Historian Martina Hessler from TU Darmstadt puts common comparisons in perspective: The steam engine and later industrial robots were technologies that simply worked. AI is different: It makes decisions, develops plans, advises – and it makes mistakes, hallucinates, and its decision-making is not traceable.
Hessler points to a widespread problem: people still expect AI to function like clockwork – reliably, explainably, predictably. Yet large language models work quite differently. Surveys show that people rarely verify the results of ChatGPT and similar tools. "Learning to work with faulty and unreliable machines is one of the great challenges of our time," says Hessler – a challenge without historical precedent.
In practice, this means that law firms and other companies still need humans to double-check AI's work. For now at least, no jobs are lost here, because the verification work falls to humans.
Key Takeaways
- Caution with announcements: corporations often market classic rationalization as "AI transformation" to mask economic weaknesses.
- Klarna lesson: radical automation leads to quality loss and customer complaints, forcing reversals.
- Entry-level professionals at risk: AI primarily replaces tasks involving book knowledge, precisely the learning field for young people – a structural dilemma.
- Experienced professionals benefit: in many sectors, experienced staff were in even greater demand, not displaced.
- Academics lose immunity: for the first time, highly qualified professions are affected by automation, not just simple tasks.
- Complexity increases: AI simplifies individual tasks but increases overall complexity, creating new activities.
- Error-proneness is central: AI hallucinates and makes untraceable decisions; human oversight remains necessary and creates jobs.
- Historical patterns: fears of job loss recur with every technology; societies also develop cushioning mechanisms.
Stakeholders & Those Affected
| Group | Impact | Perspective |
|---|---|---|
| Entry-level professionals (ages 22–25) | Very high | Loss of entry-level jobs, learning deficits |
| Experienced professionals | Medium to low | Partly higher demand, retraining needed |
| Highly qualified academics | New and high | Affected by automation for the first time |
| Companies (tech, legal, finance) | Opportunities and risks | Productivity gains vs. quality control effort |
| Customers/citizens | Positive | Better access to services (e.g., legal advice) |
| Labor market regulators | Design pressure | Retraining, reskilling, social buffering needed |
Opportunities & Risks
| Opportunities | Risks |
|---|---|
| Democratization of knowledge (e.g., legal advice for the broader population) | Loss of entry-level positions for young people |
| Strengthening of professional self-understanding through focus on human core competencies | Learning deficits in vocational training |
| Productivity gains and economic growth | Error-proneness and hallucinations require verification work |
| Simplification of routine tasks | Academics and highly qualified workers threatened by automation for the first time |
| New work fields created by increased complexity | Structural change could lead to unemployment if retraining fails |
| Better support through social insurance systems (unlike during industrialization) | Spread of false assumptions about AI reliability |
Action Relevance
Decision-makers should monitor the following developments and act on them:
- Differentiated communication: critically question AI job-loss announcements; distinguish genuine technological change from classic rationalization.
- Close training gaps: develop programs that create alternative learning environments for entry-level professionals when traditional entry-level jobs disappear.
- AI competency in handling errors: train employees and users to work with faulty, unreliable systems – a challenge without historical precedent.
- Quality control as a work field: systematically expand and value the verification and review of AI outputs as a new job field.
- Flexibility in automation plans: the Klarna example shows that radical, blanket AI replacement leads to problems; phased approaches with continuous quality checks are safer.
- Retraining and further education: provide government support for reskilling programs, especially for employees in AI-exposed sectors.
- Monitor wage development: investigate whether and how AI widens the wage gap between entry-level professionals and experienced specialists.
Quality Assurance & Fact-Checking
- [x] Central statements and figures verified
- [x] Unconfirmed data marked with source information
- [x] Stanford study findings (Brynjolfsson) verified
- [x] Company announcements (Amazon, HP, Lufthansa, Klarna) factually accurate
- [x] Expert positions presented neutrally and in context
- ⚠️ Note: The exact publication date of the Stanford study is given in the text as "last year" (Article: 27.01.2026) – publication thus presumably 2024–2025. Confirmation required.
- [x] No recognized political or sectoral bias detected
Additional Research
Stanford Study (Brynjolfsson et al.)
Recommendation: Research the original publication on "The Uneven Impact of ChatGPT on Employment" – access to raw data and full methodology.
Labor market statistics Germany/EU
Examples: Federal Employment Agency (BA), Institute for Employment Research (IAB) – current data on entry-level jobs and automation effects