Author: heise.de
Summary
Complaints to the Hessian Data Protection Commissioner doubled in 2025: roughly 3,840 cases in the previous year rose to over 6,000 complaints. The main driver is the growing use of AI chatbots such as ChatGPT, which not only explain legal problems to citizens but also draft complete complaint letters automatically. While the authority welcomes the increased data protection awareness, it faces significant overload, particularly from Schufa inquiries, which have tripled.
People
- Alexander Rossnagel (State Data Protection Commissioner Hesse)
Topics
- Artificial Intelligence and Data Protection
- Administrative Overload
- Automated Legal Enforcement
- Schufa Complaints
Clarus Lead
The Hessian Data Protection Commissioner Alexander Rossnagel recorded a doubling of submissions to over 6,000 cases. The cause is not a sudden wave of data protection violations but automated complaint generation by AI systems: users ask ChatGPT for advice, are pointed to the data protection authority, and have a formal complaint letter generated on the spot. The authority now recognizes these cases by their linguistic patterns and argumentative structures. For decision-makers in administration and regulation this poses a new challenge: a mass influx of automatically generated legal remedies threatens the effectiveness of the authorities, even though greater citizen participation is, in itself, desirable.
Detailed Summary
The doubling of complaint numbers within one year is extraordinary and is driven by a single phenomenon: AI-supported complaint generation. Rossnagel reports that chatbots actively recommend contacting the data protection authority and, on request, draft the complete complaint letter. This explains both the volume and the structural similarity of many submissions.
Paradoxically, the authority welcomes this trend in part: growing interest in data protection, and the recognition that these rights can be enforced free of charge, are positive developments. The practical strain, however, is mounting: with staffing levels unchanged, Schufa complaints alone have tripled. Even in the current year (January/February 2025) there is no sign of relief. Rossnagel implicitly warns that sheer volume could paralyze the authority's work.
Key Points
- Doubling due to AI: complaint numbers rose from 3,840 to over 6,000 – primarily because of automated chatbot recommendations
- Recognizable Patterns: authorities can identify AI-generated complaints by language and structure
- Resource Mismatch: Schufa complaints tripled, staff constant – risk of overload
- Ambivalent Balance: greater legal awareness is desirable, but automated mass complaints threaten the authority's effectiveness
Critical Questions
Data Quality: How many of the 6,000 complaints are substantively justified, and how many stem from AI templates that may overgeneralize or misunderstand the legal issues?
Conflicts of Interest: Do AI providers or the operators of these chatbots have a financial interest in keeping data protection authorities occupied to protect their own practices?
Causality and Alternatives: Is the increase really only explainable by AI recommendations, or have court rulings against Schufa (mentioned in the text) and increased awareness also had an independent effect?
Resource Demand Forecast: What additional staffing would be necessary, and are the federal states willing to fund it?
Complaint Filtering: Should authorities develop criteria to filter out obviously AI-generated mass complaints earlier in the process without losing legitimate concerns?
Systematic Bias: Does automated complaint generation favor certain topics (such as Schufa) while ignoring other data protection issues, because AI models reproduce the patterns most prevalent in their training data?
Sources
Primary Source: ChatGPT and Co: Inquiries through AI Recommendations Overwhelm Data Protectors – heise.de
Verification Status: ✓ 2025
This text was created with the support of an AI model. Editorial Responsibility: clarus.news | Fact-checking: 2025