Executive Summary

AI-generated content as such does not enjoy copyright protection, because copyright covers only human creative works. Users can, however, secure protection through creative post-processing or trademark registration. Recent court rulings show that AI providers can themselves be liable for copyright infringements in generated output, but users remain responsible for what they publish. A clear company AI policy significantly reduces these risks.

Topics

  • Copyright and AI
  • Liability risks when using AI
  • Trademark law vs. copyright
  • Court precedent cases
  • Content generation and compliance

Clarus Lead

Copyright protection for AI output remains legally contested, although a clear trend is emerging: purely AI-generated content is not protected. The crucial insight for marketing executives is that human post-processing can secure protection – but only where the creative effort is substantial. In parallel, risks arise from potential copyright infringements in the training material of AI models, for which the user, and in some cases the AI provider, can be held liable.

Detailed Summary

Basic Principle: Human Creativity as a Prerequisite for Protection

The Copyright Act protects exclusively "personal intellectual creations". A prompt alone does not meet this requirement: the user merely sets a direction, while the AI makes the final creative decisions. Tellingly, there is hardly any case law on this point – legal experts consider the question so clear-cut that lawsuits are simply not filed.

The situation is different with post-processing: if someone further processes AI output with their own creativity (for example in Photoshop), they can obtain copyright protection – but only if the editing goes "beyond everyday basics". Making colors brighter or adding the company logo is not sufficient.

Trademark Law as an Alternative

In parallel with copyright law, trademark law offers a practical route to protection: an AI-generated logo can be registered as a trademark even where no copyright protection exists. This guards against copying by competitors – however, the owner must actively use the trademark; otherwise it risks cancellation for non-use.

Liability Risks for Users

Three court rulings show how liability is distributed in practice:

  1. GEMA vs. OpenAI (Munich Regional Court): AI providers are themselves liable for copyright infringements in output, such as when generated song lyrics reproduce existing works.

  2. Voice Deepfake (Berlin Regional Court): AI imitation of a person's voice violates personality rights and requires consent – even if the output is labeled as AI-generated.

  3. Training on Protected Works (Hamburg Court of Appeal): AI providers may use works for training unless the copyright holder has declared a machine-readable usage reservation.

Users bear liability for what they publish – ignorance offers no protection. Even negligent violations (such as publishing unchecked song lyrics) result in full liability.
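The "machine-readable usage reservation" referenced in the Hamburg ruling (§ 44b Para. 3 UrhG) has no officially mandated format. In practice, rights holders commonly declare it via a website's robots.txt file addressed to known AI training crawlers – a sketch, assuming GPTBot (OpenAI's documented crawler) and CCBot (Common Crawl's crawler) as the relevant user agents:

```
# robots.txt – a common (though not legally standardized) way to declare
# a usage reservation against AI training crawlers (§ 44b Para. 3 UrhG).
# GPTBot is OpenAI's documented crawler; CCBot is Common Crawl's.

User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

Whether a robots.txt entry alone satisfies the "machine-readable" requirement is itself legally unsettled – the lack of a standardized format is precisely the gap raised under Critical Questions.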

Key Statements

  • AI output alone has no copyright protection; human post-processing is required
  • AI providers can be liable for infringing output, but users remain liable for everything they publish
  • Trademark law offers a practical alternative to copyright protection
  • Users must verify before publication whether content reproduces copyrighted works
  • A usage reservation ("do not use for AI training") protects against uncontrolled data use

Critical Questions

(a) Evidence & Data Quality:

  1. How can users practically verify whether AI-generated lyrics or images reproduce protected works without forensic expertise?

  2. What specific transparency obligations apply to AI providers to disclose training materials – and are these sufficiently granular?

(b) Conflicts of Interest & Incentives:

  1. Do major AI providers currently benefit from legal gray areas when training on protected works, as long as no lawsuits succeed?

  2. How can copyright holders effectively implement machine-readable usage reservations when there are no standardized formats?

(c) Causality & Alternatives:

  1. Is the claim, sometimes advanced, that a user's mere "ignorance" reduces liability risk tenable – or is liability strict, regardless of fault?

  2. Could insurance against copyright infringement in AI output become a practical risk instrument?

(d) Feasibility & Risks:

  1. What specific points should be included in an internal company AI policy to demonstrably minimize liability risks?

  2. How realistic is it that AI providers will improve their output filters quickly enough to avoid copyright violations in practice?


Sources

Primary Source:

Podcast Episode "AI Copyright: Who Owns the Output?" – https://www.buzzsprout.com/2417957/episodes/18666664-ki-urheberrecht-wem-gehort-der-output.mp3

Relevant Court Rulings (mentioned):

  • Munich Regional Court: GEMA vs. OpenAI (Copyright infringement through song lyrics output)
  • Berlin Regional Court: Voice deepfake ruling (Personality rights)
  • Hamburg Court of Appeal: AI training on protected works
  • Kiel Regional Court: User liability despite AI delegation (decision mentioned)

Regulations:

  • German Copyright Act (UrhG), in particular § 2
  • EU AI Act, Article 50 (Labeling requirement for deepfakes/chatbots)
  • § 44b Para. 3 UrhG (text and data mining exception; inapplicable where a machine-readable usage reservation is declared)

Verification Status: ✓ 12.02.2026


This text was created with the support of an AI model. Editorial responsibility: clarus.news | Fact-check: 12.02.2026