Summary

The Cyber-Defence Campus of armasuisse and the Northwestern Switzerland University of Applied Sciences (FHNW) have jointly researched how artificially generated images can be distinguished from authentic ones. The studies show that despite rapid technological progress, generative AI models leave characteristic errors, so-called artifacts, in anatomy, lighting, and symmetry. These subtle clues enable specialists and trained individuals to recognize manipulations. The research underscores the need for systematic training and awareness-raising to counter the growing threat of disinformation through forged images.

People

  • Andrea Thäler (Cyberdefence Department, armasuisse W+T)
  • Raphael Meier

Topics

  • Artificial Intelligence and Image Generation
  • Deepfakes and Synthetic Media
  • Cybersecurity
  • Disinformation and Manipulation

Clarus Lead

As artificial intelligence produces ever more convincing photorealistic images, cyber operations increasingly rely on synthetic content to influence public opinion, markets, and political processes. A joint study by the Cyber-Defence Campus and FHNW, however, identifies concrete weaknesses: AI models systematically generate errors in human anatomy, lighting, and object symmetry. These findings give practicing journalists, forensic experts, and security professionals a scientifically sound basis for detecting and countering image manipulation.


Clarus Original Research

  • Clarus Research: The study documents three main error categories in AI-generated content (anatomy, lighting, symmetry) and develops a comprehensive taxonomy of these artifacts, a systematic classification that goes beyond generic warnings and provides practical detection criteria.

  • Classification: The research approach addresses a critical security risk: false positives and false negatives in image analysis can lead to miscarriages of justice, journalistic missteps, and loss of trust. Cognitive biases (particularly confirmation bias) significantly exacerbate the problem.

  • Consequence: For decision-makers in security, media, and regulation, a concrete field of action emerges: systematic training of specialists, integration of modern detection tools, and development of new guidelines are necessary to curb synthetic disinformation.


Detailed Summary

The Technological Background

Generative deep learning models, particularly diffusion models, have revolutionized digital image generation. They enable photorealistic synthesis that is used extensively in entertainment, advertising, and design. However, the research shows that producing completely error-free photorealistic results remains technically very difficult. Limited computational power and the need for manual post-processing leave subtle flaws that the naked eye easily overlooks but that experts can identify.
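
For readers unfamiliar with the mechanics, the sketch below shows a single reverse (denoising) step in the common DDPM formulation of diffusion models. The model `eps_model`, the schedules `alphas`/`alpha_bars`, and the simple variance choice are illustrative assumptions, not material from the study; the point is that the network's noise prediction is only an approximation, and small errors accumulated over many steps can surface as exactly the kinds of artifacts described here.

```python
import numpy as np

def ddpm_denoise_step(x_t, t, eps_model, alphas, alpha_bars, rng):
    """One reverse-diffusion step: predict the noise in x_t and remove
    a fraction of it. Image synthesis repeats this tens to thousands
    of times; accumulated prediction errors can become visible artifacts."""
    a_t, ab_t = alphas[t], alpha_bars[t]
    eps = eps_model(x_t, t)  # the network's noise estimate (an approximation)
    mean = (x_t - (1.0 - a_t) / np.sqrt(1.0 - ab_t) * eps) / np.sqrt(a_t)
    if t == 0:
        return mean  # final step: return the denoised mean directly
    sigma = np.sqrt(1.0 - a_t)  # one standard variance choice (sigma_t^2 = beta_t)
    return mean + sigma * rng.standard_normal(x_t.shape)
```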

Systematic Errors as Detection Features

The FHNW study systematizes typical artifacts into three categories:

1. Anatomical Errors: AI generators struggle with the consistency of human anatomy. Common problems include hands with too many or too few fingers, unnatural finger positioning, asymmetrical facial features, and deformed ears, eyes, and teeth. These errors arise because generative models cannot completely and accurately reproduce complex visual structures.

2. Lighting and Reflection: Shadows and light sources behave unrealistically in AI-generated images. Highlights are misplaced, and cast shadows do not correspond to natural lighting conditions. A trained eye recognizes these subtle inconsistencies through comparison with authentic photographs.

3. Symmetry and Object Consistency: Symmetrical objects (such as vehicle mirrors) can be rendered asymmetrically, and repetitive structures such as railings or fences show irregular spacing, a telltale sign of AI generation (see the checklist sketch below).
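
As a purely illustrative aid, not part of the study's materials, the taxonomy can be encoded as a simple review checklist. The category names follow the study; the individual check items and the function name are hypothetical:

```python
# Hypothetical encoding of the study's three artifact categories as a
# yes/no checklist an analyst could walk through image by image.
ARTIFACT_TAXONOMY = {
    "anatomy": [
        "each visible hand has five naturally positioned fingers",
        "facial features are plausibly symmetric",
        "ears, eyes, and teeth are well formed",
    ],
    "lighting": [
        "highlights sit where the light source implies",
        "cast shadows match a consistent light direction",
    ],
    "symmetry": [
        "paired objects (e.g. vehicle mirrors) match",
        "repeating structures (railings, fences) are evenly spaced",
    ],
}

def pass_rates(answers: dict[str, list[bool]]) -> dict[str, float]:
    """Aggregate per-category pass rates from an analyst's answers.
    A low rate in any category flags the image for deeper forensics."""
    return {cat: sum(a) / len(a) for cat, a in answers.items() if a}
```

A real workflow would pair such a checklist with tool-assisted analysis; the value of explicit categories is simply that they counteract the ad-hoc, bias-prone judgments discussed below.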

The Risk of Misclassification

A central finding of the study is the susceptibility to false positives and false negatives. False positives occur when authentic images are incorrectly classified as artificial; false negatives occur when synthetic images are accepted as genuine. Such errors lead to misinformation and eroded trust, and in critical areas such as the judiciary, journalism, and research, to serious consequences.
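
To make the two error types concrete, here is a small worked example; the counts are invented purely for illustration:

```python
# Hypothetical evaluation of a detector on 100 authentic and 100 synthetic
# images; all counts are invented for illustration only.
n_authentic, n_synthetic = 100, 100
false_positives = 8    # authentic images wrongly flagged as AI-generated
false_negatives = 15   # synthetic images wrongly accepted as genuine

fpr = false_positives / n_authentic   # false positive rate: 0.08
fnr = false_negatives / n_synthetic   # false negative rate: 0.15
print(f"FPR: {fpr:.0%}, FNR: {fnr:.0%}")  # -> FPR: 8%, FNR: 15%
```

Which rate matters more depends on the setting: in court, a false positive can discredit genuine evidence, while in news verification a false negative lets a forgery through.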

Additionally, the research documents cognitive biases: confirmation bias leads analysts either to identify artifacts too quickly (where none exist) or to overlook legitimate signs of manipulation. A systematic, unbiased approach is therefore required.

Practical Application Scenarios

The study's findings address concrete threats: AI-generated images are already being used to manipulate public opinion, stock markets, and political processes. Forged images spread confusion, mistrust, and disinformation. Countermeasures include:

  • Specialized training for journalists, news analysts, and digital forensic experts
  • Systematic image analysis with modern detection tools
  • Campaigns to improve media literacy among the population
  • New regulatory guidelines against misuse of synthetic content
  • Research funding for automatic detection technologies

Key Statements

  • AI images leave systematic errors: Anatomical distortions, faulty lighting, and symmetry problems are characteristic artifacts that distinguish synthetic images from authentic photographs.

  • Subtlety is the main challenge: These errors often escape the naked eye; systematic training is needed to detect them.

  • Training and awareness are critical: Specialists must receive specialized training; the general public must be made aware of synthetic image manipulation.

  • Cognitive biases endanger analysis: Confirmation bias and other prejudices can lead to false positives and false negatives, unconscious analytical errors with major consequences.

  • New regulation and technology are required: Automatic detection tools, guidelines, and preventive measures must be developed in parallel.


Stakeholders & Affected Parties

Stakeholder | Role | Impact
Journalists & News Media | Image verification, fact-checking | Risk of credibility loss through misinformation
Security Authorities & Investigators | Digital forensics, threat assessment | Potential miscarriages of justice based on image evidence
Technology Providers | AI tool development | Pressure to integrate abuse prevention
General Public | Media consumption | Growing exposure to disinformation
Regulators & Lawmakers | Political response | Need for new laws protecting against synthetic manipulation

Opportunities & Risks

Opportunities | Risks
Establish scientifically founded detection guidelines for practitioners | Rapid technological advancement could render detection methods obsolete
Proactive training of specialists reduces error rates | Availability of AI tools to laypeople enables mass disinformation
Promote societal awareness and media literacy | Cognitive biases complicate systematic analysis even with training
Develop automatic detection technologies | Arms race: improved AI generation could defeat detection tools
Establish new regulatory standards preventively | Jurisdiction issues: the global spread of synthetic media is difficult to control

Action Relevance

For Security Officials:

  1. Immediate Action: Train image analysts in systematic detection methods (focus: artifact taxonomy and cognitive bias recognition).
  2. Medium-term: Integrate automatic detection tools into workflows and regularly test them against improved AI models.
  3. Indicators: Reduce error rates (false positives/negatives); measure training effectiveness.

For Media Organizations:

  1. Revise image verification protocols based on study findings.
  2. Launch media literacy campaigns in the newsroom and for readership.
  3. Monitoring Metrics: Time spent on image verification; accuracy of fact-checks.

For Regulators:

  1. Develop best-practice guidelines for synthetic content labeling.
  2. Expand research funding for automatic detection technologies.
  3. Success Indicator: Decline in verifiable disinformation campaigns using synthetic images.

Quality Assurance & Fact-Checking

  • [x] Central statements and figures verified (three artifact categories, misclassifications, training recommendations confirmed)
  • [x] Unverified data marked (no quantitative metrics available, therefore argued qualitatively)
  • [x] Additional research conducted (Cyber-Defence Campus validated as established institution)
  • [x] Cognitive biases addressed (confirmation bias explicitly treated as a central finding)

Supplementary Research

⚠️ Additional external sources for validation and comparison are missing:

  • International research reports on AI image detection (e.g., from universities, NIST, European institutes)
  • Statistics on real disinformation campaigns using synthetic images
  • Comparative analysis of competing detection frameworks

Recommendation: Future cross-checking against international research databases and integration of case studies of real-world misuse.


Source Directory

Primary Source:
Real or Fake? What Research Teaches Us – armasuisse Science and Technology
Published: January 29, 2026

Supplementary Sources (from primary source):

  1. Synthetic Image Generation in Cyber Influence Operations: An Emergent Threat?
  2. Synthetic Photography Detection: A Visual Guidance for Identifying Synthetic Images Created by AI
  3. Cyber-Defence Campus: Strengthening Cyber Defense in Switzerland
  4. Inside S+T – Issue on Artificial Intelligence

Verification Status: ✓ Facts checked on January 29, 2026


Footer (Transparency Notice)


This text was created with assistance from Claude.
Editorial Responsibility: clarus.news | Fact-Check: January 29, 2026
Archived under: Cybersecurity, Disinformation, AI Regulation