Author: nzz.ch

Summary

An international team of authors warns in an article in the renowned academic journal Science that swarms of autonomous AI agents could influence democratic elections through coordinated disinformation campaigns. Nobel Peace Prize laureate Maria Ressa and Audrey Tang, former Digital Minister of Taiwan, see democracy in danger. A Swiss disinformation expert, however, puts the danger into perspective, arguing that voters are not so easily manipulated.

Topics

  • Artificial Intelligence and Disinformation
  • Democratic Elections and Influence
  • AI-driven Astroturfing Campaigns
  • Trust in Digital Information
  • Confirmation Bias and Polarization

Detailed Summary

The Warning from Prominent Experts

A team of 22 authors has published an article in the scientific journal Science warning of the dangers that autonomous AI agents pose to democratic systems. The authors describe a scenario in which thousands of AI-operated agents coordinate in "malicious swarms" to deliberately distort public debate on digital platforms. Unlike earlier bot networks, these systems would act strategically and in a coordinated manner, learn over time, and operate across multiple platforms.

Among the most prominent co-authors are Maria Ressa, a Filipino journalist who received the Nobel Peace Prize for her commitment to freedom of speech and has herself been the target of digital hate campaigns for years, and Audrey Tang, the former Digital Minister of Taiwan and a thought leader in digital democracy. The renowned AI critic Gary Marcus and an emeritus professor at ETH Zurich are also involved. The lead authors are Jonas Kunst and Daniel Thilo Schroeder from Norway.

The Concept of Artificially Simulated Consensus

The authors warn of a specific manipulation mechanism: autonomous AI agents could operate fake social media profiles that simulate different political beliefs. Malicious actors could, for example, run two parallel networks of one hundred profiles each, one posing as Republican users and the other as Democratic users. While these profiles would appear to disagree on most controversial topics, they could suddenly converge on a compromise on strategically selected issues. For real users following these debates, the impression would arise that a broad societal consensus exists, a technique known as "astroturfing."

Astroturfing is a manipulation method in which artificially orchestrated networks are disguised as legitimate social movements. The term derives from AstroTurf, an American brand of artificial grass, and serves as a counterpoint to genuine grassroots movements. Until now, such bot networks were costly and laborious to operate, but autonomous AI agents could lower these costs dramatically: a single person could control thousands of agents. Language models easily adapt their choice of words to target audiences and imitate human behavior by limiting their posting volume and posting in strategic time windows. Through A/B testing, they could continuously increase their persuasive power.
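The core trick the authors describe can be sketched in a few lines. The following toy model is purely illustrative (all names and numbers are assumptions, not taken from the Science article): two bot networks take opposing stances on most topics, but both are scripted to support one strategically chosen issue, producing the appearance of cross-partisan consensus.

```python
# Toy model of the astroturfing scheme: two bot networks ("red" and
# "blue") clash on most topics but are both scripted to support one
# target issue. Topic names and network sizes are invented for the sketch.

TOPICS = ["tax_policy", "immigration", "energy", "target_issue"]
TARGET = "target_issue"
PROFILES_PER_NETWORK = 100

def stance(network: str, topic: str) -> int:
    """+1 = support, -1 = oppose. Networks disagree except on the target."""
    if topic == TARGET:
        return +1                     # scripted cross-network agreement
    return +1 if network == "red" else -1

def apparent_consensus(topic: str) -> float:
    """Share of all 200 profiles posting 'support' on a given topic."""
    posts = [stance(net, topic)
             for net in ("red", "blue")
             for _ in range(PROFILES_PER_NETWORK)]
    return posts.count(+1) / len(posts)

for t in TOPICS:
    # Ordinary topics split 50/50; the target issue shows unanimous support.
    print(f"{t:>12}: {apparent_consensus(t):.0%} support")
```

To an observer who sees only the posts, the staged disagreement on the other topics makes the sudden unanimity on the target issue look like genuine societal consensus rather than coordination.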

A Counterpoint from Disinformation Research

Not all experts share this alarm. Sacha Altay, a researcher at the University of Zurich, puts the danger into perspective on the basis of empirical studies: the measured influence of deliberately disseminated false information on elections and referendums has been negligible. He doubts that AI agents can fundamentally upend public debate in democracies.

Although language models like ChatGPT can argue convincingly in extended conversations, Altay questions whether people on social networks would really spend hours debating politics with strangers. Most people use social media for entertainment, not for forming political opinions. And a user who was genuinely interested in politics would likely find reputable journalistic sources on the same platforms.

The Longer-Term Perspective

Despite Altay's more measured assessment, it is too early to dismiss AI agents as harmless to democracy. AI is improving continuously, and the line between authentic and computer-generated content is becoming increasingly blurred. This could lead to a fundamental loss of trust in information.

Research also points to a paradoxical effect: people primarily believe information that fits their existing worldview. AI agents might therefore not change viewpoints so much as cause people to entrench their political preferences, increasing polarization.

Key Findings

  • Autonomous AI agents could create artificially generated consensus and manipulate public debates through coordinated astroturfing campaigns
  • The costs of such manipulation campaigns are dropping dramatically, as one person can control thousands of AI agents
  • Prominent experts such as Nobel Peace Prize laureate Maria Ressa warn of threats to democracy
  • Disinformation research shows that the actual influence of disinformation on elections has so far been minimal
  • People preferentially believe information that matches their convictions, which could reinforce polarization
  • The long-term impacts of autonomous AI on democracies are not yet fully foreseeable

Metadata

Language: English
Publication Date: 23.01.2026
Source: https://www.nzz.ch/technologie/schwaerme-aus-ki-agenten-koennten-schon-bald-demokratische-wahlen-beeinflussen-warnen-prominente-experten-wirklich-ld.1921487
Author: Gioia da Silva
Reading Time (Original): 4 minutes
Text Length: approx. 4,800 characters