
Report Warns AI Chatbots Are Amplifying Foreign Propaganda Through Citations

Tommy Flynn

WASHINGTON – A new study warns that leading AI chatbots are increasingly citing and amplifying foreign propaganda in their responses, particularly material from adversarial states such as China, Russia, and Iran.

The report, released by the Center for a New American Security (CNAS) on April 3, 2026, examined how major AI models handle queries on sensitive geopolitical topics. Researchers found that when chatbots retrieve information from the open web or their training data, they frequently surface and repeat narratives pushed by state-affiliated propaganda outlets without sufficient disclaimers or counterbalancing context.

The study tested models including OpenAI’s GPT series, Anthropic’s Claude, Google’s Gemini, and xAI’s Grok on topics such as the Russia-Ukraine war, the Israel-Hamas conflict, Taiwan, and U.S.-China relations. In multiple instances, the chatbots cited or paraphrased content from known propaganda sources (such as RT, Xinhua, Press TV, or Global Times) as if it were neutral information.

Key findings include:

  • AI systems often treat state propaganda outlets as legitimate news sources when they appear in search results or training data.
  • Models frequently fail to flag biased or state-sponsored content, leading to the unintentional spread of disinformation.
  • The problem is exacerbated by the models’ tendency to prioritize “authoritative-sounding” citations over source credibility.

The report notes that this vulnerability is not limited to any single company. All major AI developers have acknowledged the challenge of combating propaganda in their systems, but researchers say current safeguards remain inadequate.

The findings come as AI chatbots are increasingly used as primary information sources by millions of Americans. Critics argue that the unchecked amplification of foreign propaganda through AI could distort public understanding of major international events and undermine trust in democratic institutions.

The Trump administration has previously highlighted concerns about AI being used to spread disinformation, particularly from China. Officials have called for stricter oversight of AI training data and greater transparency from tech companies regarding how they handle foreign influence.

The CNAS study recommends that AI developers implement stronger source-verification protocols, clearly label state-affiliated media, and provide users with warnings when responses rely on potentially biased sources. It also urges Congress to consider legislation requiring greater accountability for AI systems that amplify propaganda.

The report is the latest warning about the national security risks posed by large language models in an era of sophisticated information warfare. As AI becomes more integrated into daily life, researchers say, the ability to resist foreign influence operations will be critical to maintaining information integrity.