
Large language models (LLMs) and generative artificial intelligence systems are rapidly becoming key intermediaries in the global circulation of news and political knowledge. Yet the datasets that feed these systems are neither neutral nor universal. Governments, regulatory bodies, and corporate ecosystems increasingly treat training data as a strategic resource, capable of encoding national priorities, cultural hierarchies, and geopolitical narratives into algorithmic infrastructures.
This article is based on the recent publication "Feeding the Algorithm: How Nations Shape AI Training Data to Project Power and Influence Global News Narratives," authored by Nikos Panagiotou and Ioannis Tzortzis and published in Volume 5, Issue 2 (2025) of the Journal of Global Strategic Studies.
The paper develops the concept of algorithmic diplomacy, arguing that states are actively curating, regulating, and disseminating datasets as a means of projecting soft power through AI systems. Drawing on theories of epistemic sovereignty, digital colonialism, and media framing, it identifies three core mechanisms shaping this process:
- Data curation and localization
- Model fine-tuning and regulatory alignment
- Narrative seeding through open-source ecosystems
Comparative illustrations from the European Union, the United States, China, Russia, and the Gulf states demonstrate how “feeding the algorithm” has become a new instrument of influence within the global information order. The analysis concludes that algorithmic infrastructures now function as a form of epistemic territory, a contested space where values, identities, and political legitimacy are negotiated through data rather than discourse.