Maxim Alyukov1; Mykola Makhortykh2; Alexandr Voronovici1; Maryna Sydorova2; 1 University of Manchester, UK; 2 University of Bern, Switzerland
Discussion
Recent debates have raised concerns that the Kremlin may be ‘grooming’ large language models (LLMs) by flooding the web with disinformation so that chatbots echo Kremlin-linked narratives, particularly via the network of disinformation websites known as Pravda. To test this assertion, we audited four popular LLM-powered chatbots: ChatGPT-4o, Gemini 2.5 Flash, Copilot, and Grok-2. We found little evidence to support the grooming theory. Only 5% of chatbot responses repeated disinformation, and just 8% referenced Kremlin-linked disinformation websites; in most of these cases, the chatbots flagged the sources as unverified or disputed. Our analysis suggests that these outcomes are not the result of successful LLM grooming but rather a symptom of data voids: topics on which reputable information is scarce and low-quality sources dominate search results. These findings have important implications for understanding the vulnerability of artificial intelligence (AI) to disinformation. They indicate that the primary risk lies less in foreign manipulation than in the uneven quality of information online. Addressing it requires strengthening the availability of trustworthy content on underreported issues rather than overstating the threat of AI manipulation by hostile actors.