The Kremlin’s Bots, Trolls, and Influencers

Russia’s information war against the West is a comprehensive, coordinated effort to manipulate the information ecosystems of entire societies.


Disinformation is a term often deployed by politicians, journalists, and partisans as a catch-all to discredit opposing views and justify the censorship of inconvenient opinions. But disinformation is, nevertheless, a real problem, and tackling it is a challenge we must not shy away from.

In its most nefarious form, disinformation refers to lies spread by the agents of enemy states, whose intelligence services hope to gain some strategic advantage by deliberately manipulating public opinion. Disinformation is the weapon of choice in the information war currently being waged against the West. At least 57 states have employed social media bots to amplify their messaging, including Iran and China. But the most active belligerent in this war is Vladimir Putin’s Russia.

The Russian passion for information warfare is in part the legacy of a Soviet strategy of disinformation that has been reinvigorated by the advent of social media. In the 1980s, for example, the KGB launched Operation Infektion: an effort to popularise the conspiracy theory that the AIDS virus had been created by the US government as a biological weapon to be deployed against the black community. Astonishing numbers of black Americans believed this lie until at least the early 2000s.

At home, Russian intelligence agents employ online personalities to spread their propaganda. A Vice News investigation, for instance, found that Russian TikTok influencers were paid to post videos pushing pro-Kremlin narratives about the war in Ukraine. An anonymous Telegram user told these social media personalities—some of whom had over a million followers—exactly what to say, which hashtags to use, and when to post.

When operating in other countries, whose media personalities are generally less willing to recite monologues scripted for them by Russian intelligence, the state has to resort to subtler tactics. These include offering financial support to commentators whose genuinely held beliefs happen to dovetail with the Kremlin’s own narratives. A particularly high-profile example of this made headlines back in September, when conservative commentators Tim Pool, Dave Rubin, and Lauren Southern found themselves at the centre of a foreign influence scandal, covered in more detail elsewhere in this magazine.      

Kremlin Cash
The Tenet Media scandal and the convergence of right-wing American punditry and Russian propaganda.

The company they produced content for—Tenet Media—was listed as the recipient of over $10 million paid by Russian state media. That doesn’t mean that Tenet Media was handing Pool and the others scripts dictated by the Kremlin. On the contrary, the commentators in question have emphasised that they always maintained editorial control over their output. But these influencers have been largely critical of Western support for Ukraine—a viewpoint that the Kremlin would like to signal-boost in the American public sphere.

In addition, Pool et al. are active combatants in the online culture wars that have driven polarisation to record levels—and exploiting that polarisation is the second pillar of Russia’s information warfare strategy. The financing of provocative commentators like these—some of whom were paid up to $400,000 per month—enables Russian agents to launder pro-Kremlin narratives through Western speakers, and emboldens media personalities whose earnings depend on sensationalist rage bait. This is not a new strategy: there have always been journalists on the books of Russian intelligence. But it is only one aspect of a coordinated effort to flood the information ecosystem with views sympathetic to the Kremlin.

Another pillar of this strategy involves botnets: swarms of fake, automated accounts created and controlled by Russian agents. These botnets hijack trending algorithms on social media platforms and thereby expose Western audiences to particular messages. They often share precisely worded posts to ensure that certain phrases—such as #WelcomeRefugees or #EndNetZero—trend in people’s news feeds. And they incessantly ‘like’ and repost content from pro-Kremlin influencers and trolls.

The aim here is twofold. First, it is an attempt to expand the reach of what would otherwise remain fringe opinions: the botnets expose ever more people to Russian propaganda and help create an illusion of popularity that serves to normalise pro-Kremlin talking points. But this is not merely intended to convert people into passionate Putinistas. The second goal is to promote the most inflammatory and divisive voices on either side of a given political issue. This helps spread the distrust and disillusionment that are so corrosive to liberal democratic society—and which have reached record levels on social media.

From the perspective of Russia’s Federal Security Service (FSB), the successor to the KGB, this outrage benefits Russia in a classic zero-sum sense—by weakening its opponents, distracting them with domestic infighting, and thereby undermining their collective ability to pursue an effective and coherent foreign policy. This is why Russian bots have promoted content from all sides of the political spectrum. Some bots describe Western society as being in the grip of a totalitarian form of feminism, for example, while others accuse Western feminists of betraying their commitment to intersectionality. The Kremlin bots weigh in on vaccine debates, taking only the most hardline positions on either side. They also swelled the follower counts of many pro-Palestinian social media influencers in the wake of the 7 October massacre.

The next layer of Russia’s disinformation campaign consists of troll farms. Some of these have been run out of office buildings in Ghana and Nigeria, where Russian agents hire local people to create fake social media accounts through which they masquerade as Western citizens. These anonymous commenters spend their working days sharing pro-Russian propaganda and engaging in online debates—often in the most argumentative, irrational, and inflammatory ways possible.

In the run-up to the 2016 US election, for example, Russian intelligence agents set up Facebook groups catering to all sorts of partisan demographics, including right-wing Christian groups, black nationalist groups, and Muslim groups. In each of these groups, Russian trolls would post inflammatory content designed to drive a wedge between that community and society at large. Christian groups were flooded with content describing Hillary Clinton as satanic and alleging a fictional alliance between Barack Obama and the Muslim Brotherhood. In pro-black groups, on the other hand, they shared fake stories of KKK attacks and videos of white and black people fighting each other in the streets. One of these Russian-operated groups, Blacktivist, had more followers than the official Black Lives Matter page. The idea here was not to convince anyone or to engage in a spirited exchange of ideas. It was to inflame online debates, with the ultimate goal of promoting the kind of hopelessness about liberal society that might leave people vulnerable to autocratic ideology.

Influencers, trolls, and bots are all sharers of disinformation. They take Kremlin lies and pro-Putin talking points and amplify them. But Russia’s information war also involves the production of disinformation: fabricated evidence that serves as “proof” of the Kremlin’s claims. Sometimes, this involves actual intelligence agents operating, often on foreign soil, to generate “alternative facts”—to borrow a particularly Orwellian phrase from Trump’s former campaign manager, Kellyanne Conway.

One illustrative example dates back to 2017, when Donald Trump and Tucker Carlson—two reliable peddlers of pro-Kremlin talking points—claimed that Muslim refugees were responsible for a tidal wave of crime in the Swedish capital. The Swedish authorities denied the claim, but a firestorm of media controversy ensued. Whether or not Trump and Carlson’s allegations were true is immaterial here. What does matter is that Russia’s information warriors decided the story needed a little boost. A few days later, a Russian film crew appeared in Rinkeby, a suburb of Stockholm with a large Muslim immigrant population, and was spotted offering cash to passers-by willing to stage a riot for its cameras.

These kinds of operations, almost certainly sponsored by the Russian state, are becoming a regular occurrence. Prosecutors in France recently accused Russia’s FSB of orchestrating a graffiti campaign in which rows of buildings were marked with Stars of David—the aim being to heighten existing (and often justified) fears about a resurgence of antisemitism, fears that have been fuelled by the Western response to 7 October 2023. This example provides a particularly clear illustration of how Russian disinformation works. The Moldovan couple hired to paint the graffiti were accompanied by a photographer, whose job was to photograph it for social media. These pictures were then picked up by the so-called Doppelgänger Network—a collection of Russian-operated websites and social media pages designed to imitate existing newspapers and media outlets, such as the Guardian. Their fake articles, featuring these carefully staged pictures, were then plastered across social media by an army of automated accounts that exist only to ‘like’ and share one another’s doctored posts—thus hijacking the trending algorithms social media platforms use to feed popular content to users. The result was a firestorm of fear and outrage over something that never happened. The panic was such that the mayor’s office of the 14th arrondissement compared the situation in the city to that of the 1930s, “which led to the extermination of millions of Jews.”

Such operations, conducted by Russian state assets, deliberately engineer fake crises that exploit existing political tensions. The advent of generative AI—especially the text-to-video tools that are improving with each passing day—has made these operations cheaper and easier than ever before. But they can be done the old-fashioned way, too. A series of recently released videos sought to undermine the credibility of US election procedures. Purporting to be official FBI material, these videos appeared to show election officials destroying mail-in ballots for Trump. To those who know what US election materials actually look like, it was clear that the events had been stage-managed. But to the average social media user, that might not be so obvious. And though the Office of the Director of National Intelligence, the FBI, and the Cybersecurity and Infrastructure Security Agency (as well as the BBC in the UK) all quickly warned viewers that the videos were of Russian origin, that didn’t stop them from reaching millions of eyeballs—particularly on X, where Elon Musk has tweaked the algorithm in favour of the MAGA Right.

Though they often seek simply to confuse and inflame, Russian agents do occasionally advocate specific policies. One Russian military textbook, for example, recommends using information warfare to undermine NATO’s commitment to Article 5. More surprisingly, perhaps, Scottish independence was also a Russian aim: when the 2014 referendum failed, an army of Russian bots and trolls complained that the vote had been rigged. Two years later, the Kremlin once again promoted the breakup of a political union, when tens of thousands of Russian and Iranian bots shared pro-Leave messages in the run-up to the Brexit referendum.

It is important not to overstate the influence of these Russian efforts. It is all too convenient to dismiss the success of one’s political opponents as the product of “foreign influence.” Even now, some liberals are attributing Trump’s resounding election victory to foreign disinformation. This is a step too far. The real danger isn’t that Russian agents might swing an election: there is simply too much online content out there, from too many sources, for Russia to exert an overwhelming influence on the overall media environment. What they can do, however, is increase mutual hostility between the most politically polarised groups by feeding them sensationalised falsehoods that reinforce their already extreme beliefs. Aided by a revenue model that allows advertisers to target their posts at specific groups, Russia can flood the most partisan spaces with doctored content—stoking the distrust and division that threaten to destroy liberal society.

Often, this involves exploiting pre-existing weak points in our social fabric. The AIDS conspiracy, for example, built on the black community’s general distrust of the US government—a distrust founded on genuine historical events such as the notorious Tuskegee experiments, in which the US government secretly withheld treatment from black Americans with syphilis so that researchers could observe the progression of the disease. In a similar fashion, the antisemitic graffiti that popped up in Paris exacerbated a problem that already existed, intensifying tensions between Jews and Arabs, and between supporters of Israel and those of Palestine. Disinformation campaigns like these can remain confined to conspiracy-addled Telegram chats and 8chan forums for years—but in moments of political crisis, they can pose a genuine threat to the social order.

Russia’s enthusiasm for information warfare is partly inspired by the collapse of the USSR, which Russian historians routinely blame on an “imperialist information war” that Russia lost. This perspective was strengthened by the events of the Arab Spring, in which protestors coordinated over social media to create a movement that terrified the world’s autocrats. Although it ultimately proved unsuccessful, the uprising offered a taste of the way in which America and its liberal allies might use the internet to undermine the very basis of authoritarianism: the control of information. This is why Russian assets, having witnessed the events of 6 January 2021, spent so much time promoting conspiracy theories about electoral fraud in the run-up to the 2024 US election. In a way, it is lucky that Trump won, given how pervasive electoral denialism has become among Republicans. Had he lost by a tight margin, and had this sleeper army of Russian trolls, bots, and influencers kicked into action, producing and disseminating fake videos of election rigging, who knows what damage they could have done?

Of course, all these problems—partisan polarisation, conspiracy theories, and political spin—existed long before Russia got involved and would still exist if Russian disinformation vanished altogether. But in a world where people get much of their news from social media, it is easier than ever before for foreign actors to exploit political and social tensions for their own purposes. Never has it been easier for foreign intelligence agents to masquerade as British, French, or Australian citizens and to speak directly to those audiences. In their endless scramble to farm our attention, social media companies are exposing the citizens of liberal democracies to foreign adversaries looking to exploit the openness of our societies.

And it is liberal societies that are most threatened by disinformation. This is not just because the events of the Arab Spring encouraged autocratic governments across the world to tighten their grip over the internet. It is also because our support for freedom of expression—a bedrock value of liberal society—leaves us vulnerable to exploitation. Specialised, Kremlin-linked outfits like the Internet Research Agency are training the information warriors of the future. And Russian politicians have spoken openly of their plans to use “non-military capabilities to incite chaos and instability” around the world.

In The Art of War, Sun Tzu observes that the most skilled warriors learn to subdue the enemy without fighting. This is the ultimate aim of disinformation. As one former Russian military intelligence officer has put it: “a new type of war has emerged, in which armed warfare has given up its decisive place in the achievement of the military and political objectives of war to another kind of warfare—information warfare.”

So long as we in the West weaponise the word “disinformation” as a slur against our political opponents, we will remain exposed to this war, which is being waged on an unprecedented scale and poses a threat to our hard-won liberties.
