
In early March, Meta, the parent company of Facebook, Instagram, and WhatsApp, announced that it would suspend its hate speech policy with respect to Russia, Russian soldiers, and Russian President Vladimir Putin himself, meaning it would allow certain forms of hate speech directed at Russians. This new policy has been implemented across Eastern Europe and has unfortunately played into the Russian narrative that Russians are the ones being victimized. It contributed, for instance, to a recent disinformation campaign on Twitter, launched through Russian embassies’ social media accounts, that included the hashtag #StopHatingRussians. This particular disinformation campaign stems from isolated incidents of violence directed at Russians and Russian speakers, but it is framed in such a way as to portray all Russians, and the Russian state, as victims of Western aggression.

This revised hate speech policy sets a dangerous precedent for the way that platforms interact with war. While the policy update reflects the frustration felt by many across Eastern Europe, there is a real danger in allowing commentators and social media users to use it as an excuse to target Russian speakers and Russian citizens living abroad as proxies for the Russian state. The policy update thus serves as fodder for the Russian disinformation apparatus.

The Nature of Russian Disinformation

To fully understand the impact that disinformation can have, it is important to distinguish it from misinformation. Disinformation is content known to be false or misleading at the time of its creation, and whose purpose is to mislead the people who consume it. This contrasts with misinformation, which is false information spread in good faith as if it were true. The best disinformation is founded on an element of truth, which makes its capacity to affect public opinion and policy all the more powerful. Russian disinformation relies on a strategy developed during the Soviet era that seeks to exploit existing social realities, with mixed results both within Russia’s own territory and internationally. Russian disinformation campaigns work by testing narratives and seeing what sticks, exploiting internal divisions and relying on current events to shape their messaging.

Regardless of how believable a piece of disinformation may be or whether Russia is revealed to be its source, the targeted nature of such information warfare campaigns can serve as a cover for Russian actions. With this understanding of disinformation, it is possible to see how the Russian information warfare apparatus positions itself as a buffer between reality and the narratives used to muddy the waters. The #StopHatingRussians campaign described above is a perfect illustration of how Russian disinformation works. By wrapping a narrative of victimization around a kernel of truth, Russia can attempt to obfuscate its aggressive violations of Ukrainian sovereignty.

Russia’s waging of information war across the world in pursuit of its illiberal aims has a long history and works toward reestablishing Russia’s position as a great power. First, it seeks to exacerbate domestic divisions in target countries and weaken their capacity to oppose Russian strategic aims. Second, it seeks to bolster Russia’s soft power and help win the hearts and minds of people in countries where Russia has an interest in foiling Western involvement. By using disinformation, Russia can demonstrate the vulnerabilities of democratic systems to third-party observers and, in so doing, discourage countries from aligning themselves with the United States and the European Union. Conspiracy theorists and right-wing activists can act as the Kremlin’s contemporary useful idiots, with recent accusations of nefarious biolabs being parroted by mainstream right-wing commentators in the United States, such as Tucker Carlson, as well as by followers of the conspiratorial movement QAnon.

Russian disinformation is widespread and dangerous, but it relies on the biases of its consumers to be effective. It is made more effective still when the choices and policies of Western technology companies are turned against the interests of liberal democracies. By using Big Tech’s own interest in profit against it, Russian disinformationists confound the discourses found in cyberspace. Central to the problem of Big Tech’s conflicted response to the invasion of Ukraine is the business model that has made it, as an industry, fantastically wealthy.

In the wake of the Russian invasion of Ukraine, Russia’s information war, operating alongside its kinetic war, has left Russia in a position of weakness as it struggles to convince the world that its invasion is being carried out to protect Ukrainians from their own government. Nevertheless, there have been several instances in which Western journalists and commentators have played into Russian talking points, such as when Candace Owens called Ukraine a Russian invention dating only to circa 1989. The American right wing has cozied up to Putin based on his support for traditional values, which some on the right see as standing in opposition to the progressive culture of the contemporary American left. While Russia often makes outlandish claims to divert attention away from reality and create an information atmosphere that is intentionally confusing, the result is a media ecosystem and a social media environment that cannot agree on facts. The latest and most appalling example is the debunked Russian claim that the Bucha massacre was an orchestrated attempt by Ukraine to garner international sympathy.

The Role of Big Tech

Meta’s suspension of its hate speech policies is by no means the only instance of American tech firms wading into the online portion of Russia’s invasion of Ukraine. These firms have come out against the war, but in the case of Google there is a gross display of hypocrisy. Google’s stated position reads, in part, “Our teams are working around the clock to support people in Ukraine through our products, defend against cybersecurity threats, surface high-quality, reliable information and ensure the safety and security of our colleagues and their families in the region.” Yet the reality created by Google’s translation policy only furthers the efforts of Russian disinformationists in their attempt to misrepresent the invasion as anything other than a war. Russian censors have demanded, on penalty of up to 15 years in prison, that journalists call the invasion a “special military operation” rather than a war, and Google’s translation policy follows the requirements set forth by Putin’s regime. Though the vast majority of states are against the war, the translation policy makes it more difficult for Russian citizens to understand the full extent of the atrocities being committed in their name.

Tech companies are therefore playing both sides of the conflict. By enacting policies that sometimes aid the Ukrainian cause and at other times the Russian one, as exemplified by Google’s translation policy, they pay lip service to the cause of humanitarianism and liberal values while staying present in as many markets as possible. The Google example is instructive because it shows how, when presented with the very real prospect of being forced out of Russia, tech companies chose a policy of appeasement rather than maintaining basic standards of information hygiene.

Part of Big Tech’s business strategy in the context of the ongoing war does include meaningful attempts to support Ukraine, and these policies are laudable, but such efforts take place amid a confused overall response to the war. As the effects of the war become globalized in the form of grain shortages, the halting of the Nord Stream 2 gas pipeline between Russia and Germany, and the battery of sanctions placed on Russia, the potency of information warfare is ever increasing. International technology firms will need to ensure that their policies are in sync with their statements if they are to effectively counter Russian disinformation. Further, they need to be careful about the choices they make, especially when dealing with Russia’s significant capabilities in information warfare.

Conclusion

At best, the apparent willingness of tech platforms to turn a blind eye to Russian disinformation or quietly allow its proliferation makes them perpetuators of a techno-libertarian aloofness; at worst, it makes them greedy hypocrites, trying to balance their stated desire to support humanitarian efforts in Ukraine against remaining solvent in Russian markets. Big Tech has placed itself at the center of Russia’s information warfare machine and must now navigate a way out. Doing so will require intentionality, including placing people above profit.


Grant A. Silverman holds a master’s degree from George Washington University’s Elliott School of International Affairs where he studied conflict and conflict resolution, disinformation, and right-wing extremism. His research has paid special attention to the rise of illiberal and extremist movements in the USA and EU. He is especially interested in the roles that digital media and disinformation have played in the rise of violent political movements. Prior to his studies at the Elliott School, he earned his bachelor’s at the University of South Carolina in International Studies and German. 

Photo made using: “Aftermath of the shelling of a children’s hospital and maternity hospital in Mariupol, March 9, 2022” (“Наслідки обстрілу дитячої лікарні та пологового будинку в Маріуполі, 9 березня 2022 року”), by armyinform.com.ua, licensed under CC BY 4.0.
