An anonymous conspiracy theory on 4chan eventually became a political talking point on the national stage. Is YouTube to blame?

An edited transcript of President Trump’s July 25th phone call with Ukrainian president Volodymyr Zelensky has spurred a tumultuous impeachment inquiry since its release in late September. Though demands for Trump’s impeachment had been building for months, the release of the transcript swayed enough moderate Democrats to open an official impeachment inquiry.

The contents of the call weren’t exactly revelatory, since they corroborated an already-public whistleblower report. But President Trump’s transcript included a mysterious name with which many readers are unfamiliar — unless they have spent some time perusing conspiracy-theory-saturated regions of the web. The transcript left many Americans puzzling over the question: what is “CrowdStrike”?

A Conspiracy Theory Goes Global

Part of the transcript reads as follows: 

“I would like you to do us a favor though because our country has been through a lot and Ukraine knows a lot about it….I would like you to find out what happened with this whole situation with Ukraine, they say CrowdStrike…I guess you have one of your wealthy people… The server, they say Ukraine has it.”

For those unfamiliar with the nuances of alt-right YouTube jargon, President Trump’s reference to CrowdStrike recalls a conspiracy theory that circulated online during James Comey’s Congressional testimony in 2017. 

This theory, which first surfaced in March of 2017 on 4chan, the notorious anything-goes message board that serves as a hub of internet subculture, is colloquially called "The Insurance Policy". The theory claims that Hillary Clinton's 2016 presidential campaign fabricated the narrative of Russian interference in a series of 2016 DNC email leaks as an "insurance policy" to remove Trump from office in case she were to lose the election. According to the theory, the forensics report implicating Russia was falsified by the cybersecurity firm CrowdStrike.

The theory has since spawned countless YouTube conspiracy theory videos, some of which have garnered as many as 132,000 views — and somehow the theory made its way into the President's ear. How did such a rumor rise from the obscurity of message boards into mainstream Republican discourse? The answer, though complex, may have something to do with YouTube itself, which has come under fire in recent years for propagating unchecked and dangerous conspiracy theories from across the political spectrum to unprecedented degrees.

A Heyday for Fake News

“Fake news” — an internet-age term for disinformation or propaganda that is purposefully deceptive but framed as real, verifiable news — permeates our online spaces. Fake news abounds on social media, on message boards, and in pay-per-click advertisements. In fact, as of March 2019, 52% of Americans reported that they had shared a news story online that they either knew was fake at the time or later discovered was fake. Discerning internet users must constantly ensure their news can survive a thorough fact-check. 

Most fake news is in some way damaging — whether endangering the health of its readers or spurring distrust in government. Some of the most detrimental stories of the last few years have directly impacted democratic processes and politics across the world. And while the blame for this propaganda pandemic cannot be placed on just one website, many have vocally blamed YouTube, America's most-used social media site.

According to a 2018 study by the Pew Research Center, 91% of young Americans are YouTube users. As anyone who has spent significant time on YouTube has probably noticed, conspiracy-theory-peddling videos are a major part of YouTube's repertoire. Conspiracy theory videos on YouTube can seem as plentiful as legitimate news sources — and, in some cases, they are. According to an Aachen University study, more than half of YouTube videos on climate change promote views that oppose the scientific consensus.

Why does YouTube have a propaganda problem? The simple answer: YouTube's algorithm favors content advocating extreme ideas. The system gives preference not to mere clicks, but to engagement and view time. If a user remains on a video page for somewhere between one and thirty seconds, it is counted as a "view". Since higher engagement — longer time spent viewing a video — corresponds to increased ad revenue, the more views and watch time a video accumulates, the more YouTube's algorithm promotes it.
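
YouTube's actual ranking system is proprietary, so the exact formula is unknowable from the outside. As a rough illustration of the dynamic described above, here is a minimal Python sketch of an engagement-weighted ranker; the fields, weights, and numbers are all invented for the example, not drawn from YouTube.

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    views: int                # clicks that crossed the "view" threshold
    avg_watch_seconds: float  # how long viewers typically stay

def engagement_score(video: Video, watch_weight: float = 0.7) -> float:
    """Toy ranking score that rewards watch time more than raw clicks."""
    click_signal = float(video.views)
    watch_signal = video.views * video.avg_watch_seconds
    return (1 - watch_weight) * click_signal + watch_weight * watch_signal

videos = [
    Video("measured explainer", views=10_000, avg_watch_seconds=45),
    Video("outrage-bait conspiracy", views=8_000, avg_watch_seconds=600),
]

# The video that keeps people watching wins, despite getting fewer clicks.
for v in sorted(videos, key=engagement_score, reverse=True):
    print(v.title, round(engagement_score(v)))
```

Under any weighting along these lines, a video that holds attention for ten minutes will outrank one that merely gets clicked — which is exactly the incentive that attention-grabbing extreme content exploits.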

For propagandists wishing to tap into America’s most popular social media site, YouTube’s algorithm can provide a fast track to success. With viewbots, creators can shell out as little as $15 and garner 5,000 “views” on their video. Since YouTube’s algorithm interprets even these fake “views” as real audience engagement, it may promote these videos in users’ “Up Next” sidebar. 

The "Up Next" sidebar — once a simple list of recommendations, since updated to play videos automatically in a further attempt to keep users engaged and drive up ad revenue — suggests videos based on a combination of the user's view history, each video's popularity, and its relevance to the current video.
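
Again, the real recommender is a black box; the sketch below simply makes those three named signals concrete. Every weight, field, and threshold here is hypothetical. Note that the popularity term counts whatever the platform counts — including bot-inflated "views".

```python
def up_next_score(candidate: dict, watched_topics: set,
                  w_history: float = 0.4,
                  w_popularity: float = 0.3,
                  w_relevance: float = 0.3) -> float:
    """Toy blend of the three signals: view history, popularity, relevance.

    candidate = {"topics": set of topic tags,
                 "views": int,  # bot-inflated views count just the same
                 "relevance": float in [0, 1], similarity to current video}
    """
    # History: what share of the candidate's topics has this user watched?
    overlap = len(candidate["topics"] & watched_topics)
    history = overlap / max(len(candidate["topics"]), 1)
    # Popularity: squash raw view counts into [0, 1].
    popularity = min(candidate["views"] / 1_000_000, 1.0)
    return (w_history * history
            + w_popularity * popularity
            + w_relevance * candidate["relevance"])

# A bot-boosted conspiracy video scores well for a user who watches politics.
print(up_next_score(
    {"topics": {"politics", "conspiracy"}, "views": 500_000, "relevance": 0.9},
    watched_topics={"politics", "news"},
))
```

In a blend like this, a viewbot campaign raises the popularity term directly, and a user's ordinary political viewing history supplies the rest of the score.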

If an alt-right YouTuber has a cursory understanding of how the system works (and how to outsmart it with viewbots that boost their own exposure), their Insurance Policy video might appear to be quite popular according to the algorithm. Just like that, a homemade conspiracy theory video might pop up in the “Up Next” sidebar of practically any user. 

What's more, since the algorithm relies in part on "relevance", each recommendation tends to resemble the video before it — but, because engagement is rewarded, the most attention-grabbing of those similar videos rises to the top of the list. Follow the rabbit hole of suggestions, and the content grows increasingly extreme.
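
That escalation dynamic can be modeled with a deliberately crude simulation. Assume — purely hypothetically — that each video has an "extremity" score between 0 and 1, that candidate recommendations cluster near the current video's extremity (relevance), and that the most extreme nearby candidate is also the most engaging, so it wins the slot:

```python
import random

def next_recommendation(extremity: float, n_candidates: int = 20) -> float:
    # "Relevance": candidates cluster around the current video's extremity.
    candidates = [min(max(random.gauss(extremity, 0.1), 0.0), 1.0)
                  for _ in range(n_candidates)]
    # "Engagement": assume the most extreme nearby candidate holds attention
    # best, so the ranker picks it.
    return max(candidates)

extremity = 0.1  # start at mild political commentary
for step in range(1, 11):
    extremity = next_recommendation(extremity)
    print(f"after video {step}: extremity = {extremity:.2f}")
```

Run it and the score climbs steadily toward 1.0 within a handful of steps: no single recommendation is a big jump, but the walk only moves one way.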

YouTube Takes a Stand Against Fake News

YouTube has recently publicized major initiatives to curb its fake news crisis — perhaps out of altruism, or perhaps out of fear of a fiasco à la Facebook's 2018 congressional hearing.

Regardless of YouTube's motives, a storm of complaints has pushed the company to invest $25 million to "better support trusted news providers," an initiative that includes an "information panel" offering mainstream news perspectives, as well as text articles to accompany breaking news. Earlier this year, the platform introduced a fact-checking feature that appears alongside viral conspiracy theory and fake news videos.

YouTube is supposedly still in the process of rolling out its anti-fake-news protections.

Unfortunately, conspiracy theory videos remain abundantly visible on YouTube. A few minutes of browsing reveals that, soon after a user views political commentary videos, conspiracy theory videos begin appearing prominently in the "Up Next" sidebar.

How Can We Prevent Another “Insurance Policy”?

Many social media sites, including giants like Facebook and Twitter, are susceptible to fake news of all varieties, making it unlikely that YouTube is to blame for all, or even most, of the propagation of theories like the Insurance Policy.

If fake news is everywhere we turn, what is the solution? Fact-checking sites like factcheck.org and politifact.com have attempted to remedy America’s fake news crisis by offering transparent, reliably-sourced information to verify claims made on both sides of the aisle. With a bit of promotion, fact-checking sites could secure a more prominent place in American politics. 

Political philosophers offer a less rosy perspective on the future of fake news. What happens if a political candidate claims the fact-checking sites cannot be trusted? Prominent academics in the field suggest that, even with a conspiracy-theory-free YouTube and publicly available fact-checking sources, propaganda might still find its way into American living rooms.

According to Hannah Arendt, a leading scholar on propaganda, the success of propaganda is a product of the sociopolitical moment — not merely of a faulty source of information. In her book The Origins of Totalitarianism, Arendt dissects the rise of propaganda in Nazi Germany and the Stalinist Soviet Union. She observes that, when Germans considered the world "ever-changing and incomprehensible…[the] audience was ready at all times to believe the worst, no matter how absurd." In a world where people doubt their ability to access the truth, Arendt explains, they are easily led astray.

The popularity of YouTube's conspiracy theory videos shows, perhaps cynically, that Arendt might be right — Americans feel so overwhelmed by untrustworthy information that we're willing to believe anything that seems suitably terrible. If this is indeed the case, it will take a lot more than fact checkers and "information panels" to repair our warped distinction between truth and fiction.

In the meantime, it's up to each of us to become critical consumers of media and to shoulder the burden of validating the information we encounter. With the help of organizations like factcheck.org and politifact.com, we can all access the truth. But the broader systemic changes needed to oust fake news from our social landscape will be the responsibility of all of us.