An increasing number of people rely on social media as their primary source of news, but this is problematic because social media and the internet have become sites of Fake News proliferation. The many algorithms and bot technologies available on social media platforms are major enablers of Fake News. Echo chambers thrive on Facebook as news feed algorithms tailor media content to the personal interests of individual users, leading many people to believe Fake News that reflects their personal opinions and then share it within their social media networks. Further, bots can be programmed to widely circulate Fake News and create a false impression of consensus around a given issue or opinion. The consequences of Fake News can include intensified political polarization, distorted public opinion, and election misinformation.
New communications research on the “two-step flow” offers valuable insights into how social media allows people to bypass mainstream media sources and receive information—or misinformation—through social contacts. This theory proposes that media information flows from the source to local ‘opinion leaders,’ who interpret and share it with their social networks. It highlights the active role of media users in selectively engaging with information. In the context of Fake News, we can see how misinformation might be circulated on social media by ‘opinion leaders’ who are either real people or bots.
Researchers Sterrett et al. (2019) conducted a survey experiment with American adults that simulated social media posts by either a trusted or untrusted public figure and directed respondents to an article manipulated to appear to come from either a reputable news source or a fake news source. Their findings reveal that those who more regularly use social media to get news are especially likely to pass along information simply because they trust the person who shared it. Unfortunately, this can further amplify the spread of misinformation: malicious bots have been shown to engage massive audiences through intense retweeting of Fake News, creating the impression that thousands of voices shared the same opinion. Such Fake News has eventually reached politicians and news personalities—real ‘opinion leaders’—who retweeted it, thereby extending the Fake News to an even larger audience.
Research also shows that people are affected by Fake News as a result of “the spiral of silence,” in which minority ideas become silenced in public discourse due to a perceived lack of support or popularity. This silencing allows dominant views to advance uncontested and creates an exaggerated picture of consensus. A key dimension of the spiral of silence theory is that individuals fear social isolation, prompting them to conform to dominant social views.
The media plays an important role in this because people refer to the media to determine which views are popular and which are not represented. Thus, if Fake News circulates and garners apparent consensus, people with dissenting views are more likely to become silenced—thereby allowing Fake News to perpetuate. Researchers Spaiser et al. (2017), studying the spiral of silence in Russian political discourse on Twitter during 2011–2012, found that anti-Putin messages originally dominated Twitter, but there was then a surge in pro-Putin messages from a group of highly active Twitter users, possibly including bots. Consequently, the political discourse on Twitter shifted to predominantly pro-Putin (whether driven by genuine supporters or fake bot accounts), leading opponents to perceive diminished support for their perspectives, which ultimately caused them to stop tweeting in protest of Putin. This demonstrates how Fake News operations can disempower critics and successfully manipulate public opinion on social media.
The results of these studies are concerning because Fake News often works to incite polarization by negatively misrepresenting particular groups, thereby perpetuating social inequalities.