In 2018, MIT released a study indicating that fake news stories spread faster than real news on Twitter. The research garnered significant press attention at the time, but one of its most important conclusions was often overlooked: bots spread real news and fake news at the same rate, while humans were responsible for spreading most of the disinformation. These results were unfortunately validated again in the early days of the COVID-19 pandemic, when disinformation was repeatedly amplified by ordinary people across all major social media platforms, including Facebook and Twitter, independently of bots or other non-human actors.
These findings point to fundamental problems in the way social media is constructed, and in how that structure can be actively harmful at the most important moments, such as worldwide emergencies and elections. Malicious actors exploit the lack of information exposed to ordinary social media users, the polarization that social media exacerbates, and growing cynicism about the accuracy of information online in order to spread disinformation. Social media companies do not yet provide adequate tools for their users to distinguish between real news and misinformation. Instead, social media services are optimized for engagement and controversy to maximize time spent on site, and therefore advertising revenue.
Hello, My Name Is Human
Meet Duncan Gilbert. Duncan, and his 900 friends, were part of a network of sockpuppets (accounts that pretend to be different people but are actually run by the same operator) that Facebook took down in December 2019. Did you know he wasn’t real? Could you have told, based on the information Facebook showed you? Over 442,000 people engaged with content created by “Duncan” and his ilk.
Our own findings paint an even bleaker picture. Norton Labs conducted a study in which we presented participants with 20 Twitter accounts: 10 known to be legitimate and 10 previously identified as disinformation accounts. Participants labeled the accounts correctly only 70% of the time on average, which is only modestly better than guessing at random.
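To see just how close 70% is to guessing, consider a back-of-the-envelope check. Assuming, for illustration, that each participant labeled all 20 accounts independently and that pure guessing gets each one right with probability 0.5, we can compute how often a guesser would score 14 or more correct (i.e., 70%+) by chance alone:

```python
from math import comb

def p_at_least(k_min: int, n: int, p: float = 0.5) -> float:
    """Binomial tail: probability of k_min or more successes out of n trials."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(k_min, n + 1))

# 70% accuracy on 20 accounts means 14 correct labels.
p_value = p_at_least(14, 20)
print(f"P(>=14/20 correct by pure guessing) = {p_value:.3f}")  # ~0.058
```

Under these simplified assumptions, a pure guesser hits 70% or better nearly 6% of the time, so an individual score of 70% is only weakly distinguishable from chance.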
One of the primary reasons that disinformation accounts on Twitter and Facebook accrue so many followers is that these platforms do not expose enough information about suspected inauthentic accounts. Without expert knowledge (and even with it in some cases), accounts that are about to be removed from the platform look as legitimate to an average user as any other account.
Division is the Point
Social media also creates ideological echo chambers by connecting users who tend to agree with each other, leaving little room for dissenting viewpoints. A 2017 study from Harvard found that the US left-right divide is exceptionally polarized, with few people maintaining connections outside their own political camp.
Unfortunately, these ideological echo chambers make people more vulnerable to disinformation. Being part of a social media echo chamber can also distort your sense of reality, as it does for the 60% of millennials who cited Facebook as their primary news source in 2016. Making this problem even worse is the kind of content that elicits the most engagement: posts that provoke anger. A 2013 study found that anger was the predominant emotion in the most-shared posts on Weibo, a Chinese Twitter equivalent. Outrage is well known to be the emotion that drives the most engagement, and hence the most clicks and ad revenue.
Modern disinformation campaigns leverage these facts to widen the partisan gap. They target specific hot-button issues on both the left and the right, attempting to further erode dialogue between the two sides. For example, the same GRU disinformation team may create accounts supporting both the Black Lives Matter and Blue Lives Matter movements. These actors know that rapid spread, coupled with our reluctance to fact-check information that conforms to our prior beliefs, makes us vulnerable. The point is not for a specific point of view to win. The point is division.
You Can’t Believe Your Eyes
Trust in digital content is at an all-time low. In 2018, two-thirds of all Americans got their news from social media, and more than half of those said they expected the news to be “largely inaccurate”. In March 2020, Americans who got their news mostly from social media were the most likely to report seeing made-up news, and the least likely to correctly answer a question about when a COVID-19 vaccine was most likely to become available.
There is no good mechanism for determining whether something seen online, especially on social media, is real or fake. The rapid-fire nature of social media exacerbates this problem, allowing disinformation to spread faster than fact-checkers can correct it. This has been linked to serious real-world consequences, such as inter-ethnic violence in India and Myanmar.
Malicious actors take advantage of these shortcomings. For example, on the night of May 31, 2020, actors widely suspected of belonging to a Russian disinformation team used images from the ABC show Designated Survivor to claim that there had been an explosion in Washington, DC and that the city was under a blackout. This was done while most US-based reporters and fact-checkers were asleep, allowing the conspiracy theory to gain traction. It was widely debunked the following morning, but significant damage had already been done.
The notion of trust online is further challenged by DeepFakes: highly realistic video and audio created out of thin air. Creating DeepFakes is easy and low-cost, requires minimal human intervention, and can be done at scale.
Since the inception of DeepFakes, many experts have warned about their implications for politics, as troublemakers can easily fabricate video or audio of political targets saying controversial things. However, the real problem is doubt. Even though no politically targeted DeepFake has been confirmed so far, the sheer possibility of their creation has already sown doubt in people’s minds about what is real and what is fake. Such confusion has contributed to political crises in Gabon and Malaysia. Given the high stakes of politics, it is a matter of when, not if, DeepFakes will be used to spread misinformation and sabotage fair elections.
Trying Harder is Not the Answer
Disinformation is truly insidious because it targets our very ability to tell truth from fiction. It preys on our weaknesses and our prejudices. To combat disinformation, it is not enough for users to be vigilant, especially when even experts are not sure what to be vigilant for. Major social media companies should invest more time and resources in providing better tools for their users to recognize and limit the spread of misinformation.
It is absolutely vital that we intervene together and institute reforms to tackle this issue, as half-measures are no longer enough. Only through a partnership of academics, technology researchers, social media companies, and journalists is such systemic change possible. Or as Benjamin Franklin said, “we must hang together, or surely we shall hang separately.”
Editorial note: Our articles provide educational information for you. NortonLifeLock offerings may not cover or protect against every type of crime, fraud, or threat we write about. Our goal is to increase awareness about cyber safety. Please review complete Terms during enrollment or setup. Remember that no one can prevent all identity theft or cybercrime, and that LifeLock does not monitor all transactions at all businesses.
Copyright © 2020 NortonLifeLock Inc. All rights reserved. NortonLifeLock, the NortonLifeLock Logo, the Checkmark Logo, Norton, LifeLock, and the LockMan Logo are trademarks or registered trademarks of NortonLifeLock Inc. or its affiliates in the United States and other countries. Other names may be trademarks of their respective owners.