Norton Labs

Don’t Blame Your Uncle for Sharing Fake News. It’s Not His Fault!

An Analysis of the Effects of Disinformation and DeepFakes

In 2018, MIT released a study indicating that fake news stories spread faster than real news on Twitter. The research garnered significant press attention at the time, but one of its most important conclusions was often overlooked: bots spread real news and fake news at the same rate, while humans were responsible for spreading most of the disinformation. These findings were unfortunately validated again in the early days of the COVID-19 pandemic, when disinformation was repeatedly amplified by ordinary people across all major social media platforms, including Facebook and Twitter, independently of bots or other non-human actors.

These findings point to fundamental problems in how social media is constructed, and how that structure can be actively harmful at the most important moments, such as worldwide emergencies and elections. Malicious actors exploit the limited information exposed to ordinary social media users, the polarization that social media exacerbates, and the growing cynicism about the accuracy of online information in order to spread disinformation. Social media companies do not yet provide adequate tools for their users to distinguish between real news and misinformation. Instead, their services are optimized for engagement and controversy, maximizing time spent on site and, therefore, advertising revenue.

Hello, My Name Is Human

Meet Duncan Gilbert. Duncan and his 900 friends were part of a network of sockpuppets (accounts that pretend to be different people but are actually run by the same person) that Facebook took down in December 2019. Did you know he wasn’t real? Could you have told, based on the information Facebook showed you? Over 442,000 people engaged with content created by “Duncan” and his ilk.

Figure 1: Sockpuppet accounts removed by Facebook in December 2019. The profile pictures were likely generated using an AI/ML technique called a GAN [21, 22].
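For readers curious about the technique: a GAN (generative adversarial network) trains two neural networks against each other. A generator fabricates images from random noise, while a discriminator tries to tell fabricated images from real ones; each round of the contest makes the fakes more convincing. Below is a minimal, illustrative PyTorch sketch of that adversarial training loop. The toy dimensions and layer sizes are our own choices for clarity; the faces in Figure 1 would have come from a far larger model trained on photographs of real people.

```python
import torch
import torch.nn as nn

# Toy sizes for illustration; production face GANs are vastly larger.
LATENT_DIM, IMG_DIM = 64, 28 * 28

# Generator: maps random noise to a fake "image" vector.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: outputs a logit scoring how "real" an input looks.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    fake_images = generator(torch.randn(batch, LATENT_DIM))

    # 1) Teach the discriminator to separate real from fake.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real_images), torch.ones(batch, 1))
              + loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    d_opt.step()

    # 2) Teach the generator to fool the updated discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()

# One training step on a batch of stand-in "real" images.
train_step(torch.rand(32, IMG_DIM) * 2 - 1)
```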

Our original findings paint an even bleaker picture. Norton Labs conducted a study in which we presented participants with 20 Twitter accounts: 10 known to be legitimate and 10 previously identified as disinformation accounts. Participants labeled the accounts correctly only 70% of the time on average, which is only modestly better than the 50% expected from random guessing.
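To put that number in perspective: a participant who labels 20 accounts and gets 14 right (70%) is barely distinguishable from someone flipping a coin. The quick SciPy check below makes this concrete; the figures mirror our study’s setup, and the snippet itself is purely illustrative (it requires SciPy 1.7 or later).

```python
from scipy.stats import binomtest

# One participant labels 20 accounts (10 real, 10 disinformation)
# and gets 14 right (70%). Does that beat random guessing (p = 0.5)?
result = binomtest(k=14, n=20, p=0.5, alternative="greater")
print(f"p-value: {result.pvalue:.3f}")  # ~0.058: not significant at the 5% level
```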

One of the primary reasons that disinformation accounts on Twitter and Facebook accrue so many followers is that these platforms do not expose enough information about suspected inauthentic accounts. Without expert knowledge (and even with it in some cases), accounts that are about to be removed from the platform look as legitimate to an average user as any other account.

Division is the Point

Social media also creates ideological echo chambers by connecting users who tend to agree with one another, leaving little room for dissenting viewpoints. A 2017 Harvard study found that the US left-right divide is exceptionally polarized, with few people maintaining connections outside their own political camp.

Unfortunately, these ideological echo chambers make people more vulnerable to disinformation. Being part of a social media echo chamber can also distort your sense of reality, as it does for the 60% of millennials who cited Facebook as their primary news source in 2016. Making this problem even worse is the emotion that elicits the most engagement: anger. A 2013 study found that anger was the predominant emotion in the most-shared posts on Weibo, a Chinese Twitter equivalent. Outrage is well known to be the emotion that drives the most engagement, and hence the most clicks and ad revenue.

Modern disinformation campaigns leverage these facts to widen the partisan gap. They target specific hot-button issues on the left and right, attempting to further erode dialog between the two sides. For example, the same GRU disinformation team may create accounts supporting both the Black Lives Matter and Blue Lives Matter movements. These actors know that rapid spread, coupled with our reluctance to fact-check information that conforms to our prior beliefs, makes us vulnerable. The point is not for a specific point of view to win. The point is division.

You Can’t Believe Your Eyes

Trust in digital content is at an all-time low. In 2018, two-thirds of all Americans got their news from social media, and more than half of those said they expected that news to be “largely inaccurate”. In March 2020, Americans who got their news mostly from social media were the most likely to report seeing made-up news, and the least likely to correctly answer a question about when a COVID-19 vaccine would likely become available.

There is no good mechanism to determine whether something seen online, especially on social media, is real or fake. The rapid-fire nature of social media exacerbates this problem, allowing disinformation to spread faster than fact-checkers can correct it. This has been linked to serious real-world consequences, such as inter-ethnic violence in India and Myanmar.

Yet critics charge that social media companies have done little to comprehensively address these issues. While Facebook has been used for disinformation since at least 2011, it has struggled to provide tools for readers to determine whether something they are reading is true or false. Recently, Facebook instituted a so-called “virality circuit breaker” to help slow the spread of disinformation before fact-checkers have a chance to verify a story. At the same time, however, Facebook is attempting to shut down NYU’s Ad Library transparency project, which aids journalists and researchers in investigating disinformation campaigns, citing violations of its terms of use and user privacy.
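The mechanics behind such a circuit breaker are simple in principle: when a post’s share velocity spikes past a threshold, the platform pauses algorithmic amplification until a human fact-checker clears the content. The sketch below illustrates the idea; the class name, the 60-second window, and the threshold are our own illustrative choices, not a description of Facebook’s actual system.

```python
import time
from collections import deque

class ViralityCircuitBreaker:
    """Pause algorithmic amplification of a post whose share rate spikes."""

    def __init__(self, max_shares_per_minute=100):
        self.max_shares_per_minute = max_shares_per_minute
        self.share_times = deque()  # timestamps of recent shares
        self.tripped = False        # True once amplification is paused

    def record_share(self, now=None):
        now = time.time() if now is None else now
        self.share_times.append(now)
        # Slide the window: keep only shares from the last 60 seconds.
        while self.share_times and now - self.share_times[0] > 60:
            self.share_times.popleft()
        if len(self.share_times) > self.max_shares_per_minute:
            self.tripped = True  # halt recommendations pending review

    def clear(self):
        """A fact-checker verified the post; resume normal distribution."""
        self.tripped = False

# Simulate a sudden burst of 150 shares in 15 seconds.
breaker = ViralityCircuitBreaker(max_shares_per_minute=100)
for i in range(150):
    breaker.record_share(now=1000.0 + i * 0.1)
print(breaker.tripped)  # True: the post is paused until reviewed
```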

Malicious actors take advantage of these shortcomings. For example, on the night of May 31, 2020, actors widely suspected of being part of a Russian disinformation team used images from the ABC show Designated Survivor to claim that there had been an explosion in Washington, DC and that the city was under a blackout. This was done while most US-based reporters and fact-checkers were asleep, allowing the conspiracy theory to gain traction. It was widely debunked the following morning, but by then significant damage had been done.

The notion of trust online is further challenged by DeepFakes: highly realistic video and audio fabricated out of thin air. Creating DeepFakes is easy and low-cost, requires minimal human intervention, and can be done at scale.

Since the inception of DeepFakes, many experts have warned about their implications for politics, as troublemakers can easily create video or audio of political targets saying controversial things. However, the real problem is doubt. Even though a politically targeted DeepFake has not been encountered so far, the sheer possibility of their creation has already sown doubt in people’s minds about what is real and what is fake. Such confusion has led to political crises in Gabon and Malaysia. Given the high stakes of politics, it is a matter of when, not if, DeepFakes will be used to spread misinformation and sabotage fair elections.

Trying Harder is Not the Answer

Disinformation is truly insidious because it targets our very ability to tell truth from fiction. It preys on our weaknesses and our prejudices. To combat disinformation, it is not enough for users to be vigilant, especially when even experts are unsure what to be vigilant for. Major social media companies should invest more time and resources in providing better tools for their users to recognize and limit the spread of misinformation.

It is absolutely vital that we intervene together and institute reforms to tackle this issue, as half-measures are no longer enough. Only through a partnership of academics, technology researchers, social media companies, and journalists is such systemic change possible. Or, as Benjamin Franklin said, “we must all hang together, or assuredly we shall all hang separately.”

References

  1. https://www.americanpressinstitute.org/publications/reports/survey-research/news-trust-digital-social-media/
  2. https://www.journalism.org/2020/03/25/americans-who-primarily-get-news-through-social-media-are-least-likely-to-follow-covid-19-coverage-most-likely-to-report-seeing-made-up-news/
  3. https://slate.com/news-and-politics/2015/08/hurricane-katrina-10-years-later-the-myths-that-persist-debunked.html
  4. https://www.vox.com/2018/7/19/17594156/whatsapp-limit-forwarding-fake-news-violence-india-myanmar
  5. https://www.rollingstone.com/culture/culture-features/misinformation-facebook-george-floyd-protest-1008909/
  6. https://www.wired.com/2016/12/photos-fuel-spread-fake-news/
  7. https://dash.harvard.edu/bitstream/handle/1/33759251/2017-08_electionReport_0.pdf
  9. https://www.nytimes.com/2020/08/27/technology/what-if-facebook-is-the-real-silent-majority.html
  9. https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2019/09/CyberTroop-Report19.pdf
  10. https://www.isdglobal.org/isd-publications/reply-all-inauthenticity-and-coordinated-replying-in-pro-chinese-communist-party-twitter-networks/
  11. https://www.wsj.com/articles/facebook-knows-it-encourages-division-top-executives-nixed-solutions-11590507499
  12. http://science.sciencemag.org/content/359/6380/1146.full
  13. https://cpb-us-e1.wpmucdn.com/sites.suffolk.edu/dist/1/170/files/2009/08/Movement-Began-With-Outrage.pdf
  14. https://www.nytco.com/press/adobe-new-york-times-company-twitter-announce-content-authenticity-initiative/
  15. https://www.scientificamerican.com/article/biases-make-people-vulnerable-to-misinformation-spread-by-social-media/
  16. https://www.theguardian.com/us-news/2016/oct/01/millennials-facebook-politics-bias-social-media
  17. https://arxiv.org/pdf/1309.2402v1.pdf
  18. https://www.nytimes.com/2014/07/06/fashion/social-media-some-susceptible-to-internet-outrage.html
  19. https://www.journalism.org/2018/09/10/news-use-across-social-media-platforms-2018/
  20. https://cyber.harvard.edu/publications/2017/08/mediacloud
  21. https://about.fb.com/news/2019/12/removing-coordinated-inauthentic-behavior-from-georgia-vietnam-and-the-us/
  22. https://www.wired.com/story/facebook-removes-accounts-ai-generated-photos/
  23. https://www.npr.org/2020/09/22/915676948/can-circuit-breakers-stop-viral-rumors-on-facebook-twitte
  24. https://www.wsj.com/articles/facebook-seeks-shutdown-of-nyu-research-project-into-political-ad-targeting-11603488533

Editorial note: Our articles provide educational information for you. NortonLifeLock offerings may not cover or protect against every type of crime, fraud, or threat we write about. Our goal is to increase awareness about cyber safety. Please review complete Terms during enrollment or setup. Remember that no one can prevent all identity theft or cybercrime, and that LifeLock does not monitor all transactions at all businesses.

Copyright © 2020 NortonLifeLock Inc. All rights reserved. NortonLifeLock, the NortonLifeLock Logo, the Checkmark Logo, Norton, LifeLock, and the LockMan Logo are trademarks or registered trademarks of NortonLifeLock Inc. or its affiliates in the United States and other countries. Other names may be trademarks of their respective owners.

About the Author

Daniel Kats

Senior Principal Researcher

Daniel earned his Master’s degree in the Systems & Networking Group at the University of Toronto. His research involves building machine learning systems for security, and the subtle impact of those systems on the people who use them.

About the Author

Dr. Saurabh Shintre

Senior Principal Researcher, NortonLifeLock Research Group

Saurabh's research interests lie in the areas of cryptography, web & network security, and machine learning. He has published over 20 papers and patents in the areas of security and privacy and holds a PhD in computer security from Carnegie Mellon University.
