Norton Labs | Posted: 4 min read

What is a deepfake anyway?

A deep dive into what deepfakes are and how to spot them

Hello, my name is Becky Sanders. I studied at Central University in downtown USA, my interests include being human, and I really want to be your friend on LinkedIn. This is my profile picture. Can’t you tell I’m real?

Unfortunately, I’m not Becky. In fact, Becky Sanders doesn’t exist. Her profile photo is a deepfake: an artificially created piece of media, usually a photo or video. Sometimes a deepfake shows a person doing something they have never done. Other times, photos are created of people who have never existed by blending the faces of many real people together. Newer deepfakes can even replicate a person’s voice to make them say something they might never say in real life.

The end products can often look very realistic because they are powered by a cutting-edge deep learning technique called a Generative Adversarial Network (GAN). In fact, the term “deepfake” is a combination of the terms “deep learning” and “fake.” But the technology is still far from perfect. Especially with video, most deepfakes have subtle imperfections that can be spotted if you know where to look.
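To make the idea of a GAN more concrete, here is a minimal, illustrative sketch in PyTorch (not the code behind any particular deepfake tool): a generator network learns to turn random noise into images, while a discriminator network learns to tell generated images from real ones, and each improves by trying to beat the other. The network sizes, image resolution, and training details are assumptions chosen only to keep the sketch short.

```python
# Illustrative GAN sketch: the sizes below (64x64 grayscale, tiny dense
# networks) are assumptions for brevity, not a real face-generation model.
import torch
import torch.nn as nn

latent_dim, img_pixels = 100, 64 * 64  # assumed: flattened 64x64 images

generator = nn.Sequential(            # random noise -> fake image
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_pixels), nn.Tanh(),
)
discriminator = nn.Sequential(        # image -> probability it is real
    nn.Linear(img_pixels, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    # real_images: tensor of shape (batch, img_pixels)
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator step: push real images toward "real", generated toward "fake".
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise).detach()
    d_loss = (loss(discriminator(real_images), real_labels)
              + loss(discriminator(fake_images), fake_labels))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator call generated images "real".
    noise = torch.randn(batch, latent_dim)
    g_loss = loss(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Real face generators such as StyleGAN use far larger convolutional networks and many training refinements, but the adversarial loop sketched above is the core idea.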

How to Spot Deepfakes

If you’re worried that a photo or video may be a deepfake, there are some simple telltale signs. Specifically, you should look for digital incongruities: areas that look like they don’t belong together. Think of the children’s puzzles that ask, “what’s wrong with this picture?” For example, a swing with only one rope, or a reflection in a mirror facing the wrong way. In deepfakes, the incongruities can be glasses that don’t fully reach the ear or a beard that doesn’t move together with the face when the person talks. Lips are another common place to find errors: deepfakes often have lips that don’t look natural or don’t match the person’s other facial features.

You can also look for digital artifacts: details that are difficult for computers to generate correctly right now. These include facial textures (the face, especially the forehead, might be too smooth), shadows (which might fall in the wrong place), or unrealistic eyebrows.

Photo Credit: AP Photo

Finally, in videos, pay attention to blinking. Does the person appear to blink too little or too much?
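For readers who want to automate this kind of check, a simple heuristic from the research literature is the eye aspect ratio (EAR): a ratio of distances between eye landmarks that drops sharply whenever the eyes close. The sketch below assumes OpenCV, dlib, and dlib’s publicly available 68-point facial landmark model; the EAR threshold and file name are illustrative assumptions, not a Norton detection tool.

```python
# Minimal blink-counting sketch using the eye-aspect-ratio (EAR) heuristic.
# Assumes the dlib 68-point landmark model file is available locally.
import cv2
import dlib
from scipy.spatial import distance

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed path

def eye_aspect_ratio(pts):
    # pts: six (x, y) landmarks around one eye, ordered p1..p6
    a = distance.euclidean(pts[1], pts[5])
    b = distance.euclidean(pts[2], pts[4])
    c = distance.euclidean(pts[0], pts[3])
    return (a + b) / (2.0 * c)

def count_blinks(video_path, ear_threshold=0.21):  # threshold is an assumption
    cap = cv2.VideoCapture(video_path)
    blinks, eye_closed = 0, False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            shape = predictor(gray, face)
            # Landmarks 36-41 and 42-47 outline the left and right eyes.
            left = [(shape.part(i).x, shape.part(i).y) for i in range(36, 42)]
            right = [(shape.part(i).x, shape.part(i).y) for i in range(42, 48)]
            ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
            if ear < ear_threshold:
                eye_closed = True
            elif eye_closed:      # eye re-opened: count one blink
                blinks += 1
                eye_closed = False
    cap.release()
    return blinks
```

Dividing the blink count by the clip’s duration gives a blink rate you can compare against typical human rates of roughly 15 to 20 blinks per minute; some early deepfakes reportedly fall well outside that range.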

What Kinds of Deepfakes Exist?

Early deepfakes focused on changing faces in videos. For example, replacing the face of an actor with that of another person in a movie. These deepfakes, though not very convincing, had severe negative consequences when they were deployed without people’s consent. This is especially true because many people are still not aware that deepfakes can be created.

Artificial intelligence researchers quickly realized that this technology could be used to impersonate world leaders and other influential figures. They therefore demonstrated a proof of concept in 2018: a deepfake video of former President Barack Obama warning about the dangers of deepfakes.

However, in this video, the comedian Jordan Peele voiced Obama, because fully realistic-sounding synthetic audio is still several years away. Even with advanced technologies, the generated voice can sound metallic if you listen closely.

Using the tips above, you can see that this video is fake. For example, President Obama’s forehead is abnormally smooth. Next, the wrinkles around his mouth do not move naturally as he speaks. Finally, the shadows on his cheekbones appear unnatural compared to the direction of the light in the rest of the video.

Another use of deepfakes has been generating fake profile pictures for sock puppet accounts: fictitious online identities created for the purpose of deception. For example, a LinkedIn profile picture was generated for “Katie Jones,” who does not exist, almost certainly for the purposes of state-sponsored espionage. The same technique has also been used on Twitter and YouTube to power armies of sock puppet accounts.

Photo Credit: PC Magazine

More recently, deepfakes have been deployed in Russia’s invasion of Ukraine, including a forged video of President Volodymyr Zelensky surrendering. This deepfake was planted on a hacked Ukrainian news website and widely disseminated from there. Again, if you know what to look for, it is obvious that the video is fake. The beard does not move consistently with the face, and the skin tone of the neck does not match that of the face. The face is also cropped in an awkward way to keep viewers from examining the forehead and hairline, where fakes are easiest to spot. Facebook has identified and taken down this video.

There have been other deepfakes deployed in this conflict, including one of Vladimir Putin announcing peace with Ukraine. Again, Putin’s forehead is unnaturally smooth, the movement of his cheeks and mouth looks unnatural, and the hairline does not look realistic.

Future of Deepfakes

Going forward, we can expect deepfakes to be integrated into broader information warfare and the disinformation landscape. The Zelensky video in particular combined hacking, deepfake creation, and dissemination through state-sponsored propaganda channels and social media bots.

We can also anticipate the use of deepfakes in catfishing, phishing, and other threats that pair traditional cyberattack techniques with the manipulation of human emotion.

To counter these threats, we must take an equally holistic approach: secure our endpoints and servers with cutting-edge cybersecurity, detect deepfakes using both computer-assisted and manual methods, and stop the spread of disinformation by bots and sock puppets on social media.

Editorial note: Our articles provide educational information for you. NortonLifeLock offerings may not cover or protect against every type of crime, fraud, or threat we write about. Our goal is to increase awareness about cyber safety. Please review complete Terms during enrollment or setup. Remember that no one can prevent all identity theft or cybercrime, and that LifeLock does not monitor all transactions at all businesses.

Copyright © 2022 NortonLifeLock Inc. All rights reserved. NortonLifeLock, the NortonLifeLock Logo, the Checkmark Logo, Norton, LifeLock, and the LockMan Logo are trademarks or registered trademarks of NortonLifeLock Inc. or its affiliates in the United States and other countries. Other names may be trademarks of their respective owners.

About the Author

Daniel Kats

Senior Principal Researcher

Daniel earned his Master’s degree in the Systems & Networking Group at the University of Toronto. His research involves building machine learning systems for security and studying the subtle impact of those systems on the people who use them.
