Deepfakes: what they are, how they work and how to protect against malicious usage in the digital age


December 06, 2023


What are deepfakes?

Deepfakes are synthetic media (photos, videos and audio recordings) created using artificial intelligence (AI). In the most common scenarios, real data is manipulated to create highly convincing content that can be very hard to distinguish from legitimate media. For example, the likeness of a person in an existing image or video (including their face and voice) is replaced with someone else’s. This process can create highly convincing fake videos and audio recordings that appear to show individuals saying or doing things that, in reality, they never did.

“A common setup in deepfake creation is altering a person’s face by reenactment, replacement, editing or synthesis using techniques known as face swap, face transfer, facial attribute manipulations or inpainting,” Bitdefender researchers explain. “These approaches result in local manipulations and are traditionally GAN-based. However, recently, a new class of methods – denoising diffusion probabilistic models – have shown impressive generative capabilities raising new concerns regarding the authenticity of the images we see every day on the Internet.”

Can deepfakes be dangerous?

Yes. The technology to create deepfakes is highly accessible, and the risk of misuse in cybercrime, social media impersonation, political propaganda and disinformation is very high. Threat actors and other malicious individuals can use this fake content to create false narratives, blackmail victims, or impersonate public figures to conduct fraud and damage reputations.

How are deepfakes detected?

There are many ways to detect a deepfake image or video, including:

Visual inspection: Deepfake images and videos can contain artifacts or abnormalities that are not present in real footage, such as flickering, distorted frames or mismatched lip movements.

Metadata analysis: The metadata in a digital file can be used to trace its origin and help determine authenticity. By analyzing a video's metadata, it may be possible to determine whether the file was manipulated or edited.
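To make the metadata idea concrete, here is an illustrative sketch of a heuristic check over an already-extracted metadata dictionary. The field names, editor list and heuristics below are assumptions chosen for demonstration, not part of any Bitdefender tool:

```python
# Illustrative sketch: heuristic checks on a metadata dictionary
# (e.g., EXIF fields extracted beforehand with an external tool).
# Field names and the editor list are assumptions for demonstration.

EDITING_SOFTWARE_HINTS = ("photoshop", "gimp", "after effects", "faceswap")

def metadata_red_flags(metadata: dict) -> list[str]:
    """Return a list of reasons the metadata looks suspicious."""
    flags = []
    # Genuine camera footage usually records the device make/model.
    if not metadata.get("Make") and not metadata.get("Model"):
        flags.append("no camera make/model recorded")
    # A 'Software' tag naming an editor suggests post-processing.
    software = metadata.get("Software", "").lower()
    if any(hint in software for hint in EDITING_SOFTWARE_HINTS):
        flags.append(f"edited with: {metadata['Software']}")
    # Creation and modification timestamps that disagree hint at re-saving.
    created = metadata.get("DateTimeOriginal")
    modified = metadata.get("ModifyDate")
    if created and modified and created != modified:
        flags.append("file modified after creation")
    return flags
```

A file whose metadata reports `"Software": "Adobe Photoshop 2023"` would be flagged as edited, while a clean camera original with matching timestamps would pass. Of course, metadata can itself be stripped or forged, which is why it is only one signal among several.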

Forensic analysis: Forensic techniques, such as analyzing audio and video patterns or comparing a clip against a known original, can also be used to detect a deepfake.

Machine learning: Machine-learning algorithms can be trained on large datasets of real and fake videos to classify new videos as either real or fake.
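As a toy illustration of that idea (not a production detector — real systems train deep networks on raw frames), a nearest-centroid classifier over hypothetical hand-crafted per-video features might look like this. All feature names and numbers below are made up for the example:

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(real_features, fake_features):
    """'Training' here is just computing one centroid per class."""
    return {"real": centroid(real_features), "fake": centroid(fake_features)}

def classify(model, features):
    """Assign the label of the nearest class centroid (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(model, key=lambda label: dist(model[label], features))

# Hypothetical features per video: [blink_rate, lip_sync_error, texture_score]
model = train(
    real_features=[[0.30, 0.05, 0.9], [0.28, 0.07, 0.8]],
    fake_features=[[0.05, 0.40, 0.3], [0.10, 0.35, 0.4]],
)
print(classify(model, [0.29, 0.06, 0.85]))  # near the "real" centroid
```

The sketch captures the core of the approach: learn what labeled real and fake examples look like in some feature space, then assign new samples to the closer class.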

It is important to note that, as AI-generated and face-swapped videos become more convincing, it will become even harder to assess their legitimacy correctly. As such, a more accurate assessment may combine several of the methods above.

Meanwhile, Bitdefender computer vision scientists are constantly working on new detection methods for manipulated images.

How can common internet users tell the difference between a legitimate video and a deepfake?

Before delving into the particularities and visual clues of deepfakes, we want to emphasize that fake media files can be based on:

  • Pictures and videos of real people that have been digitally manipulated to make it seem like the individuals said or did things that have not happened in reality
  • The creation of completely new identities that do not exist in real life

Here’s what you need to pay attention to:

Visual and auditory artifacts

  • Voice-mouth movement synchronization is unnatural.
  • The video looks very unnatural if you slow it down.
  • Image blurring, lack of shadows, artificial lighting.
  • Face symmetry inconsistencies, such as unnatural eyes, ears, teeth or hair, and skin that looks too smooth or too wrinkly. Note that some of these visual cues are less evident in more recent deepfakes.
  • The person in the video blinks too fast or too slowly, or shows unnatural eye movement (or none at all).
  • Eyeglasses have an unnatural reflection that does not move at the same time as the individual’s face.
  • The voice in the video sounds synthesized and pauses at inappropriate moments.
  • The generated voice has lower audio quality (a lower bitrate) or pronounces certain words differently.
  • Inconsistency between the message sent and the individual’s facial expression, or lack of emotion.
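To make the blinking cue concrete, a back-of-the-envelope check could flag clips whose blink rate falls outside a plausible human range. The 8-30 blinks-per-minute bounds below are illustrative assumptions, not calibrated values (typical adult rates are often cited around 15-20 per minute):

```python
def blink_rate_suspicious(blink_count: int, duration_seconds: float,
                          low: float = 8.0, high: float = 30.0) -> bool:
    """Flag a clip whose blinks-per-minute falls outside a plausible range.

    The [low, high] bounds are illustrative assumptions, not calibrated values.
    """
    if duration_seconds <= 0:
        raise ValueError("duration must be positive")
    blinks_per_minute = blink_count * 60.0 / duration_seconds
    return not (low <= blinks_per_minute <= high)

print(blink_rate_suspicious(blink_count=2, duration_seconds=60))   # True: too few blinks
print(blink_rate_suspicious(blink_count=16, duration_seconds=60))  # False: normal rate
```

A real detector would first need to count blinks automatically from video frames; this sketch only illustrates the final heuristic, and any single cue like this can produce false positives on legitimate footage.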

How do scammers use deepfake technology in fraud?

The implications of deepfakes for cybersecurity and online scams are significant. Since the technology to create deepfakes is available to individuals across the globe, AI-generated content is already being used with malicious intent, including:

  • Scams and fraud on social media platforms such as Facebook, YouTube, Twitter and Instagram
  • Damaging the reputation of individuals by creating compromising videos or images
  • Bypassing biometric authentication through image and sound manipulation
  • Spreading fake news and disinformation (e.g., around the wars in Ukraine and Israel-Palestine)
  • Blackmail and extortion
  • Identity theft

Note: Non-malicious deepfakes do exist. For example, Hollywood has been using AI-generated videos in movies to either age or de-age actors for their roles.

6 tips to help you protect yourself from risks stemming from deepfakes

1. Be Skeptical: Maintain a critical eye whenever you encounter sensational or controversial videos or audio clips on social media, especially if the source is not reputable. If something sounds too good to be true, or the information is sensitive (for example, messages from authorities, service offers or material gains), always make additional checks.

2. Verify Sources: Check information against multiple trusted sources before believing or sharing anything that could be based on a deepfake. Check text, video and audio.

3. Use Technology: Use security tools such as Bitdefender's solutions, which can help detect and block phishing attempts and other illicit activities that could leverage deepfakes.

4. Stay Informed: Educate yourself about the latest developments in deepfake technology, new scams and the methods used to detect them.

5. Protect Your Identity: Use services such as Bitdefender Digital Identity Protection to monitor your personal information and get alerts if it is used online, including possible misuse of your likeness in deepfakes. Bitdefender Digital Identity Protection also lets you sniff out social media impostors who could use your identity to ruin your reputation or conduct scams in your name.

6. Report Suspicious Activity: Whenever you encounter deepfakes (videos, photos or audio) or are a victim of impersonation, report it to the social media platform and the authorities such as the Internet Crime Complaint Center (IC3), and local police.

Remember, the more proactive you are in protecting your digital identity and privacy, the less likely you are to fall victim to malicious use of deepfakes and online impersonations.

Take control of your online privacy and protect what's important with Bitdefender today!




Alina is a history buff passionate about cybersecurity and anything sci-fi, advocating Bitdefender technologies and solutions. She spends most of her time between her two feline friends and traveling.
