Updater
October 30, 2023, in technology

Are Deepfakes Getting Dangerous?

Deepfakes are nothing new, but the power of the latest AI tools and their wide availability have significantly raised the threat level, even as they offer improved methods of detection.


Tech commentators have been warning readers about the dangers of deepfake images and videos for over a decade. But as AI-generated content goes mainstream, anyone with an internet connection and a credit card can use AI to generate images and videos with unprecedented levels of realism.

Fortunately, AI-driven tools are also emerging to help find and combat deepfakes. In this article, we look at the potential dangers that deepfakes represent and how AI can help in preventing their abuse.

What is a deepfake?

According to a Forbes article that tried to raise the warning flag in 2020, “A combination of the phrases ‘deep learning’ and ‘fake’, deepfakes first emerged on the Internet in late 2017, powered by an innovative new deep learning method known as generative adversarial networks (GANs).”

A report from startup Deeptrace found that in early 2019 there were 7,964 deepfake videos online. Nine months later, that figure had nearly doubled to 14,678. The problem is proliferating.

By 2020, internet audiences had already seen several deepfakes go viral. One showed Mark Zuckerberg admitting Facebook wants to manipulate and exploit its users; another depicted Bill Hader morphing into Al Pacino on a talk show. Clearly, some are more believable and, therefore, more dangerous than others. A third video, for instance, showed President Obama using an expletive to refer to former president Trump, and it’s not hard to imagine how that could have led to more serious consequences than the Hader/Pacino video.

AI deepfake generation

How are these videos created? Neil Sahota explains, “To produce a convincing deepfake video, two machine learning models are utilized: one generates fake videos from a dataset of sample videos, and the other identifies whether the video is real or fake.” This training setup is known as a Generative Adversarial Network (GAN).

"The GAN technique trains these two models to compete against each other until the second model can no longer distinguish between real and fake videos. The outcome is a deepfake that appears realistic to human viewers.”

Because massive amounts of data — in this case, real video of existing people — are needed to train the AI to produce convincing deepfakes, celebrities and politicians are popular targets.
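
To make the adversarial setup concrete, here is a minimal sketch of that training loop in PyTorch. Everything here is an illustrative assumption: the tiny fully connected networks, the image size, and the random tensors standing in for real footage. A convincing deepfake generator would use deep convolutional models trained on hours of video of the target person.

```python
# A minimal sketch of the two-model GAN loop described above, using PyTorch.
import torch
import torch.nn as nn

latent_dim, img_dim, batch = 64, 28 * 28, 32

# Model 1: the generator turns random noise into a fake image.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)

# Model 2: the discriminator scores how likely an image is to be real.
D = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # raw logit; BCEWithLogitsLoss applies the sigmoid
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(batch, img_dim)      # stand-in for real video frames
    fake = G(torch.randn(batch, latent_dim))

    # Train the discriminator to separate real from fake...
    d_loss = (loss_fn(D(real), torch.ones(batch, 1))
              + loss_fn(D(fake.detach()), torch.zeros(batch, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # ...then train the generator to fool the updated discriminator.
    g_loss = loss_fn(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Training stops, in principle, when the discriminator’s accuracy falls to chance, at which point the generator’s output is hard to distinguish from the real samples.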

Funny, but potentially dangerous

The examples above illustrate a variety of possible uses for deepfakes. On the one hand, Bill Hader's morphing into Al Pacino is entertaining, and maybe a little mind-blowing, but isn’t dangerous. On the other hand, the Obama video illustrates just how easy it would be for bad actors to use this technology to create misleading news clips.

Forbes warns, “Imagine deepfake footage of a politician engaging in bribery or sexual assault right before an election; or of U.S. soldiers committing atrocities against civilians overseas; or of President Trump declaring the launch of nuclear weapons against North Korea. In a world where even some uncertainty exists as to whether such clips are authentic, the consequences could be catastrophic.”

Not only could videos like these lead to violence, but if they fool journalists, news organizations run the risk of losing credibility with their audiences. As the technology gets better, the consequences of deepfakes grow more serious. Luckily, AI technology also offers remedies.

Detecting deepfakes

When it comes to low-quality deepfakes, a little bit of discernment from viewers can help. Lip syncs may not match up, and skin or hair may seem off. “Studies have also shown that jewelry, teeth and skin that create erratic reflections can also reveal deepfakes,” according to University of Nevada, Reno’s Nevada Today.

There are some other details people can pay attention to if they suspect a video might be fake. According to Telefonica, look for:

  • Face and body — Humans are hard to replicate, “So, one way to detect forgery is to identify incongruities between the proportions of the body and face, or between facial expressions and body movements or postures.”
  • Video length — “A quality fake requires several hours of work and training of the algorithm, so fake videos are usually only a few seconds long.”
  • Inside the mouth — “The technology to generate deepfakes is not very good at faithfully reproducing the tongue, teeth and oral cavity when the person speaks. Therefore, blurs inside the mouth are indicative of a false image.”

If you want to test your deepfake detection skills, Massachusetts Institute of Technology (MIT) has created a site to help. The goal is to help “ordinary people think critically about the media that they consume.”

As it gets harder for the human eye to detect these issues, machines are stepping in. Often, the same computer scientists developing new AI tools to generate content also keep detection in mind. “For example, some companies specialize in both creating and detecting deepfakes using large, multi-language datasets,” reports Morgan Stanley. “Others use troves of data to create deepfake detectors for faces, voices and even aerial imagery, training their models by developing advanced deepfakes and feeding them into the models’ database.”
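
As a rough illustration of the detector side, the sketch below trains a small convolutional network as a binary classifier over face crops. The architecture, crop size, and random stand-in data are assumptions chosen for brevity; the commercial detectors described above use far deeper networks and very large labeled datasets.

```python
# A hedged sketch of a deepfake detector: a small CNN trained to label
# face crops as real or fake. Random tensors stand in for preprocessed
# video frames; real systems train on millions of labeled examples.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),  # one logit: > 0 means "predicted fake"
)

opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(100):
    frames = torch.randn(16, 3, 64, 64)            # stand-in 64x64 face crops
    labels = torch.randint(0, 2, (16, 1)).float()  # 1 = deepfake, 0 = real
    loss = loss_fn(detector(frames), labels)
    opt.zero_grad(); loss.backward(); opt.step()
```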

Today, Intel says its Real-Time Deepfake Detector, FakeCatcher, can detect deepfakes with 96% accuracy. IEEE Spectrum reports that Intel does this, in part, by studying “color changes in faces to infer blood flow, a process called photoplethysmography (PPG). The researchers designed the software to focus on certain patterns of color on certain facial regions and to ignore anything extraneous.” However, as detection technology evolves, so too do the deepfakes themselves. With that in mind, it seems regulation is in order.
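
The PPG idea can be sketched in a few lines. The code below is not Intel’s FakeCatcher, just a toy version of the underlying signal processing under assumed inputs: average the green channel over a face region frame by frame, then band-pass filter the result to the plausible human heart-rate band (roughly 0.7 to 3 Hz). A genuine face should show a periodic pulse in that band; an implausibly flat or noisy signal is one cue, among many, that the face may be synthetic.

```python
# Toy photoplethysmography (PPG) extraction; NOT Intel's implementation.
# Skin color varies subtly with blood flow, and synthesized faces often
# fail to reproduce that pulse signal.
import numpy as np
from scipy.signal import butter, filtfilt

def ppg_signal(face_frames: np.ndarray, fps: float = 30.0) -> np.ndarray:
    """face_frames: (T, H, W, 3) array of face crops from consecutive frames."""
    green = face_frames[:, :, :, 1].mean(axis=(1, 2))  # mean green per frame
    green = green - green.mean()                       # drop the DC component
    nyquist = fps / 2.0
    b, a = butter(3, [0.7 / nyquist, 3.0 / nyquist], btype="band")
    return filtfilt(b, a, green)                       # keep heart-rate band

# 10 seconds of random input; a real pipeline would pass cropped video frames.
pulse = ppg_signal(np.random.rand(300, 64, 64, 3))
```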

First steps in deepfake legislation

In the U.S., ten states already ban some deepfakes, usually deepfake pornography. NPR reports that Texas and California also have laws barring deepfakes that target candidates for office. Copyright law may also be useful in the fight against deepfakes: Universal Music used it to get an AI-generated song imitating the voices of Drake and The Weeknd pulled, and the Writers Guild of America (WGA) and the Screen Actors Guild (SAG-AFTRA) have been striking against studios that, among other things, want to use the scanned likenesses of background actors as digital extras in perpetuity.

Meanwhile, NPR reports, “The Biden administration and Congress have signaled their intentions to do something. But as with other matters of tech policy, the European Union is leading the way with the forthcoming AI Act, a set of rules meant to put guardrails on how AI can be used.”

In a world dominated by social media platforms, it’s hard to imagine how these regulations could keep bad actors outside of these regions from creating a deepfake, posting it to YouTube, Facebook, or TikTok, and waiting for it to go viral. That is why it’s increasingly important for journalists and the platforms that are likely to host these deepfakes to be aware of the phenomenon — and armed with the tools needed to find fakes and expose them.

