The technologies used to create deepfake media (artificial audio, images and video generated by deep-learning neural networks) have advanced significantly in the last two years. Many security experts have warned of the potential impact this fabricated media could have on upcoming elections, and some researchers have begun to analyze the threats it poses to the business world and global markets. Even politicians are starting to take notice of how convincing these videos are and how they could be used to manipulate public opinion. As a result, research into detecting and preventing the creation of deepfake media is advancing as well.

The Technology Behind Deepfakes

Deepfake videos pose one of the most significant threats because they can literally fool our eyes and ears into believing events that never occurred. These videos are produced using generative adversarial networks (GANs), in which two competing neural networks are pitted against each other. The first, called the generator, uses still images of a subject to insert that person's face into existing target video frames. The second, the discriminator, attempts to identify the fake frames. As the generator repeatedly tries to fool the discriminator, it learns from each failure and becomes more adept at producing realistic frames; the end result is a deepfake video.
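To make that adversarial loop concrete, here is a minimal training-step sketch in PyTorch. The tiny fully connected networks, dimensions and learning rates are illustrative assumptions; real deepfake tools use much larger face-swap architectures, but the generator-versus-discriminator dynamic is the same.

# A toy GAN training step in PyTorch (illustrative sketch, not a real
# deepfake pipeline).
import torch
import torch.nn as nn

latent_dim, frame_dim = 64, 784  # e.g., flattened 28x28 "frames"

# Generator: maps random noise to a fake frame.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, frame_dim), nn.Tanh(),
)

# Discriminator: outputs the probability that a frame is real.
discriminator = nn.Sequential(
    nn.Linear(frame_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_frames: torch.Tensor) -> None:
    batch = real_frames.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Teach the discriminator to separate real frames from generated ones.
    fake_frames = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_frames), real_labels)
              + loss_fn(discriminator(fake_frames), fake_labels))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Teach the generator to fool the discriminator into labeling its
    #    output as real; this adversarial pressure is what sharpens the fakes.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))),
                     real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

Each call to training_step tightens the contest: the discriminator gets better at spotting fakes, which in turn forces the generator to produce more convincing frames.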

How AI Detects Deepfake Videos

So, if a trained neural network can be fooled, how can artificial intelligence (AI) help defend against this threat?

1. Blink Speeds

As deepfake technology began to emerge, researchers from the University at Albany, SUNY found that deepfake videos could be detected with AI by examining the blinking patterns of the subject in the video. Because GANs use still images as their source material, and because few published photos show someone mid-blink, the act of blinking was not well replicated in the generated videos. The researchers were able to train AI networks to identify this flaw. However, within months of the release of their research, new videos began to emerge that addressed the issue. Analysis of blink speeds and rates can still detect many deepfake videos, but the accuracy of these techniques is decreasing.
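A common way to measure blinking in video, and one plausible building block for this kind of check, is the eye aspect ratio (EAR) computed from facial landmarks. The sketch below is a hedged illustration, not the Albany team's exact method; the six-landmark eye ordering follows dlib's 68-point convention, and the thresholds are assumed heuristics.

# Blink-rate screening via the eye aspect ratio (EAR). Landmark detection
# is assumed to happen upstream (e.g., with dlib); thresholds are heuristics.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: array of six (x, y) landmarks ordered around one eye."""
    vertical = (np.linalg.norm(eye[1] - eye[5])
                + np.linalg.norm(eye[2] - eye[4]))
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blinks_per_minute(ear_series, fps: float, threshold: float = 0.2) -> float:
    """Count eye closures: EAR dips below the threshold, then recovers."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < threshold and not closed:
            closed = True
        elif ear >= threshold and closed:
            blinks += 1
            closed = False
    minutes = len(ear_series) / fps / 60.0
    return blinks / minutes if minutes else 0.0

# People at rest typically blink around 15-20 times per minute, so a video
# whose subject blinks far less often is one signal it may be synthetic.
def looks_suspicious(ear_series, fps: float, min_rate: float = 5.0) -> bool:
    return blinks_per_minute(ear_series, fps) < min_rate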

2. Facial Warping

Another limitation of deepfake GANs is that, because of current processing resource constraints, the still images they use must be processed at a common fixed resolution. However, the size, and therefore the resolution, of a face in a video changes as the camera angle shifts or the shot zooms in and out. Again, researchers from the University at Albany found a way to exploit this weakness: they trained their AI networks to detect the artifacts left in warped facial features by these resolution changes. Through this work, they achieved more than 90 percent accuracy in detecting fake videos.
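One way such a detector can be trained is by manufacturing its own negative examples: take a real face crop, force it through a fixed low resolution, and scale it back up, which reproduces the resampling artifacts the paragraph describes. The sketch below illustrates that idea using OpenCV; the 64-pixel working resolution and blur kernel are illustrative assumptions, not parameters from the published research.

# Manufacturing warping-artifact training data with OpenCV (illustrative).
import cv2
import numpy as np

def simulate_warping_artifacts(face_crop: np.ndarray,
                               gan_resolution: int = 64) -> np.ndarray:
    """Mimic a GAN's fixed-resolution pipeline: shrink the face to the
    generator's working size, then scale it back up. The round trip leaves
    resampling artifacts a binary classifier can learn to recognize."""
    h, w = face_crop.shape[:2]
    small = cv2.resize(face_crop, (gan_resolution, gan_resolution),
                       interpolation=cv2.INTER_AREA)
    # A light blur approximates the smoothing face-swap tools apply when
    # blending the generated face back into the surrounding frame.
    small = cv2.GaussianBlur(small, (5, 5), 0)
    return cv2.resize(small, (w, h), interpolation=cv2.INTER_CUBIC)

Pairing each original crop with its artifact-laden twin yields the real/fake examples needed to train such a classifier.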

3. Contextual Clues

What if, instead of analyzing individual frames of a video for artifacts, we could analyze the full context of the video? Early this year, researchers from UC Berkeley and USC jointly released a study in which they trained AI to understand the behavioral characteristics of subjects. Through this work, they were able to identify patterns in how a speaker's face, tone, posture and other characteristics change in relation to the information they are conveying. Since GANs build their videos from still images, it is virtually impossible for them to learn and reproduce these behaviors. As a result, the researchers announced a 95 percent accuracy rate and predicted it would reach as high as 99 percent by the time the 2020 U.S. primary election season begins.
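In spirit, this approach treats a speaker's correlated mannerisms as a soft biometric. The sketch below shows one simple way such a "behavioral fingerprint" could be compared across clips; the upstream extraction of per-frame signals (facial movements, head pose and so on) is assumed to happen elsewhere, and the distance threshold is a placeholder, not a figure from the study.

# Comparing "behavioral fingerprints" across clips (illustrative sketch).
# Inputs are per-frame feature matrices of shape (frames, features).
import numpy as np

def behavior_profile(features: np.ndarray) -> np.ndarray:
    """Summarize a clip as the pairwise correlations among its behavioral
    signals over time (flattened upper triangle of the correlation matrix)."""
    corr = np.corrcoef(features.T)
    upper = np.triu_indices_from(corr, k=1)
    return corr[upper]

def matches_reference(clip_features: np.ndarray,
                      reference_profile: np.ndarray,
                      max_distance: float = 1.0) -> bool:
    """A frame-by-frame face swap can copy appearance but not the speaker's
    correlated mannerisms, so a faked clip's profile drifts away from the
    real person's reference profile."""
    distance = np.linalg.norm(behavior_profile(clip_features)
                              - reference_profile)
    return distance <= max_distance  # threshold is an assumed placeholder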

An Ounce of Prevention

While detection techniques are impressive, it can be difficult to battle disinformation once it is in the public domain. For this reason, researchers are trying to find ways to disrupt the creation of deepfake videos in the first place. In June 2019, Cornell University released the results of a study that focused on a unique approach to preventing deepfake video creation. The researchers found that by inserting digital “noise” into a photograph, they could disrupt the facial recognition capabilities GANs rely on to learn the faces of their deepfake subjects. The noise, which is undetectable by the human eye, tricks facial recognition libraries into misidentifying faces in the modified photos. Soon, cameras could be equipped to insert this noise into digital photos automatically, making the photos much harder to use in deepfake creation.
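This kind of protective noise is closely related to adversarial perturbations. As a hedged illustration only, the sketch below applies a fast-gradient-style perturbation in PyTorch to push a photo's face embedding away from the subject's identity; the face_embedder model and the epsilon budget are stand-in assumptions, and the actual technique in the published study may differ.

# Protective noise as a fast-gradient-style perturbation (illustrative).
import torch
import torch.nn.functional as F

def add_protective_noise(photo: torch.Tensor,
                         face_embedder: torch.nn.Module,
                         epsilon: float = 2.0 / 255.0) -> torch.Tensor:
    """Shift each pixel by at most epsilon so the photo's face embedding
    moves away from the subject's identity while the image looks unchanged
    to the human eye."""
    photo = photo.clone().detach().requires_grad_(True)
    original = face_embedder(photo).detach()

    # Maximize the embedding's distance from the original identity.
    loss = -F.cosine_similarity(face_embedder(photo), original).mean()
    loss.backward()

    noisy = photo + epsilon * photo.grad.sign()
    return noisy.clamp(0.0, 1.0).detach()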

The threats from deepfake videos to the core pillars of our society are very real. However, researchers continue to do their part to help combat these threats. As the creators leverage greater capabilities of AI to produce more lifelike media, defenders can use those same capabilities to help detect and disrupt their activities. While there will always be more research to be done on both sides of this battle, there is reason to be confident that we can effectively mitigate the risks being created by this technology.

Learn more about CDW’s cybersecurity services and solutions.
