Media Forensics: Combating the increasing unreality of content

The fast-moving algorithm war on video fakes.

Fake videos can be malicious, damaging and dangerous. Luckily, they can often be detected and debunked in the blink of an eye. Literally. Fake videos are becoming more realistic and convincing as technology advances, but no matter how sophisticated they are, they usually leave some telltale sign that gives them away. The rate at which a person blinks, for example, is often used to flag fake videos.

As fake videos proliferate, experts in the field of digital media forensics are racing to stay one step ahead and develop tools to detect when a video has been digitally manipulated or fabricated. Distinguishing fake from real is more urgent than ever as the ability to make fake videos that look, sound and feel authentic—and to pass them off as the real thing—becomes easier. In 2018, new software became available that enables virtually anyone to manipulate videos and distribute false messages online.

The term for these videos signals a new level of sophistication in misinformation. They're not just fake; they're deepfakes: incredibly authentic-looking and hard to detect with the naked eye.

Deepfake technology uses artificial intelligence and machine learning to manipulate videos and make people appear to say or do something that never happened in real life. The technology allows users to swap faces in a video and thus potentially put anyone in a completely false situation or scenario. In the past year, deepfake technology was even used to create fake pornographic videos of celebrities.

As deepfake software becomes more sophisticated and fake videos become more convincing, their prevalence is a cause for real concern. In politics, for instance, deepfakes could be used to make it appear that a politician said or did something that in fact never occurred. The technology may also be used to create believable fake news videos or perpetrate malicious hoaxes.

Siwei Lyu, a professor of computer science at the University at Albany, State University of New York, is at the forefront of the battle to detect and debunk deepfakes. Lyu and his colleagues developed a tool that analyzes how often a person blinks in a video to verify its authenticity.

“Deepfakes are completely synthesized by a machine learning algorithm and that algorithm leaves certain fingerprints or traces in the final video,” Lyu explains. By looking for those signs, one can tell if a video is fake.

One of the most telling “fingerprints” is that the subjects in fake videos blink their eyes a lot less frequently than real people do, according to Lyu. That’s because deepfake algorithms use many images of a person’s face to learn how to synthesize it into a fake video—but it’s rare to find images that show people with their eyes closed. The algorithm needs source images of open and closed eyes to make someone in a fake video blink like a real person.

“The deepfake algorithm has a lot of trouble synthesizing a closed eye because that is a concept the model does not have. So we can use that flaw to determine whether a video is real or fake,” Lyu says. The method Lyu and his team developed to detect the blinking rate in a video has had a 95 percent detection rate, according to Lyu.
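To make the flaw concrete, here is a minimal sketch of a blink-rate check in Python. It uses the eye aspect ratio (EAR) heuristic as a stand-in for a learned eye-state classifier and assumes per-frame eye landmarks arrive from an upstream detector such as dlib. The EAR_CLOSED and blink-rate constants are illustrative assumptions; this sketches the principle, not Lyu's published method.

```python
import numpy as np

EAR_CLOSED = 0.2           # EAR below this counts as a closed eye (assumed)
HUMAN_BLINKS_PER_MIN = 17  # rough resting-human average blink rate

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, in the usual dlib order."""
    eye = np.asarray(eye, dtype=float)
    vert1 = np.linalg.norm(eye[1] - eye[5])   # upper-lower lid distance
    vert2 = np.linalg.norm(eye[2] - eye[4])
    horiz = np.linalg.norm(eye[0] - eye[3])   # eye-corner distance
    return (vert1 + vert2) / (2.0 * horiz)

def count_blinks(ear_series):
    """Count open-to-closed transitions in a per-frame EAR series."""
    closed = np.asarray(ear_series) < EAR_CLOSED
    return int(np.sum(closed[1:] & ~closed[:-1]))

def looks_synthetic(ear_series, fps):
    """Flag footage whose blink rate falls far below the human baseline."""
    if len(ear_series) == 0:
        return False
    minutes = len(ear_series) / (fps * 60.0)
    blink_rate = count_blinks(ear_series) / minutes
    return blink_rate < 0.3 * HUMAN_BLINKS_PER_MIN
```

A real system would smooth the EAR signal and calibrate its thresholds per subject, but even this crude version captures the core signal: too few blinks per minute of footage.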

Naturally, deepfake videos raise national security concerns, which is why Los Alamos National Laboratory is developing its own detection tools. One technique tests for "compressibility," or how much information an image contains. Fake images tend to be simpler than real ones because they reuse visual elements. A human watching a video won't notice this, but an algorithm can recognize that kind of visual repetition and flag the video as fake.
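The idea can be roughed out with an off-the-shelf compressor: frames that compress unusually well carry less information than comparable natural footage. In this sketch, the zlib compression ratio stands in for whatever information measure the lab actually uses, and the 0.5 threshold is an illustrative placeholder, not a value from the Los Alamos work.

```python
import zlib
import numpy as np

def compression_ratio(frame):
    """frame: H x W x 3 uint8 array of raw pixel data."""
    raw = np.ascontiguousarray(frame).tobytes()
    return len(zlib.compress(raw, level=9)) / len(raw)

def flag_if_too_compressible(frames, threshold=0.5):
    """Flag a clip whose frames compress suspiciously well (assumed threshold)."""
    ratios = [compression_ratio(f) for f in frames]
    # Unusually low ratios suggest repeated structure, i.e. less
    # information than a comparable natural image would carry.
    return float(np.mean(ratios)) < threshold
```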

Of course, it's no surprise that for every detection tool released, deepfake creators find a way around it. "Once we have a detection algorithm, people very quickly take note and develop counteracting methods," Lyu says. After Lyu published a paper on his blinking research, new deepfakes began to show more natural blinking, making them harder to detect. So Lyu and other researchers must constantly refine their detection techniques.

One of Lyu's latest approaches is to analyze head orientation, which is often inconsistent in deepfake videos. Other researchers have developed detection algorithms that examine lighting and shadows, which are often unnatural in deepfakes as well.
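As a sketch of the orientation idea, one could estimate head pose in each frame from a handful of 2D facial landmarks against a generic 3D face model, then flag videos whose pose snaps erratically between frames. Everything here is an assumption for illustration: the upstream landmark detector, the six-point model coordinates, and the 15-degree jump threshold. The published detectors are more sophisticated than this.

```python
import numpy as np
import cv2

# Generic 3D face model (nose tip, chin, eye corners, mouth corners),
# in arbitrary model units; an assumed stand-in for a calibrated model.
MODEL_3D = np.array([
    [0.0, 0.0, 0.0],           # nose tip
    [0.0, -330.0, -65.0],      # chin
    [-225.0, 170.0, -135.0],   # left eye, outer corner
    [225.0, 170.0, -135.0],    # right eye, outer corner
    [-150.0, -150.0, -125.0],  # left mouth corner
    [150.0, -150.0, -125.0],   # right mouth corner
], dtype=np.float64)

def head_pose_degrees(landmarks_2d, frame_w, frame_h):
    """Axis-angle rotation vector (degrees) from six 2D landmarks
    ordered as in MODEL_3D."""
    # Crude pinhole camera: focal length ~ frame width, centered principal point.
    camera = np.array([[frame_w, 0, frame_w / 2],
                       [0, frame_w, frame_h / 2],
                       [0, 0, 1]], dtype=np.float64)
    ok, rvec, _ = cv2.solvePnP(
        MODEL_3D, np.asarray(landmarks_2d, dtype=np.float64), camera, None)
    return np.degrees(rvec.ravel()) if ok else None

def pose_is_erratic(per_frame_landmarks, frame_w, frame_h, max_jump=15.0):
    """Flag a clip whose estimated head pose jumps sharply between frames."""
    poses = [head_pose_degrees(lm, frame_w, frame_h)
             for lm in per_frame_landmarks]
    poses = [p for p in poses if p is not None]
    # Crude component-wise comparison of successive rotation vectors:
    # real heads move smoothly; a bad face swap can snap between orientations.
    jumps = [float(np.max(np.abs(a - b))) for a, b in zip(poses, poses[1:])]
    return bool(jumps) and max(jumps) > max_jump
```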

“It’s an ongoing battle,” Lyu says. “We probably only have a temporary advantage at this point as both sides will keep improving.”

But even the most advanced detection tools won’t stop all deepfakes from making their way into the mainstream. So perhaps the key to combating them is public awareness and vigilance.

“It is crucial to make the public aware of the existence of these digital fakes,” Lyu says. “Once people realize there is the potential that any digital visual media may be completely synthesized from an algorithm, it becomes harder and harder to fool the general public. Public awareness is probably the best protection in the long run.”
