BBC puts new deepfake detector to the test - BBC News

  • By James Clayton
  • North American technology journalist

In March last year, a video surfaced showing President Volodymyr Zelensky telling the Ukrainian people to lay down their arms and surrender to Russia.

It was a fairly obvious deepfake, a type of fake video that uses AI to swap faces or create a digital version of someone.

But as developments in AI make deepfakes easier to produce, quickly detecting them has become all the more important.

Intel believes it has a solution and it’s all about blood on the face.

The company calls the system "FakeCatcher".

In Intel’s luxurious and mostly empty offices in Silicon Valley, we meet Ilke Demir, a researcher at Intel Labs, who explains how it works.

“We ask ourselves what is real about authentic videos? What is real about us? What is the watermark of being human?” she says.

At the heart of the system is a technique called photoplethysmography (PPG), which detects changes in blood flow.

Faces created by deepfakes don't give off these signals, she says.
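As a rough illustration of the idea (not Intel's implementation), a basic PPG check could average the green channel of a face region in each frame and look for a periodic pulse at a plausible heart rate. All names and numbers below are assumptions for the sketch:

```python
# Illustrative PPG sketch: a real face's skin colour oscillates faintly
# with the heartbeat; a deepfake's synthetic face often does not.
import numpy as np

def ppg_signal(frames):
    """Mean green-channel intensity of the face region, per frame."""
    return np.array([f[..., 1].mean() for f in frames])

def dominant_freq_hz(signal, fps):
    """Frequency (Hz) of the strongest periodic component, excluding DC."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

# Synthetic "face" frames whose green channel pulses at 1.2 Hz (72 bpm).
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
frames = [np.full((8, 8, 3), 128.0)
          + np.array([0.0, 1.0, 0.0]) * np.sin(2 * np.pi * 1.2 * ti)
          for ti in t]
print(dominant_freq_hz(ppg_signal(frames), fps))  # → 1.2
```

A real detector would first locate the face, compensate for lighting and motion, and compare signals across several facial regions; this sketch only shows why a heartbeat leaves a recoverable trace in video.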

The system also analyzes eye movement to verify authenticity.

"So normally, when humans look at a point, when I look at you, it's like I'm shooting beams from my eyes at you. But for deepfakes, it's like googly eyes – they're divergent," she says.
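The "googly eyes" check can be pictured as measuring the angle between the two eyes' gaze directions: eyes fixating the same point stay nearly parallel, while mismatched synthetic eyes diverge. A minimal sketch, with made-up gaze vectors standing in for the output of a real gaze estimator:

```python
# Illustrative gaze-divergence check (not Intel's implementation).
import numpy as np

def gaze_divergence_deg(left_vec, right_vec):
    """Angle in degrees between the two eyes' gaze direction vectors."""
    l = left_vec / np.linalg.norm(left_vec)
    r = right_vec / np.linalg.norm(right_vec)
    return np.degrees(np.arccos(np.clip(np.dot(l, r), -1.0, 1.0)))

# Real face: both eyes converge on the same point, so the angle is small.
real = gaze_divergence_deg(np.array([0.05, 0.0, 1.0]),
                           np.array([-0.05, 0.0, 1.0]))
# "Googly" deepfake eyes: the directions disagree noticeably.
fake = gaze_divergence_deg(np.array([0.3, 0.1, 1.0]),
                           np.array([-0.25, -0.15, 1.0]))
print(real < fake)  # a real face scores lower divergence
```

In practice the gaze vectors would come from an eye-tracking model run on each frame; the sketch assumes they are already available.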

By looking at both of these traits, Intel believes it can tell the difference between a real video and a fake one in seconds.

The company claims that FakeCatcher is 96% accurate. So we asked to try the system. Intel agreed.

We used a dozen clips of former US President Donald Trump and President Joe Biden.

Some were real, others were deepfakes created by the Massachusetts Institute of Technology (MIT).

Video caption: Watch: BBC's James Clayton puts a deepfake video detector to the test

In terms of finding deepfakes, the system seemed to be pretty good.

Mostly we chose lip-sync fakes: real videos where the mouth and voice had been altered.

And it got all the answers right, except one.

However, when we moved on to real, authentic videos, it started to have problems.

Time and time again the system claimed a video was fake, when in fact it was real.

The more pixelated a video is, the harder it is to detect blood flow.

The system also does not analyze the audio. So some videos that sounded quite obviously real when hearing the voice were classified as fake.

The concern is that if the program says a video is fake, when it’s genuine, it could cause real problems.

When we put this point to Ms Demir, she says that "verifying that something is fake, as opposed to 'be careful, this may be fake', carries a different weight".

She's saying the system errs on the side of caution: better to catch all the fakes – and flag some real videos along the way – than to miss the fakes.

Deepfakes can be incredibly subtle – a two-second clip in a political campaign ad, for example. They can also be low quality. A fake might alter only the voice.

For that reason, FakeCatcher's ability to work "in the wild" – in real-world settings – has been questioned.

Image caption: A collection of deepfakes

Matt Groh is an assistant professor at Northwestern University in Illinois and a deepfake expert.

“I don’t doubt the stats they listed in their initial assessment,” he says. “But what I doubt is whether the statistics are relevant to real-world contexts.”

This is where it becomes difficult to evaluate the technology of FakeCatcher.

Programs such as facial recognition systems often provide extremely generous statistics for their accuracy.

However, if actually tested in the real world, they can be less accurate.

Essentially, the accuracy depends entirely on the difficulty of the test.
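One way to see why lab accuracy can mislead: when fakes are rare, even a small false-positive rate swamps the true detections. The numbers below are illustrative assumptions, not Intel's figures, apart from the 96% headline accuracy:

```python
# Base-rate illustration: a detector that is right 96% of the time,
# applied to a feed where only 1 video in 1,000 is actually fake.
fakes, reals = 1, 999
flagged_fakes = 0.96 * fakes          # fakes correctly flagged
flagged_reals = (1 - 0.96) * reals    # real videos wrongly flagged
precision = flagged_fakes / (flagged_fakes + flagged_reals)
print(f"{precision:.1%} of flagged videos are actually fake")
```

On a balanced lab test set, 96% accuracy looks strong; on this skewed feed, only a few percent of the videos the detector flags are genuinely fake, which is the failure mode the BBC's real-video tests ran into.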

Intel says FakeCatcher has undergone rigorous testing. This includes an "in the wild" test, for which the company pieced together 140 fake videos and their real counterparts.

In this test, the system had a 91% success rate, says Intel.

However, Matt Groh and other researchers want to see the system independently analyzed. They don't think it is good enough for Intel to set its own test.

“I’d like to evaluate these systems,” says Groh.

"I think that's really important when we're designing audits and trying to figure out how accurate something is in a real-world setting," he says.

It’s amazing how difficult it can be to distinguish a fake video from a real one, and this technology definitely has potential.

But from our limited testing, it still has a long way to go.

Watch the full report on this week’s episode of Click
