Three Companies We All Do Business With Can End Deepfake Fraud if They Wanted To
Is it real or AI-generated? Your GPU manufacturer knows.


All we have to do is demand it!


More and more, I’m hearing from CISOs on the issue of deepfake-enabled fraud. Moving beyond simply using LLMs to improve phishing campaigns, hackers are now employing AI to mimic the voices and appearance of people’s relatives or co-workers. One such case Dave Aron and I covered in our forthcoming Gartner book on malinformation was that of a UK CEO who was fleeced out of a quarter million pounds by responding to a request seemingly from his boss, whose voice and speech cadence he recognized.


While we as an industry are struggling to imagine how we’re going to adapt to this new and brazen mechanism for widespread fraud, I want to point out that there is an inherent vulnerability in the malinformation supply chain when it comes to deepfake fraud.

The GPU Is the Deepfake’s Achilles’ Heel

Simply put, it is functionally impossible to produce a deepfake video without using a GPU. Indeed, even deepfake audio is accelerated by the kind of matrix processing found in modern GPUs. The need for GPUs to make deepfakes (particularly images and video), and the fact that GPUs are made by just three companies, NVIDIA, AMD, and Intel, is a vulnerability the industry can exploit to crush deepfake fraud before it becomes widespread.

And all we have to do as customers is demand it.

Adobe Kicks Things Off

Apropos of this, today Adobe announced that it has developed a way of signing documents with what it calls “a nutrition label,” which describes the degree of manipulation done to an image. This is a great starting point. What about hackers? Won’t they be able to strip these labels off? Sure, but that’s not really an issue.

Cracking AI watermarks is not the problem researchers make it out to be.

Sure, academic researchers have been able to crack every kind of AI watermarking thrown at them so far, but that’s thinking about the problem backwards. Of course hackers will devise clever ways to strip watermarks, digital signatures, DRM, and the like; that’s what they do, God bless ’em. But the absence of a watermark stating the degree of video manipulation would itself be an immediate indicator of compromise. Just flip the problem around: we presume everything is altered unless it can prove it wasn’t, which is already how editorial photography is treated. Spoofing a valid digital signature is a zillion times harder than stripping one off entirely. Again, we can use this to our advantage.
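To make the “flip it around” idea concrete, here is a minimal sketch in Python of a default-deny provenance check: anything without a valid, signed manifest is simply treated as altered. The manifest fields, the Ed25519 keys, and the function names are all assumptions for illustration; real provenance schemes such as Adobe’s Content Credentials define their own formats.

```python
# Minimal sketch of a "default-deny" provenance check.
# Anything without a valid, signed manifest is treated as altered.
# The manifest fields and Ed25519 keys are illustrative assumptions,
# not any real provenance standard.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

TRUSTED = "verified: edits disclosed in manifest"
UNTRUSTED = "unverified: assume the content is altered"

def check_provenance(media_bytes: bytes,
                     manifest_json: bytes | None,
                     signature: bytes | None,
                     issuer_key: Ed25519PublicKey) -> str:
    # No manifest or no signature? Default-deny.
    if manifest_json is None or signature is None:
        return UNTRUSTED
    try:
        # The signature must cover both the media and its manifest,
        # so neither can be swapped out independently.
        issuer_key.verify(signature, media_bytes + manifest_json)
    except InvalidSignature:
        return UNTRUSTED
    manifest = json.loads(manifest_json)
    # A valid manifest only tells us *what* was done to the media;
    # the reader still decides whether that level of editing is acceptable.
    print("disclosed manipulation level:", manifest.get("manipulation_level"))
    return TRUSTED
```

The point isn’t the crypto, it’s the default: missing or broken provenance is itself the signal of compromise.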

With video we’re dealing with so much more information in which we can embed data, and many codecs already have security built in for DRM. It would be relatively straightforward for GPU makers to embed in the video stream the degree of manipulation or graphics processing applied to an image, without even having to deal with all the various video editing and processing software further down the chain. This is a driver update away from being real. Demand it!
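As a thought experiment, a driver-side flow might look like the sketch below: hash each rendered frame along with a record of the processing applied, sign it with a key held in the GPU or driver, and attach the result as stream metadata. Everything here (the field names, the signing key, the idea that a driver exposes such a hook) is an assumption for illustration, not a description of any shipping NVIDIA, AMD, or Intel feature.

```python
# Hypothetical sketch of a driver-side provenance stamp for a video frame.
# No GPU vendor ships this today; names and fields are illustrative only.
import hashlib, json, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real design this key would live in the GPU's or driver's secure
# element, not in application code.
device_key = Ed25519PrivateKey.generate()

def stamp_frame(frame_bytes: bytes, processing_applied: list[str]) -> dict:
    record = {
        "frame_sha256": hashlib.sha256(frame_bytes).hexdigest(),
        "processing": processing_applied,   # e.g. ["decode", "generative_fill"]
        "timestamp": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    return {
        "record": record,
        "signature": device_key.sign(payload).hex(),
    }

# Example: a frame that passed through an AI upscaler gets that fact
# recorded in the metadata that rides along with the encoded stream.
stamp = stamp_frame(b"\x00" * 1024, ["decode", "ai_upscale"])
print(stamp["record"]["processing"], stamp["signature"][:16], "...")
```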


Shout out to Nader Henein, who keeps tabs on this sort of stuff for our team, and to my co-author, Dave Aron, whose thinking is everywhere in this article.


The Gartner Blog Network provides an opportunity for Gartner analysts to test ideas and move research forward. Because the content posted by Gartner analysts on this site does not undergo our standard editorial review, all comments or opinions expressed hereunder are those of the individual contributors and do not represent the views of Gartner, Inc. or its management.

