Stealing Identities 2.0

Has everybody already seen this deepfake video of Morgan Freeman? It is mind-blowing. And it's just the beginning. This time around I am not praising the technology. I am worried, thinking about a few possible scenarios.

Deepfake technology is already being used and is likely to become more widely available in the future, leading to a rise in the number of deepfakes being created and circulated.

Image source: https://www.villamedia.nl/artikel/deepfakes-wat-als-artificial-intelligence-je-voor-de-gek-houdt

For starters, let's assume somebody wants to discredit a public figure. Whoever that may be. Provided there is enough video and photo material of that person available, it is becoming easier and cheaper by the day to use generative AI to create a completely fake recording of them. With their own voice. Imagine a fake video of Elon Musk declaring that he doesn't believe in Bitcoin anymore and is about to sell all the Bitcoin Tesla still holds. How would the market react? I bet at least a few variations of that scenario are already playing out in your head.

One more scenario I want to touch on is elections. We all know how important a role the media play in every campaign, and how much public opinion sways based on news, voice recordings, photos, posts, videos, tweets, and whatever else surfaces about a candidate, their family, friends, or associates. With AI technology, it is now possible to create convincing fakes of a candidate. This could lead to a situation where people vote for somebody who never actually made the statements attributed to them. Or where a candidate's reputation is damaged by false information. Or even a change of government.

Somewhere in the middle of the GPT frenzy, we saw tools popping up that claim to be able to distinguish between human-written and AI-written content. How? Using AI. Putting aside whether these tools can really achieve that, are we going to need another set of applications to help “unfake” videos and voice recordings? If so, can we actually expect the tens or hundreds of millions of people consuming content every day to pause and check every video they watch? Assuming, of course, that the validation tools are reliable and cannot be tampered with into faking their own fake detection.

Image generated from my own prompt in Stable Diffusion

It will be - arguably it already is - unbelievably easy to spread false information and manipulate public opinion using new technologies. Fakes can damage the reputation of individuals, impact financial markets, and spread disinformation. The entire Watergate scandal will seem like child's play compared to what is, or soon will be, possible.

And by the way - Large Language Models like GPT-3 can be trained to imitate someone's writing style. Hence written content on social media can also be faked far more convincingly.

As Aubrey Strobel put it in one of her tweets: “As a society, I don’t think we’re freaking out enough about deepfake technology.”

Pawel Plocki

AI freak | process freak | Managing Director of Global Business Services Europe

1 year ago

And a view from OpenAI on potential misuses of LLMs: https://openai.com/blog/forecasting-misuse/

Mariusz Pietrzak

Experienced SSC/GBS/BPO Executive Leader, fan of digitalization & process transformation

1 year ago

Indeed Paweł, the technology is so powerful and can bring so many unprecedented opportunities as well as threats. We may create a brand-new song by Frank Sinatra or Michael Jackson, or release a new movie starring James Dean - in fact, celebrities could stay active forever. On the other hand, we may see presidents or politicians in compromising situations, so fake news can manipulate people's minds prior to elections.
