How To Break Into a Bank Account With AI: Your Voice Is My Password


Unpack this week’s edition of Ctrl + Alt + Comply – your dose of news from the tech and compliance landscape. Happy reading!


“Sorry, Benedetto, but I need to identify you,” was the sentence that single-handedly prevented Ferrari from falling victim to a sophisticated deepfake scam. We’ve all heard of ‘my voice is my password’ as a form of biometric authentication. However, with the rise of sophisticated AI and deepfake fraud, voice authentication is increasingly becoming a point of high risk and vulnerability – as showcased by the recent Ferrari incident, in which AI software managed to (almost) perfectly impersonate the southern Italian accent of Ferrari’s CEO.

As such, for our thirteenth edition of Ctrl + Alt + Comply, we’ll be diving into the risks of deepfake technology and why it is particularly threatening to biometric security methods such as voice authentication.

Your Voice Is Vulnerable (And So Is Your Bank)

Voice authentication systems typically rely on pitch, intonation, and pronunciation patterns to verify a user’s ‘unique’ voiceprint. This security method has gained particular traction in the banking sector, where it’s used to conduct business efficiently over the phone.
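Under the hood, many such systems reduce a recording to a numeric ‘voiceprint’ vector and compare it against the enrolled one. The sketch below is a toy illustration of that comparison step only (the feature vectors and the 0.85 threshold are hypothetical, not any vendor’s real values), assuming cosine similarity as the matching metric:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify_speaker(enrolled: list[float], sample: list[float],
                   threshold: float = 0.85) -> bool:
    """Accept the caller if their voiceprint is close enough to the enrolled one."""
    return cosine_similarity(enrolled, sample) >= threshold

# Hypothetical 4-dimensional voiceprints; real systems use hundreds of dimensions.
enrolled_print = [0.62, 0.11, 0.87, 0.40]
caller_print   = [0.60, 0.13, 0.85, 0.41]
print(verify_speaker(enrolled_print, caller_print))  # nearly parallel vectors → True
```

The core weakness the article describes lives exactly here: a good enough clone of the voice produces a vector that clears the same threshold, and the system has no way to tell live speech from synthesis.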

But with the rise of deepfake technology, voice authentication is becoming increasingly risky. In fact, voice biometrics have gained a reputation as one of the easiest biometrics to clone, largely because voice data is so accessible: a short sample of someone’s voice can be enough to bypass many voice authentication solutions. According to James E. Lee, the only way around this is to use one-time biometrics:

“To combat the combination of deepfakes and digital injection attacks, financial service institutions need a science-based, multifaceted approach that leverages the creation of a one-time biometric.”
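One common way to read the ‘one-time’ idea is a challenge–response scheme: the system generates a fresh random phrase, and a pre-recorded or replayed clip fails because it cannot contain words chosen seconds ago. The sketch below is a hypothetical illustration of that concept, not Lee’s actual method; the phrase pool and function names are invented for the example:

```python
import secrets

# Hypothetical phrase pool; a real system would draw from far more entropy.
CHALLENGE_PHRASES = [
    "amber falcon river nine",
    "quiet harbor lantern three",
    "copper meadow signal seven",
]

def issue_challenge() -> str:
    """Pick a random phrase the caller must speak right now."""
    return secrets.choice(CHALLENGE_PHRASES)

def verify_response(challenge: str, spoken_text: str, voiceprint_match: bool) -> bool:
    """Accept only if the live audio contains the fresh phrase AND matches the voiceprint.

    A replayed recording fails the first check; a live impostor with the
    wrong voice fails the second.
    """
    return spoken_text.strip().lower() == challenge and voiceprint_match
```

Note the two independent checks: cloning the voice alone is no longer enough, because the attacker must also synthesize the never-before-seen phrase in real time.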

The Thin Line Between You and a Deepfake

Earlier this year, a finance worker in Hong Kong was scammed into transferring an alarmingly large sum of money – over $25 million – to fraudsters employing deepfake technology.

And just last year, journalist Joseph Cox ‘hacked’ his own bank account using free AI voice-cloning software from ElevenLabs. By uploading just five minutes of speech to the platform, he was able to use the AI-generated replica of his voice to break into his own bank account.

The rise of deepfakes has made it increasingly difficult to discern between real and fabricated identities, with serious legal and operational implications for businesses globally. In 2023 alone, deepfake fraud material reportedly skyrocketed by 3,000%, showcasing not just the sophistication but also the accessibility of deepfake tech.

The most recent ‘almost victim’ of this malicious technology is Ferrari, whose case is broken down in the next section.

Ferrari's Near Miss

Just a few weeks back, an alarming example of the power of AI software hit the headlines when a high-ranking executive at Ferrari was almost scammed by a highly sophisticated deepfake impersonating the voice of Ferrari’s CEO, Benedetto Vigna. With the call coming from an unfamiliar number and bringing up a big acquisition, the executive was suspicious from the get-go, but the eerily accurate southern Italian accent threw them for a loop, according to sources.

Ultimately, one question saved the company from the deepfake scam: the executive asked the caller about a book recommendation that Vigna had recently made. The call ended abruptly when the impersonator couldn’t answer (the correct response was apparently Decalogue of Complexity: Acting, Learning and Adapting in the Incessant Becoming of the World).

What’s alarming is that the Ferrari case is part of a much wider trend in which criminals are exploiting deepfake technology to scam companies and individuals.

Final Thoughts

Deepfake scams have robbed organizations around the world of millions, and the problem is only bound to get worse. Especially concerning is the use of voice authentication as a biometric security method, as its integrity is particularly susceptible to deepfake technology.

To illustrate just how unreliable voice biometrics can be, I want to conclude with a relevant anecdote. Just a few weeks ago, I was joking with my roommate that I could get Siri on her iPhone to respond to me by impersonating her voice… after about eight(teen) attempts, it worked. Whilst not quite the same as breaking into a bank account or scamming a company out of millions, it’s a clear indication that you don’t always need sophisticated technology to compromise security – be it on a large scale, like with Ferrari, or a small scale, like with my roommate (sorry again, Ana).

Compliantly Yours,

spektr


Like what you read? Click the subscribe button in the top right corner! And for those of you interested in achieving full-cycle compliance automation, shoot us a message or learn more about the spektr platform.
