The Data is in on Deepfakes, oh and AI can now Access Your Bank Account…


Yesterday I was on a call with the founder of an AI voice agent startup, and he said something that absolutely blew my mind. According to this founder, users of his platform could clone their voices and have their AI personal assistants make calls and complete tasks on their behalf while speaking in their voice. Users could simply explain their objective to his voice agents, provide answers to follow-up questions, and then optionally listen in real time as the AI agent makes calls and completes the desired tasks.

When I asked what types of tasks his users liked to use his platform for, he explained how his customers used their voice clones and AI agents to negotiate with vendors on their behalf, make appointments, make outbound sales calls, and talk with their banks.

Excuse me?

TALK WITH THEIR BANKS?

I was shocked. This couldn’t be right. I immediately asked follow-up questions to make sure I had heard him correctly. Sure enough, users would provide THEIR BANK LOGIN INFO to the voice agent and have it contact their bank on their behalf, speak in their voice, access their account, and talk with representatives. Users would be alerted to any 2FA codes that were sent while the agent worked on their behalf so that the voice agent could complete its objective.

I was floored. I don’t know what shocked me more, that users were willing to provide their bank login information to an AI voice agent and give it the autonomy to act on their behalf inside of their financial accounts, or the fact that banks allowed an AI voice clone to access user accounts and take actions.

The future is now, and it’s a nightmare for security professionals.

Outside of the obvious risks associated with third-party AI agents accessing and acting within financial accounts, the data has started to come in over the last month or so on how prevalent deepfake attacks are becoming. Unfortunately, it’s exactly what I expected. Let’s take a look.

According to a recent McAfee report, 1 in 4 adults have already been impacted by deepfake attacks. For those targeted with AI voice scams, 77% lost money. This is unsurprising considering the report also found that 53% of adults share their voice online or on social media at least once a week, with 49% doing so up to 10 times per week. (Not to say I told you so, but over a month ago I wrote about how this was likely to happen in this blog here.)

The recent 2024 ISMS State of Information Security Report found that deepfakes now rank as the second most common cybersecurity incident type for US businesses over the last 12 months, with over 30% of US businesses experiencing a deepfake incident. Oof.

A Sumsub post on LinkedIn last week also highlighted the rapid growth of deepfake attacks. According to their research, deepfake attacks in the US grew 303% between Q1 2023 and Q1 2024, with a 533% increase in attacks targeting fintech in particular. However, things were even worse in other countries. In the same timeframe, deepfake attacks grew 2800% in China, 1625% in Korea, 1550% in Indonesia, 1533% in Turkey, 1000% in Hong Kong, 822% in Brazil, and 500% in both South Africa and Mexico.

As much as we might be reluctant to admit it, deepfakes are no longer the stuff of science fiction, or a niche threat to be addressed in the future. They are here, and they are effective. 77% effective, per McAfee above, with exponential growth in usage. Even if businesses have other security controls in place, that number should be concerning enough to prompt immediate action.

What concerns me most is that everyday people have no idea how good deepfakes have become, and as a result are extremely vulnerable.

They have the sense to know it’s a problem, but it feels like something unlikely to ever happen to them. It’s only when they see voice clones firsthand, meet someone who has fallen for a deepfake scam, or see the data that they truly grasp how serious things are. Instead of a Nigerian prince needing money, scammers are now targeting people with phone calls from desperate relatives in dire situations needing urgent money to save them. You can hardly blame people for falling for these scams when they are unaware of how good deepfakes are now and hear a familiar voice in distress.

For businesses, the data above should be equally concerning.

30% of businesses being targeted with a deepfake attack is no small percentage, and paired with the exponential growth rates in the Sumsub research above, it suggests that pretty soon all businesses will experience these attacks on a regular basis. While voice and video communication channels have not historically been primary attack vectors, this is quickly changing. Trusted faces and voices are extremely disarming, and bad actors are realizing how effective their social engineering, voice phishing, and fraud can be when leveraging this psychological vulnerability.

The good news is that not all is lost.

Many of the businesses creating voice clones and AI voice agents are committed to responsible usage, and are working diligently to ensure their platforms aren’t abused by bad actors. Even the founder I highlighted earlier is working hard to ensure his voice agents are used responsibly and are secure from breaches and misuse. However, while the proactive work of generative AI platforms is important, both individuals and businesses also need to take steps to ensure that they are protected.

Below are some recommended steps to reduce risk, taken from a previous blog I wrote titled Deepfakes and Stolen Voices: How to Navigate a new era of Identity Theft.

Proactive steps:

  1. Continued Education: Now that you are aware of the threats, stay updated on the latest developments in generative AI and keep reading up on how you can protect yourself. Take the time to educate your loved ones as well, as they might not be aware of the problem; knowing there is a threat is the first step towards defending against it.
  2. Audit Exposure: Take time to conduct a thorough review of everywhere your voice appears digitally. Do the same for your kids, parents, grandparents, and other family members. The list in that previous blog can be used as a starting point.
  3. Limit Accessibility: Once you have taken inventory, take action to limit the quantity and digital accessibility of your voice. Remove your personal voicemail recordings, update your social media privacy settings to the strictest levels (or better yet, remove videos and audio), and take down old videos and recordings that are no longer relevant.

Reactive steps:

  1. Think Critically: For everything you see and hear online or on a call, think about what is being said and asked, and question the identity and motives of the speaker: Why is it being said? Who is saying it? Am I sure it’s really that person? Do they have an agenda? Is this the proper channel of communication for this information? Is this following the relevant processes? Don’t just accept information at face value. Adopt a mindset of always being skeptical of what you are hearing, especially when information or money is being requested.
  2. Check Speaker Identity: For both individuals and businesses, I recommend implementing a second method of verifying speaker identity on calls. For individuals and families this might be security questions or passphrases. For a business this might mean process updates and training around confirming speaker identity, checking the email address that sent the meeting invitation, and defining what information can be exchanged over which channels.
  3. Adopt Tools to Secure Calls: I’m obviously heavily biased here, but in my eyes we have crossed the Rubicon when it comes to being able to distinguish real audio from deepfake audio in the wild, and relying on people to think critically about what is being said and question the identity of speakers at all times isn’t realistic, especially in emotionally charged, high-stress scenarios. Given this, we need to adopt tools that can verify the source of the audio being generated, analyze the content of what is being said and alert us to suspicious asks, and identify deepfaked speech in real time and alert us to it. My company DeepTrust can help with this. (A rough sketch of what these checks might look like follows this list.)
  4. Adopt Tools to Verify Content: Similar to securing calls, we now need accessible tools that can analyze the videos, recordings, and other media that we are consuming to identify misinformation and flag AI generated content. Long term, responsibility for content protection should likely reside with the platforms that distribute content (social media, search engines, news, phone providers, etc.) but in the meantime, there are still tools we can use to validate anything we have questions about.
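
To make items 2 and 3 a bit more concrete, here is a rough, hypothetical sketch (in Python) of what layering a passphrase check and an automated deepfake score onto a call could look like. This is not DeepTrust’s actual API; the classifier below is a stub, and the names and threshold are placeholders purely for illustration.

```python
# Hypothetical illustration only -- not DeepTrust's API. The deepfake
# classifier below is a stub; a real deployment would call a trained
# model or a vendor service.
import secrets
from dataclasses import dataclass


@dataclass
class CallAlert:
    reason: str
    score: float


def verify_passphrase(spoken: str, expected: str) -> bool:
    """Constant-time comparison of a pre-agreed family/team passphrase."""
    return secrets.compare_digest(spoken.strip().lower(), expected.strip().lower())


def score_deepfake(audio_chunk: bytes) -> float:
    """Placeholder: return the probability (0..1) that a chunk is synthetic."""
    return 0.0  # stub value; assume genuine audio for the example


def monitor_call(audio_chunks, spoken_passphrase, expected_passphrase, threshold=0.7):
    """Collect alerts when the passphrase fails or any chunk looks synthetic."""
    alerts = []
    if not verify_passphrase(spoken_passphrase, expected_passphrase):
        alerts.append(CallAlert("passphrase mismatch", 1.0))
    for chunk in audio_chunks:
        score = score_deepfake(chunk)
        if score >= threshold:
            alerts.append(CallAlert("suspected synthetic speech", score))
    return alerts


if __name__ == "__main__":
    # A caller who knows the passphrase and whose audio scores low produces no alerts.
    print(monitor_call([b"\x00" * 1024], "Blue Heron", "blue heron"))
```

The real work, of course, is in the classifier and in agreeing on the passphrase out of band ahead of time; the point is simply that both checks are cheap to layer on top of existing call workflows.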


That’s all for this week. Stay safe out there and take care of one another.


Thanks for reading!

I'm a co-founder at DeepTrust where we help security teams defend against advanced social engineering, fraud, and phishing by providing next generation voice security built for deepfake threats. Seamlessly integrating with VoIP services like Google Meet, Zoom, Microsoft Teams and others, DeepTrust authenticates voices, verifies devices, and alerts both users and security teams of suspicious activity.

If you’re interested in learning more, or have additional questions about deepfakes, I’d love to chat. Feel free to shoot me an email at [email protected].

Ready to get started? Sign up here!

Finally, if you enjoyed this blog, I invite you to follow my newsletter, Noah’s Ark on LinkedIn.
