Phantoms, Ghosts, Undeads, and Frankensteins – A New Breed of AI Deepfake Threats are Coming to the Banking, Financial Services, and Insurance Sector
Image generated by MS Copilot DALL-E 3

  • As cybercriminals increasingly use sophisticated ChatGPT-like generative AI trained on dark-web data to craft malware, phishing emails, and realistic deepfakes, companies urgently need even more advanced generative AI to safeguard their data and information security. This is particularly concerning for companies in the BFSI sector, where generative-AI-enabled identity theft is on the rise.
  • According to SlashNext, phishing emails have surged by 1,265% since ChatGPT was launched in November 2022. Darktrace researchers also reported a 135% increase in novel social engineering attacks using ChatGPT from January to February 2023. Moreover, a Deep Instinct survey of over 650 cybersecurity experts revealed that three-quarters had witnessed a rise in generative-AI-based attacks in the past year, citing growing privacy concerns, harder-to-detect phishing, and an increase in the volume and velocity of attacks.

Imagine your CEO or CFO calls and tells you to send money right away, followed by an email confirming the necessary approvals. You would probably not think twice before transferring the funds, especially if you recognize their voice or face and have email proof. But now, with generative AI that can easily fabricate all of this, we must be more careful and not trust everything we hear, see, or read, even from people we know. A new generation of AI tools makes it very easy for anyone to create realistic audio, image, video, and text impersonations.

According to the Hong Kong Computer Emergency Response Team (HKCERT), security incidents rose roughly 12% in the first nine months of 2023 compared with the same period in 2022, and the top two information security risks are phishing and AI attacks. Phishing attacks that use generative AI to create fake emails impersonating real people or organizations to steal personal or financial information are becoming much harder to detect. Generative AI can also create malware or deepfake messages, voices, images, or videos to scam people, spread rumours, or even blackmail them. As more companies deploy AI models, they also become vulnerable to malicious attacks: if an AI model is compromised, all services using that model are affected as well.

In the past, deepfake scams were limited to high-profile targets who could offer large payoffs, as scammers required advanced technical skills to create convincing deepfakes. For example, in 2019, the director of a UK energy firm transferred €220,000 to a Hungarian supplier's bank account after receiving a phone call from someone who claimed to be the CEO of the parent company in Germany. The caller's voice was a deepfake that mimicked the CEO's subtle German accent and vocal patterns. In 2020, a fraudster used AI deepfake technology to mimic the voice of a company director and tricked a bank manager in Hong Kong into transferring US$35 million for a fake acquisition deal. The bank manager did not suspect anything because he had dealt with the real director before and thought he recognized the voice.

However, with recent advancements in generative AI technology and the wide availability of deepfake software, anyone can become a victim of deepfake fraud. Generative AI has made deepfakes far more realistic and far harder to detect. Creating deepfakes is now easy even for someone with little computer knowledge: a few seconds of someone's voice and a photo are enough to make a convincing deepfake. Of course, the more samples you have, the better the quality.

Six people were arrested in Hong Kong in August 2023 for using AI deepfakes to impersonate others and apply for loans and bank accounts online. They used real-time AI-generated video selfies to match the identities of their victims; the banks required these selfies as part of their verification process. This was the first deepfake-related arrest in Hong Kong, but I expect more AI deepfake fraud to target the banking, financial services, and insurance (BFSI) industries soon.

A recent survey of over a thousand companies worldwide in the financial services, technology, telecoms, and aviation sectors revealed that voice and video deepfake fraud had affected one-third of them last year. According to another worldwide study, the number of deepfakes used in scams in the first three months of 2023 alone outstripped the total for all of 2022.

Deepfake technology poses a serious risk of identity theft and fraud for companies in the BFSI sector. Fraudsters can use AI deepfake to impersonate customers, employees, or executives and perform illicit transactions or access confidential information.

They can commit several types of identity fraud:

  • "Phantom" (new-account) fraud: fake or stolen identities are used to open new accounts to accrue debt or launder money.
  • "Ghost" fraud: the identities of deceased persons are exploited to access their bank accounts and online services, apply for loans, or claim benefits.
  • "Undead" claims: family members or fraudsters convince institutions that a deceased person is still alive in order to collect benefits, such as pension payouts or social security, or to make insurance or other claims on their behalf.
  • "Frankenstein" (synthetic) identities: identities of people who do not exist are assembled by mixing fake, real, and stolen data, then used to apply for credit or debit cards and other transactions that build a credit score for the fictitious customer.

Furthermore, fraudsters may create deepfakes to submit as falsified evidence for insurance claims. According to a recent study, identity theft affected over 40 million US adults in 2022, with total losses of US$43 billion.

Companies in the BFSI sector need to be aware of these new threats and frauds posed by generative AI and be vigilant in implementing robust security measures and monitoring systems to prevent and detect malicious activities. They need to use multiple methods and channels to verify information sources and authenticity. They will need advanced analytics and machine learning to monitor transactions, communications, and networks for unusual patterns or behaviors, as well as new tools to detect the new generation of deepfakes. Most importantly, companies need to raise awareness and provide training on the new threats involving generative AI.
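To make the transaction-monitoring idea concrete, here is a minimal sketch of statistical anomaly flagging. Production systems use trained models over many features (velocity, device, geolocation, counterparty); this simple z-score check on transaction amounts, with a hypothetical `is_anomalous` helper, is illustrative only.

```python
from statistics import mean, pstdev

def is_anomalous(history: list[float], amount: float, threshold: float = 3.0) -> bool:
    """Flag `amount` if it lies more than `threshold` standard deviations
    from the account's historical mean transaction amount."""
    if len(history) < 2:
        return False  # not enough baseline activity to judge
    sigma = pstdev(history)
    if sigma == 0:
        # perfectly uniform history: any deviation is unusual
        return amount != history[0]
    return abs(amount - mean(history)) / sigma > threshold

# A sudden large transfer stands out against routine account activity:
past = [120.0, 95.0, 110.0, 105.0, 98.0, 102.0, 99.0]
print(is_anomalous(past, 112.50))   # routine amount -> False
print(is_anomalous(past, 50000.0))  # sudden large transfer -> True
```

In practice a flag like this would not block a payment outright; it would trigger step-up verification through a second channel, which is exactly the multi-channel confirmation that defeats a single convincing deepfake call.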

Existing industry regulations on know-your-customer (KYC) and identity management can help prevent many of the frauds that use deepfake technology. However, these regulations, guidelines, and recommendations need to be updated more frequently to keep up with fast-changing and ever more advanced AI deepfake techniques.

Cybercriminals have developed their own ChatGPT-like generative AI tools that are trained on dark web data to assist them in creating malicious content. They can use these tools to generate phishing emails, malware, scam pages, and viruses that can deceive and harm unsuspecting victims. These tools demonstrate how generative AI can be exploited for nefarious purposes and pose a serious threat to individuals and businesses. Therefore, BFSI institutions may need to consider deploying special versions of generative AI tools that are trained specifically to protect information security to counter these malicious attacks.

Generative AI is one of the most impactful technologies in recent years. However, it also poses a serious threat to the BFSI sector, as it enables bad actors to create realistic and convincing deepfakes. Business leaders need to act quickly and cautiously to protect themselves and their customers from these new risks. They need to adopt new AI tools and procedures to detect and prevent this new generation of generative AI frauds from proliferating.
