Generative AI and the Intensification of Identity Fraud

In February 2024, OpenAI released its first text-to-video AI model, Sora, attracting huge public attention as a revolutionary advancement in AI and video production. While the ability to generate sophisticated movements and scenes opens up new opportunities across industries, it also presents unprecedented challenges to businesses, particularly in the form of intensified deepfake threats. The rapid development of AI has significantly complicated user identity verification, giving fraudsters ample opportunities to exploit vulnerabilities.

“The Hong Kong police recently revealed a major case of AI fraud where a finance employee of a multinational corporation was conned by a scammer using AI face-swapping technology to impersonate the company's CFO. Despite initial suspicion, the employee was reassured when other colleagues joined the video call, leading to the transfer of 200 million Hong Kong dollars to an unknown account.”

This case exemplifies how far the misuse of AI has advanced within the telecommunications fraud domain. In this article, we reconstruct the process of AI-related identity fraud and outline preventative measures to tackle these challenges.

The Initial Attack Process

The initial fraudulent attack can be broken down into the following steps: gaining the victim's trust through social engineering tactics, obtaining control of their mobile device, and stealing their accounts. Once the fraudsters have acquired the user's identity, they initiate impersonation attacks, such as AI face-swapping and presentation or injection attacks that bypass liveness detection, and then transfer funds, apply for loans, or consume credit limits.

Note: In the context of information security, social engineering is the tactic of manipulating, influencing, or deceiving a victim in order to gain control over a computer system or to steal personal and financial information. It uses psychological manipulation to trick users into making security mistakes or giving away sensitive information.

Preparation

  • Building phishing websites impersonating government agencies such as the public security ministry, finance department, and taxation office.
  • Designing UIs resembling Google Play, TestFlight, and others to lure users into downloading malware.

Gaining Trust

The goal of this step is to get malware installed on the victim's mobile device and thereby gain control of it.

  • Initiating contact via phone calls/text messages, directing victims to communicate through instant messaging applications.
  • Instructing victims to install the 'software' themselves, or on their family members' devices, purportedly to claim digital pensions, receive tax refunds, or apply for low-interest loans.
  • Persuading victims to download malware via platforms that closely mimic Google Play and TestFlight, or manipulating Apple devices through Mobile Device Management (MDM).

Controlling Device

These actions render typical security measures, such as device-change detection, IP checks, and two-factor authentication, ineffective.

  • Remotely controlling the host mobile device.
  • Using the infected device as a proxy for traffic (one detection signal is sketched after this list).
  • Installing an SMS-filtering plugin to redirect all messages to an external server.
  • Retrieving images from the victim's gallery and capturing facial recognition data.
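
Defensively, a risk SDK on the client can collect signals that correlate with this stage. As a minimal sketch (not TrustDecision's actual implementation), the Kotlin snippet below checks one such signal on Android: whether the device's active network is routed through a VPN, a common side effect of traffic-proxying malware.

```kotlin
import android.content.Context
import android.net.ConnectivityManager
import android.net.NetworkCapabilities

// Sketch: flags whether traffic is currently routed through a VPN, one
// signal (among many) that an infected device may be acting as a proxy.
// Requires API 23+ for activeNetwork.
fun isVpnActive(context: Context): Boolean {
    val cm = context.getSystemService(Context.CONNECTIVITY_SERVICE) as ConnectivityManager
    val network = cm.activeNetwork ?: return false
    val caps = cm.getNetworkCapabilities(network) ?: return false
    return caps.hasTransport(NetworkCapabilities.TRANSPORT_VPN)
}
```

Since many legitimate users also run VPNs, this should be treated as one input to a risk score rather than a standalone blocking rule.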

Stealing Account

In this critical step, the fraudster harvests almost the entire set of personal information from the victim's device.

  • Abusing the AccessibilityService to read the user interface (UI) and keystrokes in order to retrieve passwords (see the sketch after this list).
  • Directing users to phishing websites to obtain personal information: name, email, bank account, phone number, address, ID photos, facial recognition data, and so on.
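
Because AccessibilityService abuse is so central to credential theft, many banking apps inspect which accessibility services are enabled before allowing sensitive operations. The Kotlin sketch below illustrates one way to do this with standard Android APIs; the allowlist policy is an assumption for illustration, not a vendor recommendation.

```kotlin
import android.content.Context
import android.provider.Settings
import android.text.TextUtils

// Returns the accessibility services currently enabled on the device, as
// "package/serviceClass" entries from the system settings.
fun enabledAccessibilityServices(context: Context): List<String> {
    val setting = Settings.Secure.getString(
        context.contentResolver,
        Settings.Secure.ENABLED_ACCESSIBILITY_SERVICES
    ) ?: return emptyList()
    val splitter = TextUtils.SimpleStringSplitter(':')
    splitter.setString(setting)
    return splitter.toList()
}

// Illustrative policy: flag any enabled service outside an allowlist of
// known legitimate tools (e.g., screen readers the app has vetted).
fun hasSuspiciousAccessibilityService(context: Context, allowlist: Set<String>): Boolean =
    enabledAccessibilityServices(context).any { service ->
        allowlist.none { trusted -> service.startsWith(trusted) }
    }
```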

Profit

At this stage, the fraudster has already obtained full control over the device. The victim cannot receive any relevant notifications since messages are intercepted.

  • Launching mass attacks and repeatedly using the victim's information to seize assets.

The emergence of these tactics can be traced back to the update of the Google Play privacy policy at the end of 2023, which tightened permissions related to location retrieval, app listings, SMS/call logs, and cameras, and broadened the classification of apps with such permissions and malicious code as junk software.

Against this backdrop, cross-platform impersonation attacks are becoming increasingly popular. Typically, fraudsters have a deep understanding of the functionalities and audit requirements within specific domains. They probe the security measures of similar platforms and then initiate mass attacks.

The TrustDecision intelligence team has observed such attacks plaguing countries including Thailand, the Philippines, Vietnam, Indonesia, and Peru.

The Derived Attack Process - Using AIGC

  • Investigating the app category and common SDKs.
  • Installing the target app.
  • Registering accounts on the target platforms.
  • Using the stolen identity to acquire more data.
  • Using AI to generate videos, such as face-swapped footage, for presentation or injection attacks.
  • Completing identity verification.
  • Withdrawing money or consuming credit limits.

In this scenario, identity verification becomes tricky as all the 'applicants' submit authentic user information, posing a challenge for conventional KYC tools to detect identity fraud. Moreover, fraudsters are well-informed about the creditworthiness of the data owner, ensuring that it meets the platform's risk control requirements.

Note: Currently, most facial recognition solutions rely on Presentation Attack Detection (PAD) to determine whether an identity is authentic.

Note: A presentation attack is when an attacker uses fake or simulated biometric data, such as masks or photos, to deceive a biometric authentication system like facial recognition. PAD aims to distinguish live human faces from such imitations. However, more and more fraudsters are now turning to deepfakes to carry out injection attacks, which bypass the physical camera and use tools such as virtual cameras to feed images directly into the system's data flow.
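
One vendor-neutral mitigation against both attack types is active liveness with a randomized challenge: the server issues a random sequence of actions (blink, turn left, and so on) and rejects sessions whose detected actions do not match in order, forcing an injection attacker to synthesize a compliant video in real time. Below is a minimal Kotlin sketch of the server-side matching logic; detecting the actions in the submitted video is assumed to be handled by an external computer-vision model.

```kotlin
enum class LivenessAction { BLINK, TURN_LEFT, TURN_RIGHT, OPEN_MOUTH, NOD }

// The server issues a fresh random challenge for every verification session.
fun issueChallenge(length: Int = 3): List<LivenessAction> =
    List(length) { LivenessAction.values().random() }

// 'detected' holds the actions observed in the submitted video, in order,
// as produced by an external CV model (assumed here). The challenge must
// appear as an ordered subsequence: incidental extra actions are tolerated,
// wrong order is not.
fun verifyChallenge(
    challenge: List<LivenessAction>,
    detected: List<LivenessAction>
): Boolean {
    var matched = 0
    for (action in detected) {
        if (matched < challenge.size && action == challenge[matched]) matched++
    }
    return matched == challenge.size
}
```

The challenge must be generated server-side and be single-use; a predictable or reusable challenge could be pre-rendered with AIGC tooling and replayed.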


Cases

Considering the availability of data, the cost of attack, and the complexity of platform encryption, fraudsters may initiate various forms of attacks, including presenting manipulated photos, 3D head models, printed pictures, screen filming, and injection attacks.

Case 1

Approach: Using Photoshop to replace the portrait on the ID card, filming the screen, and holding up a 3D head model to pass liveness detection.

Data Characteristics: Aggregation of similar or near-identical demographic data; aggregation of highly similar facial features.

Risk Labels: photoshopped image, abnormal image edges, fake face

Target: To probe the baseline of the platform's risk control capabilities


Case 2

Approach: Changing the name on the ID, using printed materials, or filming the screen.

Data Characteristics: Aggregation of similar or near-identical demographic data; aggregation of highly similar facial features.

Risk Labels: photoshopped image, screen reflection, moiré pattern

Target: To fake an identity and bypass liveness detection and the subsequent portrait comparison


Case 3

Approach: Mass-producing videos after figuring out the platform's liveness detection algorithm.

Data Characteristics: Extremely high liveness detection pass rate; highly similar video backgrounds and applicant apparel.

Risk Labels: AIGC, injection attack

Target: To fake an identity and bypass liveness detection and the subsequent portrait comparison
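
A common thread across these cases is the aggregation of highly similar facial features: many applications reusing the same face, lightly varied. One way to surface this, sketched below in Kotlin, is to flag a new application whose face embedding is near-identical to too many recent ones. The sketch assumes L2-normalized embeddings from an upstream face model; the threshold values are illustrative, not production-tuned.

```kotlin
// Dot product of L2-normalized embeddings equals their cosine similarity.
fun cosineSimilarity(a: FloatArray, b: FloatArray): Float {
    require(a.size == b.size) { "Embeddings must have the same dimension" }
    var dot = 0f
    for (i in a.indices) dot += a[i] * b[i]
    return dot
}

// Flags abnormal aggregation: the same face appearing across too many
// recent applications. Threshold values are illustrative assumptions.
fun isAggregationAnomaly(
    newEmbedding: FloatArray,
    recentEmbeddings: List<FloatArray>,  // e.g., applications from the last 24h
    similarityThreshold: Float = 0.92f,  // "near-identical" cutoff
    maxNearDuplicates: Int = 3           // distinct applications sharing one face
): Boolean {
    val nearDuplicates = recentEmbeddings.count {
        cosineSimilarity(newEmbedding, it) >= similarityThreshold
    }
    return nearDuplicates >= maxNearDuplicates
}
```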

The critical question arises:

How to prepare for such attacks?

The above risks stem from a combination of technologies and tactics, including social engineering, malware, remote control, and AIGC. To address them, platforms must strengthen user education on fraud vigilance, harden app security with firewalls and malware detection and removal tools, and develop effective monitoring mechanisms. Additionally, they should improve strategies for dealing with malicious AIGC applications and continuously update anti-fraud algorithms to detect and respond to risky behaviors promptly.

In terms of implementation, this involves business logic design, reconstruction of tools and techniques, monitoring and analysis of the identity verification process and its results, verification behavior analysis, CV model counteraction, empowering decision-making AI models, and supplementary offline retrieval.

TrustDecision suggests:

  • Establish a comprehensive risk profile database including the user’s device, IP, account information, etc.
  • Conduct environmental scanning to ensure operational and network security. A susceptible environment is a hotbed for injection attacks.
  • Detect data aggregation anomalies, for example the abnormal aggregation of device information, facial features, account information, and demographic data.
  • Verify the license. This serves to protect key assets, such as validation results, against replay attacks and malicious API calls (a minimal sketch follows this list).
  • Inspect the submitted identity verification materials and verify facial features to catch anomalies and deepfakes.
  • Set up anomaly-monitoring alerts for the relevant sites and countries.
  • Educate your users:
      ◦ Do not click on suspicious links; mobile malware is often spread through malicious links in emails, text messages, and social media posts.
      ◦ Only download applications from official platforms such as the Google Play Store and Apple App Store.
      ◦ When installing new applications, carefully review the requested permissions, and remain highly vigilant when an application requests accessibility services.
      ◦ Publicize the platform's official phone number to prevent attackers from impersonating platform customer service.
      ◦ Remind users to manually check their SMS inbox for verification messages from unfamiliar platforms.
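
On the license verification point: one generic way to protect a validation result as it travels from the verification service to downstream business logic is to bind it to a single-use nonce and an expiry time, and sign it with a shared secret. The Kotlin sketch below uses HMAC-SHA256 from the JDK; the token layout and in-memory nonce store are assumptions for illustration.

```kotlin
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// Sketch: a signed, single-use verification-result token. The nonce and
// expiry defeat replay; the HMAC defeats tampering and fabricated results.
object ResultToken {
    // In production, use a shared TTL store (e.g., Redis) instead.
    private val usedNonces = HashSet<String>()

    // 'payload' must not contain '|' in this simplified layout.
    fun sign(payload: String, nonce: String, expiresAt: Long, secret: ByteArray): String {
        val mac = Mac.getInstance("HmacSHA256")
        mac.init(SecretKeySpec(secret, "HmacSHA256"))
        val body = "$payload|$nonce|$expiresAt"
        val sig = mac.doFinal(body.toByteArray()).joinToString("") { "%02x".format(it) }
        return "$body|$sig"
    }

    fun verify(token: String, secret: ByteArray, now: Long): Boolean {
        val parts = token.split("|")
        if (parts.size != 4) return false
        val (payload, nonce, expiry) = parts
        val expiresAt = expiry.toLongOrNull() ?: return false
        if (sign(payload, nonce, expiresAt, secret) != token) return false // tampered
        if (now > expiresAt) return false                                  // expired
        return usedNonces.add(nonce)                                       // false => replayed
    }
}
```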

About TrustDecision

To effectively mitigate the risks posed by AI-driven deepfake attacks, it's crucial to introduce effective and robust identity verification technologies. This includes implementing preventative measures against account takeover (ATO) and other forms of application fraud.

TrustDecision offers comprehensive application fraud prevention solutions by integrating endpoint risk recognition capabilities, liveness detection algorithms, and image anomaly detection capabilities. Our solution suite, including KYC++ and Application Fraud Detection, is designed to combat identity fraud risks derived from advanced generative AI techniques, such as presentation attacks and injection attacks. By leveraging these innovative tools, businesses can mitigate the risk of intensified fraud losses, safeguarding their operations and assets.

Follow us for insightful discussions on emerging trends in risk management strategies and technology to safeguard your business against fraud!
