Deepfake Media FinCEN Fraud Alert
Foodman CPAs and Advisors
Forensic Accounting & Litigation Support * Complex International Tax Compliance * Banking Compliance, FATCA/CRS/QI
On 11/13/24, FinCEN issued an alert to help financial institutions recognize fraud schemes involving deepfake media created with generative artificial intelligence (GenAI) tools. The alert describes the typologies associated with these schemes, provides red flag indicators to assist in detecting and reporting suspicious activity, and reminds financial institutions of their obligations under the Bank Secrecy Act (BSA).

FinCEN has observed an increase in suspicious activity reporting by financial institutions describing the suspected use of deepfake media, including fraudulent identity documents used to bypass established identity verification and authentication processes. The misuse of GenAI tools contributes to the growth of cybercrime and fraud, both of which are central to FinCEN’s Anti-Money Laundering and Countering the Financing of Terrorism National Priorities. The alert is part of the U.S. Department of the Treasury’s broader effort to give financial institutions insight into the potential benefits and risks of adopting artificial intelligence. Accordingly, financial institutions are encouraged to work with corporate governance professionals who are experts in fraudulent uses of GenAI tools.
Deepfake Media and Publicly Available GenAI Tools
The FinCEN alert states that the arrival of GenAI tools has significantly reduced the resources needed to create high-quality synthetic content: media that is either wholly generated by digital or artificial means or that has been altered or manipulated using analog or digital technologies. In many cases, GenAI can now produce synthetic content that is indistinguishable from original, human-generated material. Highly realistic GenAI-generated content is often referred to as “deepfake” content. Deepfakes can fabricate seemingly authentic events, such as an individual appearing to do or say things they never actually did or said.
FinCEN’s analysis of BSA data indicates that criminals have used GenAI to alter or create fraudulent identity documents in order to circumvent identity verification and authentication processes.
Financial institutions often identify GenAI and synthetic content in identity documents by re-reviewing the documentation submitted during the account opening process. FinCEN notes, however, that three indicators in particular warrant additional scrutiny.
Financial institutions have also implemented enhanced due diligence measures to identify deepfake identity documents beyond the initial account opening process. While the indicators identified in the alert do not definitively establish suspicious activity, they may prompt further investigation.
FinCEN has identified nine deepfake media red flag indicators to help financial institutions detect, prevent, and report potentially suspicious activity related to the use of GenAI tools for illicit purposes; the alert sets them out in full.
Know this
Criminals also use deepfake media to engineer attacks against customers and employees of financial institutions, facilitating a range of scams and frauds, including business email compromise (BEC) schemes, spear phishing, elder financial exploitation, romance scams, and virtual currency investment fraud.
Has your financial institution been a victim of deepfake media fraud?
Does your financial institution have processes in place to prevent or reduce the risk of deepfake identity documents?
Is your financial institution reporting suspicious activity related to the use of GenAI tools for illicit purposes?
Who is your corporate governance advisor?