Navigating the Perils of Automated Identification: Safeguarding the Financial Industry in the Age of Fraud
Frank S. Jorga | Co-CEO and founder of WebID Solutions

In recent days, the revelations from 404 Media's report on the now-defunct website "Onlyfakes" have sparked renewed concerns regarding the safety of digital identification procedures. According to the report, "Onlyfakes" provided a service on the clear web, offering to generate highly realistic photos of ID documents using data and photos uploaded by users. Remarkably, for a mere $15, the author of the article managed to create counterfeit ID photos that successfully circumvented the automated ID verification services of numerous prominent crypto brokers and neobanks in the EU.

While the PSD2 regulation in Europe requires financial institutions to identify and verify customers during onboarding through the "know your customer" (KYC) process, the specific measures required vary from country to country. Automated tools that prompt users to submit photos of their identity documents and faces for biometric comparison have become widespread, but these tools often struggle to comprehensively authenticate ID documents.
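To make the structure of such an automated flow concrete, here is a minimal sketch of the decision logic behind a typical remote onboarding check. All names, scores, and thresholds are illustrative assumptions, not any vendor's actual API; real systems combine many more signals:

```python
from dataclasses import dataclass

@dataclass
class VerificationResult:
    document_score: float    # 0..1: authenticity of the ID document photo
    face_match_score: float  # 0..1: similarity of selfie vs. document portrait
    liveness_score: float    # 0..1: confidence the selfie is a live capture

def kyc_decision(result: VerificationResult, threshold: float = 0.85) -> str:
    """Return 'accept', 'reject', or 'manual_review'.

    A fully automated accept requires every check to clear the
    threshold; clearly failing checks reject; borderline results are
    escalated to a human operator rather than decided automatically.
    """
    scores = (result.document_score,
              result.face_match_score,
              result.liveness_score)
    if all(s >= threshold for s in scores):
        return "accept"
    if any(s < 0.5 for s in scores):
        return "reject"
    return "manual_review"
```

The weakness the article describes sits in the first branch: if a counterfeit document photo scores above the threshold, the fully automated path accepts it with no human ever looking at the case.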

National Differences

In many eastern and southern European countries, automated processes like these are in place. While they make it more convenient for the bank and end-customer due to speed, many analyses, like the one by 404 Media or the 2022 Chaos Computer Club report, have shown that many of these automated tools can be easily tricked using various methods.

The consequences are visible in countries with weak KYC requirements: the UK, for example, has experienced a surge in credit fraud. UK authorities reported a rise of roughly 30% in ID-document-based financial fraud in 2022 alone, resulting in over £100 million in losses.

Shifting the perspective to Germany: Europe's biggest economy has traditionally had a strong focus on IT security and does not allow banks to onboard customers based solely on automated tools like those used in the UK. Instead, so-called Video Identification has been established as the most prominent identification process, in which operators check the ID documents and their owners in a video call. By focusing on security features such as holograms and biometric characteristics, and by asking psychological questions to ensure the person is not a victim of social engineering, this human-driven procedure has prevented a similar rise in fake-identity-driven fraud.

Technology

As technology trends like artificial intelligence, and especially neural networks, progress at a rapidly increasing speed, fraud risks for automated identification processes are rising. With the commercial availability of fraudulent tools, lax regulations create an opportunity for previously unskilled fraudsters to engage in financial crime at almost no risk. Recent media reports have mainly focused on the risks of "deepfake" technology, which can replace a person's face in a video with the face of another person. While the development of deepfake technology poses several risks, first and foremost by eroding trust in media, the risks differ in the realm of identification.

[Image: How a deepfake is created]

While AI-driven identification solutions can outperform humans under perfect conditions, most AI models struggle under real-world conditions. Onlyfakes, for example, successfully used fuzzy background pictures taken on carpets to throw off the AI, putting automated tools at a significant disadvantage compared to the human brain. Conversely, most deepfakes are easily detected by people in a live environment when the right countermeasures are applied.

[Image: Deepfake technology]

In Germany, specific countermeasures, such as requiring hand gestures in front of the face and ID document during identification, expose AI-generated artifacts that would otherwise go unnoticed by the operator. While AI may excel under ideal circumstances, human intuition remains indispensable in detecting anomalies and mitigating risks.
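One way to see why the hand-gesture challenge works: many face-swap pipelines lose tracking when the face is partially occluded, so a sharp collapse in per-frame tracker confidence during the requested gesture is a red flag, while a genuine camera feed degrades only gracefully. A minimal heuristic sketch, assuming per-frame confidence scores from some upstream face tracker (the scores and the threshold are illustrative assumptions):

```python
def occlusion_challenge_flag(confidences: list[float],
                             challenge_start: int,
                             challenge_end: int,
                             drop_threshold: float = 0.4) -> bool:
    """Flag a session as suspicious if face-tracker confidence
    collapses while the user holds a hand in front of the face.

    A genuine feed typically shows a moderate drop under occlusion;
    a deepfake pipeline often breaks down entirely, producing a much
    larger drop relative to the pre-challenge baseline.
    """
    baseline = sum(confidences[:challenge_start]) / max(challenge_start, 1)
    window = confidences[challenge_start:challenge_end]
    if not window:
        return False
    challenge_avg = sum(window) / len(window)
    return (baseline - challenge_avg) > drop_threshold
```

In practice such signals only assist the operator; the final judgment in Video Identification remains with the human reviewing the live feed.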

[Image: Deepfake detection techniques]

Technology vs. Common Fraud Scenarios for Remote Onboarding

While the speed at which AI is emerging is creating new risks, most fraudsters are still using social engineering to achieve their goals:

In a typical scam, perpetrators advertise highly desirable items such as apartments or job opportunities. When individuals express interest, the scammers request basic personal information and use this data to open bank accounts in the victims' names without their consent. They then coerce their victims into verifying their identities through a digital tool. Many victims comply, unaware that they are authorizing the opening of a bank account rather than pursuing the advertised apartment or job.

Fraud scenarios like this can only be prevented by asking during the interaction what the person is identifying themselves for, which makes human interaction, at least for high-risk cases, very important.
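That purpose check can be expressed as a simple routing rule: if the purpose the customer states does not match the product actually being opened, the session is aborted, and high-risk products are always escalated to a human operator who can ask. A minimal sketch with illustrative product and purpose labels (all names here are assumptions for the example):

```python
# Products for which a human operator must conduct the identification.
HIGH_RISK_PRODUCTS = {"bank_account", "credit_line", "crypto_wallet"}

def route_session(product: str,
                  stated_purpose: str,
                  expected_purpose: str) -> str:
    """Route an identification session based on a purpose check.

    A victim of the apartment scam described above would state
    'apartment_application' while the session actually opens a bank
    account, so the mismatch aborts the session before any account
    is created.
    """
    if stated_purpose != expected_purpose:
        return "abort_suspected_social_engineering"
    if product in HIGH_RISK_PRODUCTS:
        return "human_video_identification"
    return "automated_identification"
```

The point of the sketch is that the decisive signal only exists because someone asks the question at all; a fully automated flow that never asks cannot detect the mismatch.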

Summary

While AI is developing continuously, it has not yet become the solution to every problem in the context of identification. For the near future, it is important to understand both the risks and the potential of AI; otherwise, trust in digital identities will erode in the coming years.
