Tech's role in banking compliance in 2024
2023 bore witness to the heightened stakes in banking compliance, with the industry facing a daunting $6.6bn in penalties for lapses in AML, KYC, and related regulatory domains.
Research highlight
Almost 9 out of 10 financial institutions are willing to trade off AI explainability for greater efficiency
The State of Financial Crime 2024 is based on a survey of 600 C-suite and senior compliance decision-makers across the US, Canada, UK, France, Germany, the Netherlands, Singapore, Hong Kong, and Australia. All respondents work in financial services or fintech organizations with 50+ employees and $5 billion+ in total assets.
ComplyAdvantage’s survey points to a clear prioritization of efficiency over transparency in AI systems:

- 89% of respondents are willing to compromise AI explainability for greater efficiency.
- 68% claim to understand how regulators plan to regulate AI, indicating awareness of and engagement with regulatory initiatives in this field.
- 66% agree that AI poses a cybersecurity threat, reflecting widespread recognition of the risks associated with the technology.
- Only 59% feel well prepared to meet proposed AI legislation, suggesting a gap in readiness for regulatory change.
- Over 50% are concerned about explaining AI-based financial crime outcomes, underscoring apprehension about the transparency of AI algorithms in this specific context.
The rise of AI plays a dual role in financial crime, acting both as a tool for criminals and as a solution for combating illicit activity. Criminals leverage AI for fraud, cyber attacks, and gaining access to the financial system; its capabilities extend to inciting terror attacks, generating deepfakes for extortion, corporate espionage, and disseminating illegal materials. AI-enabled methods such as data poisoning and forgery are fostering a new model termed "Crime as a Service." Concerns about potential societal disruption have prompted international efforts, such as the UK's AI Safety Summit, to address the safety and security implications.
AI also finds increasing application in legitimate solutions such as customer onboarding, transaction monitoring, and regulatory compliance, where AI-based systems offer efficiencies by reducing false positives and enhancing risk detection. The global AI market is expected to grow substantially, reflecting widespread adoption of these technologies. However, explainability becomes crucial: institutions must be able to understand and communicate how an AI system reached a decision in order to meet regulatory standards.
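To make the explainability point concrete, here is a minimal sketch of a transaction-risk score whose output can be broken down feature by feature. It is purely illustrative and not from the report: the feature names, weights, and threshold are hypothetical, and real monitoring systems are far more sophisticated, but the idea that each alert comes with per-feature contributions is what regulators mean by a transparent decision process.

```python
import math

# Illustrative weights for a simple logistic-style risk model.
# All names and values here are hypothetical, chosen only to show
# how per-feature contributions make a score explainable.
WEIGHTS = {
    "amount_zscore": 1.2,      # how unusual the amount is for this customer
    "new_beneficiary": 0.8,    # first payment to this counterparty
    "high_risk_country": 1.5,  # destination on an internal risk list
    "night_time": 0.3,         # transaction outside business hours
}
BIAS = -2.0  # baseline log-odds for a typical transaction

def score_transaction(features: dict) -> tuple[float, dict]:
    """Return (risk probability, per-feature contributions) for an alert."""
    contributions = {
        name: WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS
    }
    logit = BIAS + sum(contributions.values())
    probability = 1 / (1 + math.exp(-logit))
    return probability, contributions

prob, why = score_transaction(
    {"amount_zscore": 2.5, "new_beneficiary": 1.0, "high_risk_country": 1.0}
)
print(f"risk={prob:.2f}")
# List contributions largest-first, so an analyst sees why the alert fired.
for name, value in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {value:+.2f}")
```

Because the score is a sum of named contributions, a compliance analyst can answer "why was this flagged?" directly, which is exactly what becomes difficult with opaque models and what the surveyed institutions are trading away for efficiency.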
Cybercrime, fuelled by privacy-enhancing technologies and expanding data volumes, poses significant threats globally. Criminals exploit weak IT protocols, fake investment websites, and social engineering scams, contributing to staggering losses projected to reach $9.5 trillion in 2024. Synthetic identity fraud, particularly online, remains a formidable challenge due to its detection complexities. Despite efforts to combat fraud, instances of payment fraud persist at historically high levels.
The rise of digital banking and digital assets introduces new avenues for criminal exploitation. Digital assets, especially those with privacy features, remain vulnerable to misuse in the absence of robust regulations. The emergence of the metaverse presents additional challenges, with potential risks to privacy and security. Law enforcement observes increased criminal activity in immersive environments, highlighting the urgency for enhanced cybersecurity measures across industries.