Artificial Intelligence in Banking: Hidden in Plain Sight

Artificial intelligence (AI) seems to be everywhere. It’s often embedded into everyday banking technologies and processes, such as customer service transcription, marketing tools, credit decision-making, cybersecurity, and fraud prevention. Yet, AI's presence is not always recognized or understood by everyone using it. Some tools, like chatbots, fall into a gray area—sometimes they're pre-programmed, and other times they adapt to user interactions in more dynamic ways.

The rapid rise of generative AI, especially with tools like ChatGPT or Google's Gemini, has made AI more accessible than ever. Now, anyone with an internet connection can leverage this technology to streamline tasks like report writing and data analysis. Banks need to be acutely aware of how AI, especially these "hidden" aspects, is integrated into their operations.

This is crucial as regulatory bodies tighten their grip on AI-related risks. For example, Colorado’s Consumer Protections for Artificial Intelligence in the U.S. and the EU’s Artificial Intelligence Act emphasize the importance of defining and governing AI usage. Banks that fail to align with these regulations may struggle to remain compliant, potentially putting themselves at risk of penalties or other legal repercussions.

Beyond compliance, maintaining customer trust through responsible AI use is paramount. As banks deploy AI, they need to implement guardrails for compliance and risk governance to avoid developing solutions that infringe on privacy or harm users.

Understanding AI in Banking

AI in banking isn't a new concept. As far back as the 1980s, financial institutions used 'expert systems' to help financial planners create tailored plans for clients. These early forms of AI were designed to emulate human decision-making, laying the groundwork for the advanced systems we see today.

However, defining AI remains challenging. The term is sometimes used interchangeably with machine learning or analytics. According to the October 2023 White House Executive Order on AI, AI is a machine-based system that can make predictions, recommendations, or decisions based on human-defined objectives. Yet this definition may not capture the full spectrum of what AI can do, leaving room for interpretation.

A U.S. Treasury report on AI and cybersecurity underscores this ambiguity. The report notes that even experts don’t agree on a universal definition of AI, which complicates banks' ability to assess and manage AI-related risks. This lack of clarity also makes it harder to communicate with regulators and vendors, increasing the complexity of compliance.

The Rise of Generative AI

Generative AI, like ChatGPT, has garnered significant attention, but it represents only a small fraction of the AI landscape in banking. Most banks rely on traditional AI models, such as neural networks and stochastic gradient boosted trees, to predict future outcomes from historical data. These machine learning models learn relationships from the data themselves, without those relationships being explicitly programmed by humans.
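To make the contrast with generative AI concrete, here is a minimal, pure-Python sketch of the gradient-boosting idea: an ensemble of simple decision stumps fit iteratively to the residuals of historical data. The data points and the debt-to-income framing are entirely hypothetical, chosen only to illustrate how such a model "self-learns" a relationship.

```python
# Toy gradient boosting with decision stumps. All data here is made up;
# real banking models use far richer features and vendor/ML libraries.

def fit_stump(xs, residuals):
    """Find the single threshold split on x that best fits the residuals."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda x: lmean if x <= t else rmean

def gradient_boost(xs, ys, rounds=20, lr=0.3):
    """Fit an additive ensemble of stumps to minimize squared error."""
    base = sum(ys) / len(ys)          # start from the mean outcome
    stumps, preds = [], [base] * len(ys)
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        stump = fit_stump(xs, residuals)   # each stump corrects leftover error
        stumps.append(stump)
        preds = [p + lr * stump(x) for p, x in zip(preds, xs)]
    return lambda x: base + lr * sum(s(x) for s in stumps)

# Hypothetical history: default likelihood vs. debt-to-income ratio.
xs = [0.1, 0.2, 0.3, 0.5, 0.7, 0.9]
ys = [0.0, 0.0, 0.1, 0.4, 0.8, 0.9]
model = gradient_boost(xs, ys)
print(round(model(0.8), 2))  # higher ratio -> higher predicted risk
```

The point of the sketch is that nowhere is the risk relationship hand-coded: the model recovers it from the historical examples, which is exactly what makes explaining such models to regulators non-trivial.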

Yet, as banks increasingly adopt AI, many struggle to explain the workings of these models to a degree that satisfies regulatory scrutiny. This is a growing concern, especially as the line between traditional AI and generative AI blurs in public discourse.

Finding Clarity: Fair and Ethical AI Usage

Banks must ensure that the AI models they use are fair, ethical, and compliant. To start, institutions should inventory their AI use across business lines and develop standards to govern model development. These standards should also outline when AI models become harmful or obsolete and should be phased out.
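As one way to picture the inventory step, here is a minimal record type a bank might keep per model. The field names, the example values, and the one-year revalidation policy are illustrative assumptions, not drawn from any regulatory standard.

```python
# Illustrative AI model inventory record; fields and policy are assumptions.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AIModelRecord:
    name: str
    business_line: str        # e.g. "small business lending"
    model_type: str           # e.g. "gradient boosted trees"
    purpose: str
    owner: str
    vendor: Optional[str]     # None for in-house models
    last_validated: date
    status: str = "active"    # active | under_review | retired

    def needs_review(self, today: date, max_age_days: int = 365) -> bool:
        """Flag models whose last validation is older than policy allows."""
        return (today - self.last_validated).days > max_age_days

# Hypothetical entry for a credit-decisioning model.
record = AIModelRecord(
    name="SMB credit score v2",
    business_line="small business lending",
    model_type="gradient boosted trees",
    purpose="credit decisioning",
    owner="credit risk team",
    vendor=None,
    last_validated=date(2023, 6, 1),
)
print(record.needs_review(date(2024, 9, 1)))  # validation is stale
```

A structure like this gives the standards somewhere to attach: the status field encodes when a model should be phased out, and the review check operationalizes the governance cadence.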

While this sounds simple, the execution is often complex. For instance, Bankwell Bank in New Canaan, Connecticut, is exploring AI and generative AI for small business lending, sales, and marketing. Yet even though AI is discussed at the bank's town halls, it has not yet formalized a comprehensive guidebook on AI definitions or use.

Understanding AI's role extends beyond banking operations to third-party services. For example, cybersecurity and fraud prevention often rely heavily on AI. Banks should ask vendors for data flow diagrams to understand how their data is being processed and protected. The recent CrowdStrike outage reminded the industry of the importance of grasping vulnerabilities in third- and fourth-party vendor relationships.

The Role of Governance in AI Adoption

As AI adoption accelerates, banks must stay informed about their regulatory landscape. The government's increasing focus on AI underscores the importance of transparent, responsible usage.

Bankers should have a deep understanding of AI trends and developments. While the size of an institution doesn't always reflect its sophistication with AI, smaller organizations are making strides. I recently spoke with a mid-sized credit union that established an AI center of excellence and is already experimenting with code acceleration through generative AI. They're also in discussions with their risk and regulatory teams to create initial guardrails.

AI has woven itself into the fabric of banking, but its role is not always visible. Banks must ensure they understand where AI is being used, how it aligns with emerging regulations, and how it affects their operations. Clear governance and ethical frameworks are essential to navigating this evolving landscape and maintaining customer trust.
