What if the future of AI hinges not on tech’s loudest rebels, but on the quiet pragmatism of bankers?
On Friday, the American Bankers Association (ABA) submitted a thoughtful Comment Letter, authored by Ryan T. Miller, CIPP/US, in response to the Administration’s Request for Information on Artificial Intelligence (AI) development.
At a time when some tech sector voices advocate for a hands-off approach—no legislation, no regulation, no rules—the ABA takes a refreshingly pragmatic stance: effective management of AI risks is essential for American dominance of this transformative field.
The ABA’s letter (linked below) outlines a clear-eyed acceptance of the need for robust AI oversight in finance. They endorse the “three lines of defense” model for AI governance, alongside supervision, effective challenge, and third-party risk management.
They also underscore tackling bias and ensuring fair lending.
One standout point: the ABA urges Congress to pass federal legislation to preempt a patchwork of state-level AI rules. Without Congressional action, they warn, we risk fragmented regulation that could hinder innovation.
They advocate for an AI risk management framework that avoids the overly prescriptive approaches seen in Europe and in a recently vetoed California bill, favoring instead flexibility, clarity, and a focus on outcomes.
The ABA also shines a spotlight on non-bank AI providers selling models to financial institutions.
Banks can’t fully assess third-party algorithms, training data, or performance without cooperation—yet these vendors often operate outside the regulatory perimeter.
The ABA’s suggestion?
Bring them into the fold with targeted oversight, ensuring transparency and accountability across the ecosystem.
The ABA advocates for more modern bank supervision too. Field examiners, they argue, too often fixate on minutiae—like dissecting lines of code—rather than focusing on what matters: a model’s inputs, outputs, and real-world outcomes. The ABA also envisions regulators using AI themselves and asks for clarity on how agencies will integrate AI into examinations.
The ABA proposes voluntary strategies like model cards for validation and industry certifications to benchmark fairness, transparency, and explainability. They envision an approach to AI explainability that blends data governance, weighted decision-making, assurance testing, and continuous risk monitoring – all practical steps toward responsible AI deployment.
At its core, the ABA’s message is this: banks want to innovate with AI, but “Castles cannot be built on quicksand; dominance can only result from order.”
https://lnkd.in/gWUnszTu