AI and compliance in the real world: CCOs meet in NYC
Arguably, there are more financial institutions located in the New York metropolitan area than anywhere else on the planet, so it was only fitting for a conference on AI, Technology Innovation & Compliance to be held in NYC – at the storied Princeton Club, no less. A few weeks ago I had the pleasure of speaking at this one-day conference, and found the attendees’ receptivity to artificial intelligence (AI), and creativity in applying it, to be inspiring and energizing. Here’s what I learned.
CCOs embrace new tech
As you might expect, the Chief Compliance Officers (CCOs) attending the AI conference were extremely interested in applying artificial intelligence to their business, whether in the form of machine learning models, natural language processing or robotic process automation – or all three. These CCOs already had a good understanding of AI in the context of compliance, knowing that:
- Working through sets of rules alone will not find “unknown unknowns”
- They should take a risk-based approach in determining where and how to divert resources to AI-based methods in order to find the big breakthroughs.
All understood the importance of data, and that getting the right data into the AI system is job number one. Otherwise, it’s “garbage in, garbage out.” I also discussed how to provide governance around a single source of data, the importance of regular updating, and how to ensure permissible use and quality.
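To make that concrete, here is a minimal sketch of the kind of automated checks that can sit in front of a model’s data feed. The column names, thresholds, and pandas-based approach are my own illustrative assumptions, not a description of any particular production pipeline.

```python
# A minimal sketch of pre-model data governance checks on a single governed
# data source. Field names and thresholds are illustrative assumptions.
import pandas as pd

EXPECTED_COLUMNS = {"customer_id", "txn_amount", "txn_date", "merchant_code"}
MAX_MISSING_RATE = 0.02          # tolerate at most 2% missing values per field
MAX_STALENESS_DAYS = 7           # data must be refreshed at least weekly

def validate_feed(df: pd.DataFrame) -> list[str]:
    """Return a list of governance issues; an empty list means the feed passes."""
    issues = []

    # 1. Schema: every expected column must be present.
    missing_cols = EXPECTED_COLUMNS - set(df.columns)
    if missing_cols:
        issues.append(f"missing columns: {sorted(missing_cols)}")

    # 2. Quality: flag fields with too many missing values ("garbage in").
    for col in EXPECTED_COLUMNS & set(df.columns):
        rate = df[col].isna().mean()
        if rate > MAX_MISSING_RATE:
            issues.append(f"{col}: {rate:.1%} missing exceeds {MAX_MISSING_RATE:.0%}")

    # 3. Freshness: the feed must have been updated recently.
    if "txn_date" in df.columns:
        staleness = (pd.Timestamp.today() - pd.to_datetime(df["txn_date"]).max()).days
        if staleness > MAX_STALENESS_DAYS:
            issues.append(f"feed is {staleness} days old")

    return issues
```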
Explainable AI and GDPR
Explainable AI (XAI) is a big topic of interest to me, and among the CCOs at the conference there was an appreciation that AI needs to be explainable, particularly in the context of compliance with GDPR. The audience also recognized that their organizations need to layer in the right governance processes around model development, deployment, and monitoring, key steps in the journey toward XAI. I reviewed the current state of the art in Explainable AI methods, and where that road leads: toward AI that is more grey-box than black-box.
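As one simple illustration of what “explainable” can mean in practice, the sketch below turns a linear scoring model’s per-feature contributions into reason codes for a single decision. The feature names and weights are invented for this example; explainability for non-linear models (surrogate models, latent explanations, and the like) is more involved.

```python
# A minimal sketch of reason codes from a linear scoring model. This is a
# generic illustration, not FICO's scoring methodology; the feature names
# and coefficients are made up.
import numpy as np

FEATURES = ["utilization", "recent_inquiries", "delinquencies", "account_age"]
WEIGHTS = np.array([1.8, 0.9, 2.4, -0.6])   # illustrative trained coefficients
BIAS = -1.2

def explain(x: np.ndarray, top_k: int = 2) -> dict:
    """Score one case and report the features that pushed the score hardest."""
    contributions = WEIGHTS * x                                   # per-feature contribution
    score = 1.0 / (1.0 + np.exp(-(contributions.sum() + BIAS)))   # logistic score
    order = np.argsort(-np.abs(contributions))                    # largest impact first
    reasons = [(FEATURES[i], float(contributions[i])) for i in order[:top_k]]
    return {"score": float(score), "reason_codes": reasons}

print(explain(np.array([0.9, 3.0, 1.0, 4.0])))
```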
Ethics, safety and other hot topics
In pretty much every AI conversation I have, ethics are the subject of lively discussion. The New York AI conference was no exception. The panel members and I talked about how any given AI system is not inherently ‘ethical’; it learns from the inputs it’s given. The modelers who build the AI system must keep sensitive data fields out of those inputs, and those same modelers need to examine whether inadvertent biases are derived from the inputs during training of the machine learning model.
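A hedged sketch of what such an examination might look like: protected attributes are excluded from the model’s inputs but retained in a held-out test file so that outcome rates can be compared across groups. The column names and the disparity threshold are illustrative assumptions, not a prescribed standard.

```python
# A minimal sketch of a bias screen on a hypothetical test set. Column names
# and the disparity threshold are illustrative assumptions.
import pandas as pd

SENSITIVE = ["gender", "ethnicity"]        # fields the model is never shown
DISPARITY_THRESHOLD = 0.8                  # e.g., the common "80% rule" heuristic

def model_inputs(df: pd.DataFrame) -> pd.DataFrame:
    """Drop sensitive fields before anything reaches the training pipeline."""
    return df.drop(columns=[c for c in SENSITIVE if c in df.columns])

def screen_for_bias(df: pd.DataFrame, outcome_col: str, group_col: str) -> dict:
    """Compare favorable-outcome rates across groups of one protected attribute."""
    rates = df.groupby(group_col)[outcome_col].mean()
    ratio = rates.min() / rates.max()      # worst-off group vs best-off group
    return {
        "rates_by_group": rates.to_dict(),
        "disparity_ratio": float(ratio),
        "flag_for_review": bool(ratio < DISPARITY_THRESHOLD),
    }
```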
Here, I was glad to be able to share some of the organizational learning FICO has accumulated over decades of work developing analytic models for the FICO® Score, our fraud and anti-money laundering (AML) products, and many others.
AI safety was another hot topic. I shared that although models will make mistakes and a risk-based approach is needed, machines often outperform human decision-making, as autopilots on airplanes do. Humans still need to be there to step in when something changes to the degree that the AI system may no longer make an optimal decision, whether that change is in the environment or in the character of the data.
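One common way to detect such a change in data character is to monitor how far each input’s current distribution has drifted from its training-time baseline, for example with a population stability index (PSI). The sketch below is illustrative; the bin count and alert threshold are assumptions, not a prescribed standard.

```python
# A minimal sketch of drift monitoring so humans know when to step in, using
# the population stability index (PSI). Bin count and alert threshold are
# illustrative assumptions.
import numpy as np

PSI_ALERT = 0.25   # a commonly cited rule of thumb for a significant shift

def psi(expected: np.ndarray, observed: np.ndarray, bins: int = 10) -> float:
    """Population stability index between a baseline sample and new data."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf            # cover the full range
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    o_frac = np.histogram(observed, edges)[0] / len(observed)
    e_frac = np.clip(e_frac, 1e-6, None)             # avoid division by zero
    o_frac = np.clip(o_frac, 1e-6, None)
    return float(np.sum((o_frac - e_frac) * np.log(o_frac / e_frac)))

# Alert a human reviewer when a feature's distribution has shifted materially.
baseline = np.random.normal(0.0, 1.0, 5000)
current = np.random.normal(0.5, 1.2, 5000)
needs_review = psi(baseline, current) > PSI_ALERT
```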
In the end, an AI system works with the data on which it was trained and is built to find patterns in that data, but the model itself is not curious; it remains constrained by the algorithm’s development, the way the problem was posed, and the data it trains on.
AI tech notes
Finally, the panel and I talked about AI software and development practices, including the risks of open source software and open source development platforms. I indicated that I am not a fan of open source software in this context, as it often leads to scientists using algorithms incorrectly or relying on someone else’s implementation. Building an AI implementation from scratch, or on an open source development platform, gives data scientists more hands-on control over the quality of the algorithms, the assumptions, and ultimately the AI model’s success in use.
I am honored to have been invited to participate in Compliance Week’s AI Innovation in Compliance conference. Catch me at my upcoming speaking events in the next month: The University of Edinburgh Credit Scoring and Credit Control XV Conference on August 30-September 1, and the Naval Air Systems Command Data Challenge Summit.
In between speaking gigs I’m leading FICO’s 100-strong analytics and AI development team, and commenting on Twitter @ScottZoldi. Follow me, thanks!