Big Brain on Campus: Dr. Scott Zoldi Talks Responsible AI at MIT, UCSD and Duke
FICO’s chief analytics officer Dr. Scott Zoldi is in demand as a speaker on the hot topic of Responsible AI not just at industry events – his recent engagements include talks at the Urban League, the American Physical Society, FinRegLab and FICO World – but also at top universities teaching the next generation of data science and business leaders. We caught up with FICO’s “big man on campus” after his recent college tour, during which he lectured to and/or met with students at the Massachusetts Institute of Technology (MIT) Sloan School of Management, the University of California San Diego Jacobs School of Engineering and Duke University’s Statistics Department. Here’s why Scott is excited about the upcoming class of data scientists and artificial intelligence (AI) professionals.
FICO: You’ve met with diverse audiences – MBA students at MIT, grad students studying machine learning at UC San Diego and statistics students at Duke. Were there any common themes in your lectures and conversations?
Scott: I was very humble about what we, as humans and data scientists, can and can’t do with AI. We live in an era in which technology, AI in particular, is ascribed magical powers. That’s not true; AI is not all-knowing. I believe it’s important that any practitioner be very honest about what we know and don’t know about this technology. Our role in the real world is to better understand it and improve it. I think it is refreshing for the students to hear plain, straight talk about AI from people who want to do it right.
FICO: How about MIT Sloan? How are MBA students thinking about AI’s role in business?
Scott: The MBA students I spoke with are potential future analytic practitioners. Many of them have worked as data scientists already and wanted to discuss the business applications of machine learning (ML) models. My aim was to provide a roadmap toward Responsible AI, along with some practical approaches and tools.
I focused on the concepts of ethics and the safety of models because these students “know enough to be dangerous” and are mature enough to know it. They recognize where the danger lies in misusing AI. We talked about what companies are doing to take a structured, well-thought-out approach to AI – in a proof-of-work sense – to demonstrate they are abiding by a value system built around documented, corporate-mandated model development standards, ethics and the absence of bias.
The MIT students came with very different sets of real-world experience, in terms of their previous work and roles, and are really focused on what a Responsible AI future looks like. Many of them have been working as data scientists and have had exposure to AI but didn’t have a good understanding of best practices for Responsible AI. They were very interested in corporate model development standards, corporate governance standards, and AI audit tools such as blockchain-based model development governance, which codifies the development history and persists it to the chain.
I think the MIT students found our discussion around tools refreshing, including challenging ourselves to follow the IEEE 7000 guidance that essentially says, “build models that you can faithfully talk to and provide explanations around, with sufficient transparency.” Most of the students are swimming in the amorphous world of open source, where they are flooded with inference-explainable AI, which is not the same as transparency into how AI is driving decisions.
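[Editor’s note: for readers curious what “persisting the development history to the chain” can look like, below is a minimal, hypothetical sketch – not FICO’s actual tooling – in which each development event (data registration, bias test, approval) is hashed and appended to a tamper-evident ledger that an auditor can later verify. All event names and values are invented for illustration. — Ed.]

```python
# Hypothetical sketch of an append-only model-governance ledger.
# Not FICO's implementation -- just an illustration of hash-chaining
# model development artifacts so the recorded history is tamper-evident.
import hashlib
import json
import time


def _hash(record: dict) -> str:
    """Deterministic SHA-256 hash of a JSON-serializable record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()


class ModelGovernanceLedger:
    def __init__(self):
        self.blocks = []

    def record(self, event: str, details: dict) -> dict:
        """Append a development event (data snapshot, bias test, approval, ...)."""
        block = {
            "timestamp": time.time(),
            "event": event,
            "details": details,
            "prev_hash": self.blocks[-1]["hash"] if self.blocks else None,
        }
        block["hash"] = _hash(block)
        self.blocks.append(block)
        return block

    def verify(self) -> bool:
        """Re-compute every hash to confirm no recorded step was altered."""
        for i, block in enumerate(self.blocks):
            body = {k: v for k, v in block.items() if k != "hash"}
            prev_ok = block["prev_hash"] == (self.blocks[i - 1]["hash"] if i else None)
            if block["hash"] != _hash(body) or not prev_ok:
                return False
        return True


# Example usage with made-up events and values:
ledger = ModelGovernanceLedger()
ledger.record("training_data_registered", {"dataset_sha256": "<hash of data>", "rows": 1_200_000})
ledger.record("bias_test_passed", {"metric": "demographic_parity_gap", "value": 0.013})
ledger.record("model_approved", {"approver": "model_governance_board"})
assert ledger.verify()
```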
In that same audience there were several people with more of an advocacy focus. They are very concerned about the social implications of technology, including its impact on issues pertaining to gender, race and sexual orientation. Although some of these students probably walked into the lecture hall with their minds made up that AI is inherently biased, I believe I won over many of them during my talk.
FICO: How about UCSD?
Scott: The San Diego grad students I spoke with are doing research in machine learning technologies. I focused on unsupervised and semi-supervised analytic approaches, using examples of anti-money laundering (AML) detection models. I really challenged them to think about the applications and methodologies used to detect rare signals, such as money laundering activity. That’s a contrast with what grad students traditionally do, which is not always as real-world-focused or practical. They might build models that distinguish between a chihuahua and a blueberry muffin, but finding a criminal who’s funneling money in and out of Russia right now is a very practical skill that most grad students aren’t really thinking about.
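[Editor’s note: as a purely illustrative sketch of the unsupervised flavor of rare-signal detection Scott describes – not an actual AML model – the snippet below uses scikit-learn’s IsolationForest on synthetic “transaction profile” features to surface the handful of accounts that look least like the rest. Feature names, distributions and thresholds are invented. — Ed.]

```python
# Minimal, hypothetical sketch of unsupervised rare-signal detection.
# Real AML models are far more sophisticated; features and values here
# are invented purely for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "transaction profile" features for 10,000 accounts:
# average amount, count of counterparties, fraction of cross-border transfers.
normal = rng.normal(loc=[200.0, 5.0, 0.05], scale=[80.0, 2.0, 0.03], size=(9_990, 3))
# Inject 10 rare accounts with an unusual pattern (large amounts, many
# counterparties, mostly cross-border) standing in for laundering behavior.
rare = rng.normal(loc=[5_000.0, 40.0, 0.9], scale=[500.0, 5.0, 0.05], size=(10, 3))
X = np.vstack([normal, rare])

# IsolationForest scores how easily a point is isolated; rare, atypical
# accounts get the lowest scores and can be queued for investigation.
model = IsolationForest(contamination=0.001, random_state=0).fit(X)
scores = model.score_samples(X)
flagged = np.argsort(scores)[:10]  # the ten most anomalous accounts
print("accounts flagged for review:", flagged)
```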
That conversation was interesting, exposing the students to a company that’s right in their backyard [the former HNC Software, where Scott worked prior to its acquisition by FICO in 2002. — Ed.]. HNC Software was, in fact, co-founded by a UCSD professor, Robert Hecht-Nielsen. My UCSD talk was about innovation, meant to inspire these students to continue finding new ways to detect rare signals in real-life data, which is messy, problematic, and doesn’t come packaged in nicely curated datasets. I focused on how the work they’re doing could have a real-life impact on problems that matter, such as the well-known challenge of money laundering, which fuels so much devastation in our world.
FICO: That must’ve been quite a departure from a lecture on ML research! How about Duke?
Scott: It was fascinating to talk with students in Duke’s Statistics department. I have a Ph.D. in Physics [Dr. Zoldi received his Ph.D. from Duke, with a concentration in chaos theory. — Ed.]; there’s a lot of math in physics, and I feel math is often not a deeply understood part of data science. Speaking with statisticians is a great treat because statistics and data science are often viewed as two different fields trying to solve similar problems. In reality, the two are much more closely related than that.
I spent some time with Duke students and faculty talking about Ethical AI and the challenges around it, similar to the MIT talk. The Duke students were very strong in traditional statistical methods, and many were skeptical that machine learning models could properly represent the messiness of the real world. They are very concerned with “How can I build a model that is perfect?” It was an interesting discussion that circles back to one of my favorite quotes from the British statistician George E. P. Box: “All models are wrong, but some are useful.”
For example, a bright student posed the concern, “I’m worried that if you have more simplified interpretable models, they may not be sufficient to describe the complete complexity of all behaviors.” I had to explain, “Who cares? A bank wants to minimize the level of false positives while detecting a certain proportion of the problem – fraud or money laundering – and you try to find the best model that a) you can explain and b) is fair. No one is looking for a perfect model.” I think some people’s brains exploded; they hadn’t understood the concept of ‘good enough.’
‘Good enough’ basically means that we don’t have the right math to build these super-complicated models, let alone interpret them. And frankly, bringing in my chaos theory work – most physical problems have a finite manifold of small dimensionality – the contrived and observed complexity is typically nonsensical, irrelevant noise around the actual manifold that describes the fundamental physics of our lives.
Conceptually, an interpretable machine learning model is one that reflects my perception of, and my ability to describe, the reality around me. The model will not be 100% accurate in every instance – no model ever is – but if I can constrain where it is accurate, and constrain the main drivers for how I explain reality, that’s how I can achieve model transparency, explainability, fairness and auditability. I didn’t say one of my favorite phrases, “Explainable first, predictive second,” at the lecture, but that is essentially the theme.
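[Editor’s note: as a toy illustration of constraining and explaining a model’s main drivers – not one of FICO’s models – the sketch below fits a simple scorecard-style logistic regression and reports, for an individual decision, which features pushed the score, i.e. the “reason codes” an analyst could review. Data and feature names are invented. — Ed.]

```python
# Toy, hypothetical sketch of an interpretable scorecard-style model with
# per-decision "reason codes". Features, data and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
feature_names = ["txn_amount", "txn_velocity", "new_device", "geo_mismatch"]

# Synthetic training data: 4 features and a binary "fraud" label driven
# mostly by txn_velocity and geo_mismatch, plus noise.
X = rng.normal(size=(5_000, 4))
logit = 1.2 * X[:, 1] + 0.8 * X[:, 3] - 0.5 + rng.normal(scale=0.5, size=5_000)
y = (logit > 0).astype(int)

scaler = StandardScaler().fit(X)
clf = LogisticRegression().fit(scaler.transform(X), y)


def reason_codes(x, top_k=2):
    """Return the top_k features pushing this case's score toward 'fraud'."""
    contrib = clf.coef_[0] * scaler.transform(x.reshape(1, -1))[0]
    order = np.argsort(contrib)[::-1]  # largest positive contribution first
    return [(feature_names[i], round(float(contrib[i]), 3)) for i in order[:top_k]]


# Score one case and explain which drivers moved it.
case = X[0]
score = clf.predict_proba(scaler.transform(case.reshape(1, -1)))[0, 1]
print("fraud score:", round(float(score), 3))
print("reason codes:", reason_codes(case))
```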
FICO: We heard there was a special guest at your Duke lecture.
Scott: Yes, I’m very pleased that my physics advisor, Dr. Henry Greenside, came to my talk. He is a neurobiologist at Duke, interested in how brains work and in applying the concepts of physics to see how brain signals move around. It’s a great analog for what I’m trying to illustrate: perfection and complexity versus transparency and explainability. I explained to the students that interpretable models aren’t necessarily less performant; they are more fair and less biased because they are tools we understand and can govern. I think they were able to wrap their heads around this approach, which is a conceptual departure from the absolute solutions they are trying so hard to find.
FICO: Thank you, Scott, for sharing your thoughts and experiences with students around the country, and with us.
Follow Scott on Social Media
Follow Dr. Scott Zoldi on LinkedIn and Twitter @ScottZoldi to keep up with his latest thoughts on Responsible AI, data science and more.