Ethical AI, Monetizing False Negatives and Growing Total Addressable Market
Bill Schmarzo
Dean of Big Data, CDO Chief AI Officer Whisperer, recognized global innovator, educator, and practitioner in Big Data, Data Science, & Design Thinking
What if I told you that companies that don’t embrace Ethical AI are leaving significant amounts of “Money on the Table”; that they are not only missing out on potentially profitable customers, but that over time they are eroding their Total Addressable Market (TAM)? Do I have your attention now?
After I published the blog “The Ethical AI Application Pyramid”, a question from Karrie Sullivan coupled with a mentoring session with the startup unfog.ai team (Monica and Chloe) caused me to reframe the “Learning from False Negatives” message:
“If your AI model doesn't take into consideration the ultimate outcomes of the AI model's False Negatives, then confirmation bias in the AI model could set in and eventually the company's Total Addressable Market (TAM) could shrink to a point where the business might no longer be viable.”
Yeah, not only is Ethical AI the right thing to do from a cultural and societal perspective, but there are direct bottom-line financial ramifications if your AI models are not learning and adapting from their False Negatives. And while there are all sorts of ethical reasons for organizations to embrace Ethical AI, it’s really hard to use guilt and a clear conscience to motivate companies to do the right thing. But make a clear ROI-driven financial case for doing the “ethical thang” and companies are suddenly ready to join sainthood!
So, let me explain how we can convert the “Ethical AI” conversation into a “Profitable AI” land grab, and the starting point for that conversation is understanding how to identify, learn from, and monetize the AI model’s False Negatives.
Turning Ethical AI into Profitable AI
In the blog “The Ethical AI Application Pyramid”, I stated:
“In order to create AI models that can overcome model bias, organizations must address the False Negatives measurement challenge. Organizations must be able to 1) track and measure False Negatives in order to 2) facilitate the continuously-learning and adapting AI models that mitigate AI model biases.”
Figure 1: Ethical AI Application Pyramid
In order to create AI models that can overcome the confirmation bias that is found in most AI models today, we must instrument our AI models to learn from the model’s False Negatives; that is, to learn from the people we didn't hire or didn't grant a loan or didn't admit to college.
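To make that concrete, here is a minimal sketch of what “instrumenting for the False Negatives” might look like in practice. It assumes a hypothetical loan-approval scenario; the names (DeclinedApplicant, FalseNegativeLog, observed_outcome) are illustrative, not from any particular product. The point is simply that rejected cases get logged with their features and scores so their eventual real-world outcomes can be attached later.

```python
from dataclasses import dataclass
from datetime import date
from typing import Dict, List, Optional

@dataclass
class DeclinedApplicant:
    """One record in the False Negative log: an applicant the model rejected."""
    applicant_id: str
    model_score: float                       # score the model assigned at decision time
    features: Dict[str, float]               # model inputs, kept for later retraining
    decision_date: date
    observed_outcome: Optional[bool] = None  # filled in later (e.g., repaid a loan elsewhere)

class FalseNegativeLog:
    """Tracks rejected applicants so their eventual outcomes can be measured."""

    def __init__(self) -> None:
        self._records: List[DeclinedApplicant] = []

    def record_rejection(self, applicant: DeclinedApplicant) -> None:
        self._records.append(applicant)

    def record_outcome(self, applicant_id: str, succeeded: bool) -> None:
        """Attach a later-observed outcome (e.g., from a credit bureau) to a rejection."""
        for rec in self._records:
            if rec.applicant_id == applicant_id:
                rec.observed_outcome = succeeded

    def false_negatives(self) -> List[DeclinedApplicant]:
        """Rejected applicants who turned out to be good outcomes -- the missed TAM."""
        return [r for r in self._records if r.observed_outcome is True]
```

Once you can produce the list of rejected-but-successful cases, you have both a measure of the money left on the table and labeled examples to feed back into the model.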
AI Model Confirmation Bias and the Shrinking Total Addressable Market
AI model confirmation bias is the tendency for an AI model to identify, interpret, and present recommendations in a way that confirms or supports the AI model’s preexisting assumptions. AI model confirmation bias feeds upon itself, creating an echo chamber effect with respect to the biased data that continuously feeds the AI models. As a result, the AI model continues to target the same customers and the same activities thereby continuously reinforcing preexisting AI model biases. Your AI models aren’t reaching out to new customers or recommending new activities that differ from the ones that your AI model already knows about. And that means that over time, your Total Addressable Market (TAM) is shrinking (see Figure 2).
Figure 2: AI Model Confirmation Bias and the Shrinking Total Addressable Market (TAM)
While we can rely upon social norms and regulations to try to mandate that organizations build AI systems that address confirmation bias and learn from their False Negatives, maybe a more persuasive way to enact change is to appeal to the most powerful business incentive…money! Identifying and learning from an AI model’s False Negatives is one way to overcome AI model confirmation bias and ensure that your AI model is expanding your TAM by capturing new customers who may not look like your existing customers. Yes, the confirmation bias built into today’s AI systems is causing organizations to miss new monetization opportunities – new markets, new customers, new pricing, new audiences, and new channels.
To understand the ramifications of AI model confirmation bias on your organization’s Total Addressable Market, we need to jump into our wayback machine and revisit a concept from 2004 – monetizing the Long Tail.
Monetizing the Long Tail
The concept of the Long Tail was introduced by Chris Anderson in 2004 in his Wired Magazine article “The Long Tail”. The concept behind the Long Tail is that organizations don’t have to be dependent upon a few megahit products or services that appeal to large mass markets to be successful. In fact, organizations can be very successful through a business strategy of identifying, operationalizing and monetizing low volume or low demand product or customer segments.
A long tail distribution exists when a large percentage of your customers (50%+) are located away from the “head,” or central part, of the distribution curve. A long tail distribution arises when you segment your customer population across a large number of variables, dimensions, or characteristics, which increases the magnitude of the distribution skew (see Figure 3).
Figure 3: Monetizing the Long Tail
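If you want to check whether your own customer base has a long tail in the sense described above, a quick back-of-the-envelope calculation is enough. The sketch below is illustrative – the 20% “head” cutoff and the Pareto-distributed synthetic revenue are assumptions on my part, not numbers from this blog – but it shows how to measure what share of customers and revenue sit outside the head.

```python
import numpy as np

def long_tail_summary(customer_revenue: np.ndarray, head_fraction: float = 0.2) -> dict:
    """Summarize how much of the customer base and revenue sits outside the 'head'.

    head_fraction: share of customers (ranked by revenue) treated as the head.
    """
    ranked = np.sort(customer_revenue)[::-1]          # biggest customers first
    head_count = int(len(ranked) * head_fraction)
    tail = ranked[head_count:]
    return {
        "tail_customer_share": len(tail) / len(ranked),
        "tail_revenue_share": float(tail.sum() / ranked.sum()),
    }

# Illustrative, skewed (Pareto-like) customer base
rng = np.random.default_rng(7)
revenue = rng.pareto(a=1.2, size=10_000) * 100.0
print(long_tail_summary(revenue))
```

The larger the tail’s revenue share, the more upside there is in serving the customers your megahit-focused strategy (and your biased AI models) would otherwise ignore.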
In the blog “Monetizing the Long Tail of Customers”, I discussed how Big Data changes the economics of monetizing the long tail of customers in the following ways:
- More granular, individualized customer profiling models (e.g., developing Analytic Profiles at the individual customer and product category levels) allow organizations to better predict individual customers’ behavioral propensities in order to match messaging, offers, and product promotions to the exact needs of the individual customer
- Dramatically lowering the costs to target and serve customers via mobile or web apps that provide an easy-to-learn, easy-to-use, and easy-to-remember customer experience
- Delivery of highly personalized prescriptive recommendations and offers leveraging improved insight into customer and product propensities, behaviors, tendencies, and preferences
- Leveraging graph analytics to uncover customer relationships (e.g., strength and direction) that can be used to improve use of social and digital media to build advocacy and referrals that drive low-cost, loyalty-based customer engagement and sales
- Leveraging real-time analysis and location-based services to improve the timeliness of marketing messages and product offers—delivering offers at the time that the customer is ready to use or respond to that offer
The ramification of mastering the long tail is dramatic growth in your organization’s Total Addressable Market (see Figure 4).
Figure 4: Big Data-enabled Total Addressable Market (TAM)
Now you are probably asking why this conversation about the “Tail” customers is relevant to the discussion of Ethical AI. Because the ability to monetize the Tail customers – the False Negative customers that are missed due to AI model confirmation bias – is the key to motivating organizations to embrace Ethical AI in order to expand their TAM (and make more money!!).
Ah yes, the carrot of making more money is always more persuasive than the stick of ethical guilt.
Summary: The Economics of Monetizing False Negatives
Creating AI models that overcome confirmation biases takes a lot of upfront work, and some creativity. That work starts by 1) understanding the costs associated with the AI model’s False Negatives and 2) building a feedback loop where the AI model is continuously learning and adapting from those False Negatives.
The instrumentation and measuring of False Negatives – the job applicants you did not hire, the consumers to whom you did not make an offer, the student applicants you did not admit, the customers to whom you did not grant a loan – is particularly hard, but possible. But think about how an AI model learns: you label the outcomes, and the AI model continuously identifies and quantifies the variables and metrics that are predictors of those outcomes. If you don’t feed back to the AI model the labeled outcomes of the cases where the model predicted incorrectly (False Positives and False Negatives), then the model never learns, never adapts, and misses market opportunities.
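As a rough illustration of that feedback loop, here is a minimal sketch using a generic scikit-learn classifier (an assumption on my part – any model with a fit method would do): once the outcomes of previously rejected cases are observed, they are appended to the training data and the model is refit.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def retrain_with_false_negatives(model, X_train, y_train, X_rejected, observed_outcomes):
    """Append the observed outcomes of previously rejected cases and refit the model.

    X_rejected: feature rows the model scored as negative (e.g., declined applicants)
    observed_outcomes: labels observed later for those same rows (1 = good outcome)
    """
    X_augmented = np.vstack([X_train, X_rejected])
    y_augmented = np.concatenate([y_train, observed_outcomes])
    model.fit(X_augmented, y_augmented)
    return model

# Illustrative usage with synthetic data (stand-ins for real applicant features/outcomes)
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(500, 5)), rng.integers(0, 2, 500)
X_rejected = rng.normal(size=(50, 5))      # cases the model previously scored as negative
observed = rng.integers(0, 2, 50)          # their outcomes, observed after the fact

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
model = retrain_with_false_negatives(model, X_train, y_train, X_rejected, observed)
```

In a real system you would also track False Positives and correct for selection bias in which outcomes you get to observe, but the essential mechanic is the same: the rejected cases must re-enter the training data, or the model never learns from them.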
“If your AI model doesn't take into consideration the ultimate outcomes of the AI model's False Negatives, then confirmation bias in the AI model could set in and eventually the company's Total Addressable Market (TAM) could shrink to a point where the business might no longer be viable.”
Bottom-line: there are significant financial ramifications for not instrumenting, measuring, and learning from the AI model's False Negatives. And while government regulations and societal pressure might force organizations to address the confirmation biases built into their AI models, I prefer to appeal to a more basic need – money. Embracing Ethical AI and removing AI model confirmation biases can help organizations expand their Total Addressable Market, reach and monetize new customer segments, and make more M-O-N-E-Y!
Figure 5: Cartoon courtesy of Timo Elliott
Does Ethical AI have your attention now?