How to get across the AI chasm
Marco van Hurne
Architect of AI solutions that improve business efficiency and client engagement.
Have you ever walked into a restaurant thinking that a meal would change your life?
Or perhaps believed that buying a certain car would make you Max Verstappen?
Chances are the answer is no, and if you did, you were probably immensely disappointed with the outcome and chose not to hire that product again.
The Artificial Intelligence Chasm
With AI, that's how many people think. This creates a significant gap between what is promised and what is achievable, and that gap fosters frustration and despair. Yes, AI is impressive, but it's not that miraculous: it won't make you a race driver, and it won't change your life, at least not overnight. Failing to manage expectations can upset your customers and might even cost you your job.
This is the AI Chasm, and in 2024 most of us are trying to cross it.
To do so, we need to ensure expectations are met. This means that what the product is capable of doing matches what the customer thinks it is capable of doing. Throughout the past year, I've heard my customers say:
These statements stem from a significant misconception about this technology, and it will, unfortunately, take time for the general audience to understand it.
The problem is that most of us don't have that time: we need to delight customers today, not when they decide to study machine learning.
The responsibility now falls on an AI Product Manager to bridge this gap, ensuring alignment within the company and fostering transparent communication with customers. Honesty should prevail over the temptation to deceive, establishing a foundation for long-term success.
Today I'll be sharing three metaphors that I've used with my clients to help them understand the limitations of AI. I hope they'll help you too.
Artificial intelligence is not really intelligent
A black box with limited intelligence
AI's intelligence is limited; it functions comparably to a sleepwalking person, with computational power only a fraction of the human brain's. When it makes mistakes, it doesn't know why, nor can it explain why it did something. There has been extensive research into understanding why AI gives a certain output, but for most, if not all, uses it remains a black box.
A mere prediction machine
AI models are simply prediction machines. They're not sentient, nor do they know how to reason. In the case of ChatGPT, it is just extremely good at predicting the next words for any given context, and it is by doing this so well that many people are misled into believing it possesses actual intelligence.
Explaining this: I got good results using the sleepwalking reference, since most people could connect with the idea. Tie it to your use case for maximum gains. For a book retailer's recommender system, for example: "AI is like a sleepwalking book expert who knows exactly what each book is about but doesn't know why it recommended certain books to go with it."
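To make "prediction machine" concrete, here is a deliberately tiny Python sketch. It is not how ChatGPT works internally (real models are neural networks trained on enormous corpora); it only shows the core idea: generate text purely by continuing with whatever followed most often in the training data, with no understanding involved.

```python
from collections import Counter, defaultdict

# A tiny training corpus; a real LLM sees trillions of words
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the ball ."
).split()

# Count how often each word follows another (a bigram model)
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often in training."""
    return followers[word].most_common(1)[0][0]

# "Generate" text by repeatedly predicting the next word
word = "the"
output = [word]
for _ in range(6):
    word = predict_next(word)
    output.append(word)

print(" ".join(output))  # fluent-looking, but nothing is understood
```

Notice that the output reads grammatically yet can be nonsensical ("the dog sat on the dog sat"): exactly the sleepwalking-expert behaviour described above, just at a miniature scale.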
No one is perfect
Mistakes will happen
AI will make mistakes even a 5-year-old wouldn't, yet it will also do things a veteran couldn't, all within the same type of problem.
Why? It's difficult to give a straight answer, but AI doesn't learn the way we do. It doesn't make the same associations; it learns different patterns and infers from those. Some of those patterns are easy for a human to see and interpret, others aren't.
Performance shifts
It can work very well one moment and completely derail the next.
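One common mechanism behind both the odd mistakes and the sudden derailing is shortcut learning: the model latches onto a superficial pattern that happens to work on familiar data and breaks the moment the data shifts. A contrived Python sketch (invented numbers, not any specific system):

```python
import random

random.seed(0)

def make_data(n: int, shortcut_strength: float):
    """Synthetic spam data. With a high shortcut_strength, spam
    messages also happen to contain '!!!', so the two correlate."""
    data = []
    for _ in range(n):
        is_spam = random.random() < 0.5
        p_exclaim = shortcut_strength if is_spam else 1 - shortcut_strength
        has_exclaim = random.random() < p_exclaim
        data.append((has_exclaim, is_spam))
    return data

# A "model" that learned the shortcut: exclamation marks mean spam
def shortcut_model(has_exclaim: bool) -> bool:
    return has_exclaim

def accuracy(model, data) -> float:
    return sum(model(x) == y for x, y in data) / len(data)

familiar = make_data(10_000, shortcut_strength=0.95)  # correlation holds
shifted = make_data(10_000, shortcut_strength=0.50)   # correlation gone

print(f"familiar data: {accuracy(shortcut_model, familiar):.0%}")  # ~95%
print(f"shifted data:  {accuracy(shortcut_model, shifted):.0%}")   # ~50%
```

Nothing about the model changed between the two runs; only the world did. That is why monitoring matters even when a system seems stable.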
It will never be perfect
The self-driving industry has been fighting this one for years. We have spent billions trying to maximize performance, but it just keeps falling short of the "ideal" value. I wouldn't be surprised if, in a few years, we realize that the way we were approaching AI could never have achieved perfect results.
Explaining this: AI is akin to a forever-young puppy learning to ring a bell to go outside: eager and often accurate, but prone to lapses. Like this eternally learning pup, AI performs well in familiar settings but falters in new ones, as if seeing the bell for the first time in a stranger's home. This perpetual state of learning keeps it brilliant yet fallible, capable of astonishing feats within known confines while still occasionally missing the simple cue of a different bell in an unfamiliar environment.
It’s not that straightforward
Harder as you improve
The higher the AI’s current performance, the harder it will be to improve it.
It's generally easy to reach around 80% performance. Above that, you will face long-tail issues (edge cases) and other problems that make each additional percentage point feel impossible.
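A quick back-of-the-envelope calculation shows why: each "small" accuracy gain requires eliminating an ever larger share of the errors you have left. The milestones below are illustrative numbers, not benchmarks from any particular system.

```python
# Accuracy milestones: each step looks like a small gain, but the
# share of remaining errors you must eliminate keeps growing.
milestones = [0.80, 0.90, 0.95, 0.99, 0.999]

for current, target in zip(milestones, milestones[1:]):
    errors_now, errors_left = 1 - current, 1 - target
    cut = 1 - errors_left / errors_now
    print(f"{current:.1%} -> {target:.1%}: eliminate {cut:.0%} "
          f"of the remaining errors")
```

Going from 99% to 99.9% means hunting down nine out of every ten remaining failures, and those failures are precisely the rare, messy edge cases.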
Data is king but more data isn’t always better
Without good data, there can’t be good models. Machine Learning is only as good as the data it gets.
If it gets good data, we might get good results; if it gets bad data, we will definitely get bad results.
Prioritize getting the right data at all costs. That's not always straightforward, especially in cases where the "tail" is long.
Good data isn't just the data used to train your models. Production performance data can be just as important, since it tells you in which direction to go. Depending on your use case, this data can be excruciatingly hard to obtain. A good AI PM needs to lobby for good feedback loops, or even consider changing workflows to allow for production feedback.
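What such a feedback loop can look like in its simplest form is sketched below. This is a hypothetical skeleton (the names and the serving path are placeholders, not a specific product): log every prediction, attach the real outcome whenever one arrives, and watch a rolling accuracy so drift shows up before customers complain.

```python
from collections import deque

class FeedbackLoop:
    """Minimal production feedback log: pair predictions with real
    outcomes and track rolling accuracy over recent labelled cases."""

    def __init__(self, window: int = 1000):
        self.pending = {}                   # request_id -> prediction
        self.results = deque(maxlen=window)

    def log_prediction(self, request_id: str, prediction) -> None:
        self.pending[request_id] = prediction

    def log_outcome(self, request_id: str, actual) -> None:
        # Ground truth may arrive hours or days after the prediction
        prediction = self.pending.pop(request_id, None)
        if prediction is not None:
            self.results.append(prediction == actual)

    def rolling_accuracy(self):
        return sum(self.results) / len(self.results) if self.results else None

# Usage: wrap this around your model's serving path
loop = FeedbackLoop(window=500)
loop.log_prediction("req-1", "approve")
loop.log_outcome("req-1", "approve")   # reviewer later confirms: a hit
loop.log_prediction("req-2", "approve")
loop.log_outcome("req-2", "reject")    # a miss: fuel for the next iteration
print(f"rolling accuracy: {loop.rolling_accuracy():.0%}")  # 50%
```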
Explaining this: AI improvement is like climbing a mountain: the higher you go, the thinner the air and the tougher each step becomes. Initially, the climb is steep but manageable. Beyond a certain point, the terrain becomes treacherous and each additional step seems impossible. Sometimes, reaching the 90% plateau offers a view beautiful enough for your purpose.
In this journey, data is the compass and the provisions, essential for finding your path and sustaining the climb. Yet more isn't always better: carrying too much weight can slow you down. Production performance data is akin to having the correct map, so you know which tracks to follow.
When to manage expectations
Bonus tip
Meeting expectations is the foundation for long-term success. But to give that "five-star" feeling, that over-the-top experience, the perceived value should always be lower than what is actually delivered. In other words, you should always have something "in your pocket" to delight customers with. That's what will make them come back again and again. So find something in your product that is a delighter and deliver it to clients without them expecting it. A delighter is low effort but has high perceived value.
This is an old tactic, but few companies manage to do it well. Most luxury brands rely on it to keep clients happy. Their prices are high, but customers always leave the exchange feeling they got more than they paid for, and therefore they come back for more.
In product management, this gave rise to a prioritization technique called the Kano Model, which shows how delighters bring very high satisfaction relative to their implementation cost.
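For those who want to operationalize it: the Kano method classifies a feature by asking customers how they would feel if the feature were present (the functional question) and if it were absent (the dysfunctional question), then looks the answer pair up in a standard evaluation table. A compact Python sketch of that table (the example answers are invented):

```python
# The standard 5-point Kano answer scale
ANSWERS = ["like", "expect", "neutral", "live_with", "dislike"]

# Standard Kano evaluation table. Rows: functional answer ("how would
# you feel if the feature IS there?"). Columns: dysfunctional answer
# ("...if it is NOT there?").
# A=Attractive (delighter), O=One-dimensional (performance),
# M=Must-be, I=Indifferent, R=Reverse, Q=Questionable
KANO_TABLE = {
    "like":      ["Q", "A", "A", "A", "O"],
    "expect":    ["R", "I", "I", "I", "M"],
    "neutral":   ["R", "I", "I", "I", "M"],
    "live_with": ["R", "I", "I", "I", "M"],
    "dislike":   ["R", "R", "R", "R", "Q"],
}

def kano_category(functional: str, dysfunctional: str) -> str:
    return KANO_TABLE[functional][ANSWERS.index(dysfunctional)]

# Customers would like having the feature but can live without it:
print(kano_category("like", "live_with"))   # "A" -> a delighter
# Its presence is expected and its absence would be hated:
print(kano_category("expect", "dislike"))   # "M" -> a must-be
```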
End note
Remember, this is an ongoing battle. Despite the daily magic-like social media posts, AI is nowhere near perfect. Successfully managing expectations will help you cross this chasm, and metaphors can go a long way. Your company will shift from selling empty promises to selling tech that works. Combine that with delighters, and you'll keep your customers happy. At the center of all this are Product Managers; they are the key.
Well, that's a wrap for today. Tomorrow, I'll have a fresh episode of TechTonic Shifts for you. If you enjoy my writing and want to support my work, feel free to buy me a coffee.
Think a friend would enjoy this too? Share the newsletter and let them join the conversation. LinkedIn appreciates your likes by making my articles available to more readers.
Signing off - Marco