How to get across the AI chasm

Have you ever walked into a restaurant thinking that a meal would change your life?

Or perhaps believed that buying a certain car would make you Max Verstappen?

Chances are: “no” — and if you did, you were probably immensely disappointed with the outcome and chose not to hire that product again.


Before we start!

If you like this topic and you want to support me:

  1. Comment on the article; LinkedIn appreciates that, and it will really help spread the word
  2. Connect with me on LinkedIn
  3. Subscribe to TechTonic Shifts to get your daily dose of tech
  4. Check out my new book: the Machine Learning Book of Knowledge
  5. Have a crazy project idea and want to share it? Book 30 minutes: https://calendly.com/marco_van_hurne/happy-hour-hacks
  6. If you like my writing and want to support my work, buy me a coffee


The Artificial Intelligence Chasm

With AI, that’s how many people think. This creates a significant gap — a distance between what’s promised and what’s achievable. In turn, this gap fosters an abyss of frustration and despair. Yes, AI is impressive, but it’s not that miraculous: it won’t make you a race driver, and it won’t change your life, at least not overnight. Failing to manage expectations can upset your customers and might even cost you your job.

This is the AI Chasm, and in 2024 most of us are trying to cross it.

To do so, we need to ensure expectations are met. This means that what the product is capable of doing is what the customer thinks it is capable of doing. Throughout the past year I’ve heard my customers say:

  • “I trust AI, not humans”
  • “I don’t understand how the model can’t detect this”

These statements stem from a significant misconception of this technology, one that will take time for the general audience to overcome.

The problem is that most of us don’t have that time; we need to delight customers today, not when they decide to study Machine Learning.

The responsibility now falls on an AI Product Manager to bridge this gap, ensuring alignment within the company and fostering transparent communication with customers. Honesty should prevail over the temptation to deceive, establishing a foundation for long-term success.

Today I’ll be sharing three metaphors that I’ve used with my clients to help them understand the limitations of AI. I hope they’ll help you too.

Intelligence is not really intelligent

A black box with limited Intelligence

AI’s intelligence is limited; it functions comparably to a sleepwalking person, with computational power only a fraction of the human brain’s. When it makes mistakes, it doesn’t know why, nor can it explain why it did something. There has been extensive research into why AI gives a certain output, but for most (if not all) uses it remains a black box.

  • AI’s advantage lies in its singular focus, contrasting with humans who juggle various tasks like eye-hand coordination, speech, assimilation, and sensory perception, thus requiring additional computational power.

A mere prediction machine

AI models are simply prediction machines. They’re not sentient, nor do they know how to reason. ChatGPT, for instance, is just extremely good at predicting the next words for any given context, and it does this so well that many people are misled into believing it possesses actual intelligence.
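A toy sketch can make “prediction machine” concrete. The model below is nothing like ChatGPT’s actual architecture (which is a neural network with billions of parameters); it’s a deliberately crude bigram counter over a made-up corpus, shown only to illustrate the core idea: pick the next word purely from patterns seen before.

```python
from collections import Counter, defaultdict

# Toy corpus, invented for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word followed each word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    # Return the most frequently observed follower -- pure pattern
    # matching, no understanding of cats, mats, or fish.
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat" -- it followed "the" most often
```

There is no understanding anywhere in that loop, only counting; large language models do a far more sophisticated version of the same thing, at an incomparably larger scale.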

Explaining this: I got good results with the sleepwalking reference, since most people could connect with the idea. Tie it to your use case for maximum effect. For a book retailer’s recommender system, for example: “AI is like a sleepwalking book expert who knows exactly what each book is about but doesn’t know why it recommended certain books to go with it.”

No one is perfect

Mistakes will happen

AI will make mistakes even a 5-year-old wouldn’t, yet it will also do things a veteran couldn’t, all on the same type of problem.

  • Example: it can classify seemingly impossible images and can also miss very simple ones.

Why? It’s difficult to give a straight answer. But AI doesn’t learn the way we do, and it doesn’t make the same associations. It learns different patterns and infers based on those, and those patterns may or may not line up with what a human finds easy to interpret.

Performance shifts

It can be working very well one moment and completely derail the next.

  • This is especially true for data drift. A very “intelligent” model can start to completely fail its predictions overnight. This happened when COVID-19 hit: it completely changed the patterns of human life, and models were neither trained on those new patterns nor able to understand them. This also proves that AI is only as good as its data.
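To make the drift idea tangible, here is a minimal sketch of one simple drift check: comparing the average of a feature in production against training, in units of the training standard deviation. The scenario (daily store visits collapsing during a lockdown) and the alert threshold of 3 are invented for illustration; real monitoring uses more robust statistics and many features at once.

```python
import random
random.seed(0)

def mean_shift(train_values, live_values):
    # How far the live mean has moved from the training mean,
    # measured in training standard deviations.
    n = len(train_values)
    mean = sum(train_values) / n
    var = sum((v - mean) ** 2 for v in train_values) / n
    std = var ** 0.5
    live_mean = sum(live_values) / len(live_values)
    return abs(live_mean - mean) / std

# "Pre-COVID" daily store visits vs. one lockdown week (synthetic data).
train = [random.gauss(100, 10) for _ in range(1000)]
live = [random.gauss(40, 10) for _ in range(50)]

shift = mean_shift(train, live)
if shift > 3:  # more than 3 training std-devs away: investigate
    print(f"Drift alert: live mean is {shift:.1f} std-devs from training")
```

The model itself never raises its hand when the world changes; a check like this, fed by production data, is what catches the overnight failure.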

It will never be perfect

The self-driving industry has been fighting this one for years. We have spent billions trying to maximize performance, but it keeps falling short of the “ideal” value. I wouldn’t be surprised if, in a few years, we realize that the way we were approaching AI could never have achieved perfect results.

Explaining this: AI is akin to a forever-young puppy learning to ring a bell to go outside — eager and often accurate, but prone to lapses. Like this eternally learning pup, AI performs well in familiar settings but falters in new ones, as if seeing the bell for the first time in a stranger’s home. This perpetual state of learning ensures it remains brilliant yet fallible, capable of astonishing feats within known confines while still occasionally missing the simple cue of a different bell in an unfamiliar environment.

It’s not that straightforward

Harder as you improve

The higher the AI’s current performance, the harder it will be to improve it.

It’s generally easy to reach around 80% performance. Above that, you will face long-tail issues (edge cases) and other problems that will make each percentage increase seem impossible.

  • Because of this, it may not be worth trying to achieve higher performance. Sometimes 90% performance is good enough for the use case and workflow, and trying to achieve 99% might be the difference between going down a rabbit hole and investing in other projects with a higher Return On Investment (ROI).
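A little arithmetic shows why those last points are so expensive: moving from 90% to 99% accuracy doesn’t mean “9% more work”, it means eliminating 90% of the errors you still make. A quick sketch (the helper name is mine):

```python
def errors_to_remove(current_acc, target_acc):
    # Fraction of the *remaining* errors that must be eliminated
    # to move from current_acc to target_acc.
    current_err = 1 - current_acc
    target_err = 1 - target_acc
    return 1 - target_err / current_err

# 90% -> 99% accuracy: errors must drop from 10% to 1%.
print(f"{errors_to_remove(0.90, 0.99):.0%} of remaining errors")  # prints "90% of remaining errors"
```

And the errors left at that point are the long-tail edge cases, which are exactly the ones that are hardest to fix.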

Data is king but more data isn’t always better

Without good data, there can’t be good models. Machine Learning is only as good as the data it gets.

If it gets good data, we might get good results; if it gets bad data, we will definitely get bad results.

Prioritize getting the right data at all costs. That isn’t always straightforward, especially in cases where the “tail” of edge cases is long.

Good data isn’t just the data used to train your models. Production performance data can be equally important, since it tells you which direction to go. Depending on your use case, this data can be excruciatingly hard to obtain. A good AI PM needs to lobby for good feedback loops, or even consider changing workflows to allow for production feedback.

Explaining this: AI improvement is like climbing a mountain — the higher you go, the thinner the air and the tougher each step becomes. Initially, the climb is steep but manageable. Beyond a certain point, the terrain becomes treacherous and each additional step seems impossible. Sometimes, reaching the 90% plateau offers a view beautiful enough for your purpose.

In this journey, data is the compass and provisions, essential for finding your path and sustaining the climb. Yet more isn’t always better, just as carrying too much weight slows you down. Production performance data is akin to having the correct map to know which tracks to follow.

When to manage expectations

  • Conduct regular presentations at all-hands or company events to reinforce key points. Be transparent about capabilities and limitations, with a special focus on customer success and sales people.
  • Address instances where misconceptions arise; educate people on the realities of AI capabilities and understand the origins of the misinformation. Do this empathetically and often.
  • Involve stakeholders early and often.
  • Compare your model's performance with benchmarkable alternatives (if possible) — this will help you prove that no one else is perfect either.

Bonus tip

Meeting expectations is the foundation for long-term success. But to give that “5-star” feeling, that over-the-top experience, the perceived value going in should always be lower than what is actually delivered. That means you should always have something “in your pocket” to delight customers with; that’s what will make them come back again and again. So find something in your product that is a delighter and deliver it to clients without them expecting it. A delighter takes low effort but has high perceived value.

This is an old tactic, but few companies manage to do it well. Most luxury brands rely on it to keep clients happy. Their prices are high, but customers always leave the “trade” thinking they got more than they paid for, so they come back for more.

In Product, this gave rise to a prioritization technique called the “Kano Model”, which shows how delighters bring very high satisfaction relative to their implementation cost.

End note

Remember, this is an ongoing battle. Despite daily magic-like social media posts, AI is nowhere near being perfect. Successfully managing expectations will help you cross this chasm, and metaphors can go a long way. Your company will shift from selling empty promises to selling tech that works. Combine it with delighters, and you’ll keep your customers happy. At the center of all this are Product Managers; they are the key.


Well, that's a wrap for today. Tomorrow, I'll have a fresh episode of TechTonic Shifts for you. If you enjoy my writing and want to support my work, feel free to buy me a coffee.

Think a friend would enjoy this too? Share the newsletter and let them join the conversation. LinkedIn appreciates your likes by making my articles available to more readers.

Signing off - Marco


