Why Wile E. Coyote is the AI risk-assessor we need right now.
Did you grow up with the misadventures of poor old Wile E. Coyote? Always running towards a refreshing oasis that's really a gap between cliffs, Road Runner meep-meeping off into the desert, all self-assured smugness? Well, Wile E. would often hallucinate. My metaphor is simple. But it has some merit in explaining the risks of simplistic AI adoption.
AI hallucinations are one of the biggest problems with this agenda-grabbing tech. The challenge is vast. But it's not getting the attention it deserves. And that's a problem, because over-eager adoption of tech without checks and balances puts brands at risk.
Brands that pride themselves on quality, authoritative work risk over-relying on today's generative AI. If your words matter, if the quality and authority of your brand are valuable, listen to OpenAI. Keep investing in humans alongside your journey into AI. Don't bake hallucinations into your brand.
Generative AI 101
Tools such as ChatGPT are generative AI. Their outputs are possible because Large Language Models (LLMs) are trained on huge amounts of the internet — newspapers, scientific journals, and maybe even one of my (or your!) old writing projects. Engineers build models that combine words, images, and sounds to make — to generate — something new.
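To see the principle at its simplest, here's a toy sketch in Python — an illustrative bigram model, nothing like a real LLM's neural network — that "learns" which word follows which and then generates new text:

```python
# Toy illustration of the core idea behind generative language models:
# learn which word tends to follow which, then sample to generate new text.
# Real LLMs use neural networks trained on billions of documents; this
# bigram model is only a sketch of the principle.
import random
from collections import defaultdict

corpus = (
    "the coyote chases the road runner . "
    "the road runner escapes the coyote . "
    "the coyote sees an oasis in the desert ."
).split()

# Count which word follows which in the training text.
followers = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current].append(nxt)

# Generate: start from a word and repeatedly sample a plausible next word.
word = "the"
output = [word]
for _ in range(8):
    word = random.choice(followers[word])
    output.append(word)

print(" ".join(output))
```

The model happily produces fluent-looking sequences that never appeared in its training data. Scale that idea up to billions of parameters and most of the written internet, and you have today's LLMs: fluent, plausible, and with no built-in notion of "true".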
AI Hallucinations
Back to Wile E. The heat of the desert, his desire to win, and his unending hunger for Road Runner would mix and mash into hallucinations.
If you're in a desert, your hallucination might be some palm trees and fresh water. Or you might hallucinate your bestie out of the corner of your eye, when you know she's hard at work — and her office is 5,000 miles and three time zones away.
Like the oasis mirages in your Wile E. Coyote Saturday-morning cartoons, an AI hallucination is when an LLM sees various bits of data and smooshes them together because they look about right. Even when it's wrong.
Or, according to Notion, AI Hallucinations are when “artificial neural networks create images that are not actually present in the data.”
The risks are high for Wile E. They're high for our businesses and our brands, too. If we're chasing recession-proofing tactics and exciting new tech at full speed, we run the risk of missing our own hallucinations. And ending up with our proverbial faces in the ditch. Errors, mistakes and bias, baked harder into our brands.
These challenges aren't simply AI-generated mistakes. AI tech from companies such as Microsoft and OpenAI uses language that suggests authority — that encourages users to trust what they see.
When information is packaged in discrete, authoritative chunks, accepting what we're served is frictionless. And that's where the problem lies.
My own private AI Hallucination
Like the human data the LLM is trained on, AI can be wrong. Here's what happened to me:
Aside from my hilarious use of all caps, you can see the two problems.
And herein lies an essential problem in estimating the risks of AI hallucinations.
Find the real risks
ChatGPT's words are simple, but misleading: "The episode you are referring to is …"
“Is” is a verb. A verb is an action word. It’s doing. It’s active. And, as every good copywriter knows, using “active” words is a current trend because it helps the reader to buy what you’re selling.
OpenAI encourages users to put faith in its outputs. That's a choice. Arguably, it's a poor choice. But it's still a choice.
The model then tells me about the attorney and the story. But Freakonomics Season 6, Episode 1 is actually 'Ten Ideas to Make Politics Less Rotten'.
But I'd trusted GPT, and I wanted the pull-quote from the episode. I spent half an hour trying to find it, falling down a rabbit hole of Wall Street and Washington federal litigators.
So a friend asked GPT again, this time checking Hidden Brain. The episode it pointed to, 'Creating God' (Season 10, Episode 1), also has nothing to do with American prosecutors.
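For what it's worth, part of the check I did by hand can be scripted. Here's a minimal sketch, assuming the show publishes a standard RSS feed — the feed URL below is a placeholder, not a real address:

```python
# Pull a podcast's RSS feed and search episode titles for the topic of the
# quote you're trying to verify. Placeholder URL: swap in the show's feed.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/podcast/feed.xml"  # hypothetical feed

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

# In standard RSS, each episode is an <item> with a <title>.
titles = [item.findtext("title", "") for item in tree.iter("item")]

matches = [t for t in titles if "prosecutor" in t.lower()]
print(matches or "No episode title mentions prosecutors. Keep digging.")
```

It wouldn't catch everything — the quote might live in a transcript, not a title — but it beats half an hour down a rabbit hole.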
Imagine that error applied at scale. Errors in your brand. In employment contracts or compliance training.
And imagine you're a writer who used to have a day to research and write a 1,000-word piece of rich content — but now you're given an hour to fact-check the quotes and statistics in a 1,000-word blog post. Checking for bias. Adding SEO (while it exists) and proofing for tone of voice. It's simply not possible to do it all, and get it all right. For brands that value their equity and their customers' time, it's not a risk worth taking.
Programmers could make a simple tweak: "The episode you are looking for could be …" This change is more honest. It reminds users to fact-check outputs — to keep engaging their critical thinking — as they work.
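As a toy illustration of that tweak — my own sketch, not OpenAI's actual implementation — a post-processing step could soften over-confident stock phrases before they reach the user:

```python
# Rewrite over-confident stock phrases into hedged, checkable ones.
# The phrase list is illustrative; a real product would handle this in the
# system prompt or during fine-tuning rather than with regex patches.
import re

CONFIDENT_TO_HEDGED = {
    r"\bThe episode you are referring to is\b":
        "The episode you are looking for could be",
    r"\bThe answer is\b": "The answer might be",
}

def hedge(reply: str) -> str:
    """Soften confident claims so users remember to fact-check."""
    for pattern, replacement in CONFIDENT_TO_HEDGED.items():
        reply = re.sub(pattern, replacement, reply)
    return reply

print(hedge("The episode you are referring to is Season 6, Episode 1."))
# -> "The episode you are looking for could be Season 6, Episode 1."
```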
AI is part of a toolkit. It's not an end-to-end solution.
Friction creates electricity
And yet, companies from BT to IBM are announcing AI-related lay-offs. The cost of human knowledge is set to become radically cheaper. But let's avoid euphemisms: generative AI makes it easier for companies to save money by automating human work.
But not all work should be automated.
Engineers might see the writer’s insistence on the importance of words as a distraction from solving a company’s problem. But removing friction isn’t a good idea. It’s a false paradise.
Not every human process can or should be completely optimised. Friction isn’t always a problem. Friction creates electricity. Pearls. Brilliance and beauty are created in the mess and muddle of friction.
The beauty and brilliance of engineering isn't up for debate. Engineers help us to grow. Engineers created printing presses, which made it possible for books and words to be available to the many, rather than just the few.
Engineers created printing presses, so paper money was possible. Then, we could trade across our countries, not just our towns. Engineers created tin cans so we could preserve food outside of harvest season — part of the puzzle that allowed humans to reach the Arctic (acknowledging bias: shout out to great-great-great-great granddad Donkin for mechanising the tin-canning process, that printing press, and for the giant family forehead).
But the obsession with efficiency and removing humans raises the question: what's all this in service of? In this capitalist world, where will we practise and nurture our skills? How will the many benefit, and not just the few? The partnership of engineering, ethics and people increases in importance. But when the biggest tech firms are reducing their ethics teams, we have a right to be worried.
AI needs to be the tool that helps humans to deliver exceptional work. And employers have the responsibility to know the opportunities and the limitations of this truly incredible tech.
Everything is capable of error. Slipping into a mindset where our reflex is to trust outputs over rigorous critical thinking could bake errors into every level of our thinking.
Human first; AI as helper
In bringing AI into your teams and workplaces, it’s essential to retain the human. Holding space for friction. Using the potential of AI as your ‘starter for 10’ while valuing and investing in the inherent knowledge of your workforce of staffers and freelancers.
It's in relationships. In people coming together. Creating. Disagreeing. Debating. Building something new together.
You need people who bring critical thought and confidence to use these tools without deference. You need to create systems that prioritise humans while supporting their development. Otherwise you run the risk of baking hallucinations into your business, one blog post, one training, one deck at a time.
Would you rather be the Road Runner, using tools calmly and strategically to reach your goal? Or would you rather be Wile E., running so fast you don't check that the ground underneath your feet is all it seems to be? The choice is yours.