Usefully Wrong – The Problem with Generative AI
Image source: OpenAI

For the past decade, the tech world has been in a desperate search for the “next big thing.” PCs, the web, smartphones, and the cloud have all sailed past their hype curves and settled into commodities; new technology is needed to excite the consumer and liberate that sweet, sweet ARR.

For a while, we thought maybe it was Augmented Reality, but Google only succeeded in making “Glassholes,” and Microsoft’s HoloLens was too clunky to change the world. Then we had 2022’s simultaneous onslaught of “metaverse” and “crypto,” both co-opted terms leveraged to describe realities that proved entirely underwhelming: crypto crashed, and the metaverse was just Mark Zuckerberg’s latest attempt at relevance under a veneer of Virtual Reality. (Hey Mark, the 90s called and wanted you to know that VR headsets sucked then, and still suck now!)

But 2023 brings a new chance for a dystopian future ruled by technology ripe to abuse the average user. That’s right, chat is back, and this time it’s powered by an algorithm prone to hallucinations!

The fact is, we couldn’t be better primed to accept convincing replies from a text-spouting robot that can’t tell fact from fiction: we’ve been consuming this kind of truthiness from our news media for the past 15 years! And this tech trend seems so great that two of the biggest companies are pivoting themselves around it…

Microsoft, while laying off thousands of employees from unrelated efforts, is spending billions with OpenAI to embed ChatGPT in all its major platforms. Bing always wanted to be an “answers engine” instead of a search engine; now it can give “usefully wrong” answers in full sentences! Developers can subscribe to OpenAI access right from their cloud developer portal. Teams (that unholy union of Skype and SharePoint) can leverage AI to listen to your meetings and helpfully summarize them. And who wouldn’t want a robot to write your next TPS report for you in Word, or spruce up your PowerPoints?

Google, which had been more cautious and thoughtful in its approach, is now full steam ahead trying to catch up. Google’s Assistant, already bordering on invasive and creepy, has been reorganized around Bard, their less-convincing chat AI that still manages to be confidently incorrect with startling frequency.

The desperation is frankly palpable: the tech world needs another hit, so ready or not, Large Language Models (LLMs) are here!

That everyone on the inside is fully aware this technology is not done baking is entirely lost on the breathless media, and on a new generation of opportunistic start-ups looking to capitalize on a new wave of techno-salvation. GPT-4 really is impressive in its ability to form natural-sounding sentences, and most of the time it does a good job of drawing the correct answer out of its terabytes of training material. But there’s real risk here when we conflate token selection with intelligence. The AI is responding to itself as much as to the user, trying to pick the next best word to put into its reply: it’s not trying to pick a correct response, just one that sounds natural.
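To make “picking the next best word” concrete, here is a deliberately tiny sketch: a toy bigram model (a hypothetical stand-in, nothing like a real LLM’s neural network over subword tokens) that always chooses the statistically most common continuation from its training text. If the training data says the moon is made of cheese twice and rock once, plausibility wins over truth:

```python
# Toy next-word picker: like an LLM, it chooses whatever word most often
# followed the current one in its training text. It optimizes for
# "sounds natural", with no notion of factual correctness.
# (Illustrative sketch only; real models are vastly more sophisticated.)

TRAINING_TEXT = (
    "the moon is made of rock . "
    "the moon is made of cheese . "
    "the moon is made of cheese . "
)

def build_bigram_counts(text):
    """Count how often each word follows each other word."""
    counts = {}
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts.setdefault(prev, {})
        counts[prev][nxt] = counts[prev].get(nxt, 0) + 1
    return counts

def next_word(counts, word):
    """Pick the most likely continuation -- plausible, not necessarily true."""
    followers = counts.get(word, {})
    return max(followers, key=followers.get) if followers else "."

counts = build_bigram_counts(TRAINING_TEXT)
sentence = ["the"]
while sentence[-1] != ".":
    sentence.append(next_word(counts, sentence[-1]))
print(" ".join(sentence))  # -> the moon is made of cheese .
```

The model confidently completes the sentence with “cheese” simply because that answer appeared more often in its training data, which is the mechanism behind “usefully wrong” in miniature.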

Like most technology, the problem is that the average user can’t tell when they’re being abused. YouTube users can’t tell when an algorithm is taking them down a dark path; they’re just playing the next recommended video. Facebook users can’t tell when they’re re-sharing a false narrative; they just respond to what appears in their feed. And the average ChatGPT user isn’t going to fact-check the convincing-sounding response from the all-intelligent robot. We’ve already been trained to accept the vomit that the benevolent mother-bird of technology force-feeds us, while we screech for more…

[Image caption: I have to prove I'm a human before I'm allowed to talk to an AI. Does that count as irony?]

I know, that sounds harsh. I’m not really saying that ChatGPT, Bard, and other generative AIs should go away; the genie is out of the bottle, and there’s nothing to be done about that. I’m saying that we shouldn’t approach this technology evolution with awe, wonder, and ignorance, rushing to shove it into every user experience. We need to learn the lessons of the past few decades, carefully think through the unintended consequences of yet another algorithm in our lives, spend time iterating on its flaws, and above all treat it not as some kind of magic but as a tool that, used intelligently, might help accelerate some of our work.

Neil Postman’s 1992 book “Technopoly” has the subtitle “The Surrender of Culture to Technology.” In it, he asserts that when we become subsumed by our tools, we are effectively ruled by them. LLMs are potentially useful tools (assuming they can be taught the importance of accuracy), but already we’re speaking of them as if they are a new form of intelligence — or even consciousness. A wise Jedi once said “the ability to speak does not make you intelligent.” The fact that not even the creators of ChatGPT can explain exactly how the model works doesn’t suggest an emergence of consciousness — it suggests we’re wielding a tool that we do not fully understand, and should thus exercise caution in its application.

When our kids were little, we enjoyed camping with them. They could play with and learn from all the camping tools and equipment except the contents of one red bag, which contained a hatchet, a sharp knife, and a lighter; we called it the “Danger Bag” because it was understood that these tools needed extra care and consideration.

LLMs are here. They’re interesting, they have the potential to help us, and to impact the economy: already new job titles like “Prompt Engineer” are being created to figure out how best to leverage the technology. Like any tool, we should harness it for good, but we should also build safeguards against its misuse. Since the best analogies we have for technology like this have proved harmful in ways we didn’t anticipate, perhaps ChatGPT should start in the “Danger Bag” and prove its way out from there…

Ryan Cahalane

Managing Partner Axiom | Industry 4.0 | Digital Transformation | Manufacturing Technology | Advisor | Board Member

1y

i'm still snickering at "Teams (that unholy union of Skype and SharePoint)"...

Ryan Cahalane

Managing Partner Axiom | Industry 4.0 | Digital Transformation | Manufacturing Technology | Advisor | Board Member

1y

Well written and fully agree… wait, or should I ask if chatGPT helped you write this? trying to be a person these days is sure complicated

Great article Jonathan! It’s vital that we look beyond the hype and the breathless ‘screeching,’ and the relentless search for the next wundertech - and understand how they’re reshaping our thinking…
