Generative AI: Do We Need the Ultimate Bullsh!t Machine?
Jason Bloomberg
Managing Director at Intellyx | Top Digital Transformation, Cloud Native Computing, Low-Code, and DevOps Influencer
I’ve been through a lot of explosive tech innovations in my career. The World Wide Web. eBusiness. Web Services. Cloud Computing.
But one explosion dwarfs all the others: generative AI, and its mascot ChatGPT.
Why is generative AI so explosive? Paradigm shift? Game changer? Transformative tech?
Yada yada yada.
The REAL reason ChatGPT hit the market with such a gut-clearing explosion is that it’s bullsh!t.
Not only that, it’s the best bullsh!t we’ve ever seen. And what do we all want more than some really, really good bullsh!t?
It’s time to call bullsh!t on generative AI.
Telling Sh!t from Shinola
How do you make really good bullsh!t? You come up with AI that can generate plausible results rather than accurate ones.
Plausibility gives bullsh!t its shine. The more plausible, the more people like it and buy in. It can write essays! Generate marketing copy! Pass bar exams! Why? Because the output is plausible.
But underneath, generative AI is just stringing sentences together with no clue whatsoever what they mean or what it’s saying. It’s a bullsh!t generator, full stop.
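Don’t take my word for it. Here’s a toy sketch in Python – my own illustration, with made-up numbers, not any vendor’s actual code – of the next-token sampling loop at the heart of these models. Notice what’s missing: any notion of truth.

import random

# Hypothetical next-token probabilities after the prompt "The moon is made of"
# (numbers invented purely for illustration).
next_token_probs = {
    "rock": 0.55,       # accurate, and plausible
    "cheese": 0.30,     # inaccurate, but plausible -- the sampler doesn't care
    "regolith": 0.10,
    "cardboard": 0.05,  # implausible, so rarely chosen
}

def sample_next_token(probs):
    """Pick the next token weighted by plausibility. Truth never enters into it."""
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

print("The moon is made of", sample_next_token(next_token_probs))

Real models juggle tens of thousands of tokens and billions of weights, but the loop is the same: weigh, sample, repeat. Plausible? Always. True? Only by accident.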
Agree? Disagree? It doesn’t matter. Questioning whether or not generative AI’s output is bullsh!t misses the point. The real questions are: why do we like the bullsh!t so much? And even more importantly, is there a problem with accepting so much of it?
Why We Love Bullsh!t So Much
Humans, in fact, love to be fooled. Whenever we’re faced with a situation where bullsh!t can substitute for reality, we’re all in.
Take pareidolia, for example – the phenomenon where humans see faces or other patterns in random things. Whether it’s the Man in the Moon or Jesus in your morning toast, we love to see specific patterns where there aren’t any.
And then there’s the ELIZA effect. ELIZA was a chatbot from 1966 – read that again, fμcking 1966 – that simulated a psychotherapist’s responses. People absolutely fell in love with its bullsh!t.
Speaking of bullsh!t, how about the Turing Test itself? Poor Turing. Not only did Her Majesty’s government chemically castrate the poor fellow, but techies have been misunderstanding the point of Turing’s Imitation Game ever since he proposed it.
Turing simply wanted to posit a measurable quantity that could substitute for machine intelligence, which he presumed was too difficult to measure. But ever since, people have recast the Turing Test as a challenge to see which AI could produce the best bullsh!t. Cue Turing rolling over in his grave.
Now That We Have All This Bullsh!t, Do We Really Want It?
Sometimes bullsh!t is what we want. Generative AI is getting quite good at writing fiction. Would you want to read it? Perhaps.
In other cases – including almost all business situations – we require veracity rather than bullsh!t. Truthfulness and accuracy are fundamental requirements for all business data, after all. Nobody wants to make a business decision based upon data that might very well be bullsh!t. That’s no way to run an organization.
The problem with generative AI, therefore, isn’t that it generates bullsh!t. The problem is that we can’t always tell the difference – and telling the difference between bullsh!t and reality is getting harder and harder.
Every human has a bullsh!t meter, of course. We go through our lives with an intuitive sense of whether some information is worth trusting or simply bullsh!t. Our bullsh!t meters also work in those gray areas where we’re not sure just how much of what we’re hearing is bullsh!t.
Some meters work better than others. Is your Uncle Bob lying to you or not? Did that “based on a true story” car chase movie scene happen in real life? Maybe or maybe not, but everybody has a healthy dose of skepticism to bring to bear.
The problem with generative AI is that its creators have designed it to fool our bullsh!t meters. Because they optimize its output for plausibility, we poor humans can no longer rely upon our innate and learned skills for separating bullsh!t from reality.
And that’s when we find ourselves in really deep sh!t.
How to Get Out of this Sh!t
The most important thing to remember is that, as with all other businesses, the AI business is demand-driven. Vendors won’t build something that people don’t want – at least, not once the VC money runs out.
As long as people demand bullsh!t from AI vendors, the vendors will keep creating technologies that deliver it. Furthermore, they will get better and better at it.
The same goes for content producers. As long as people want bullsh!t content – whether it be AI-generated novels, fake news, or marketing copy – the producers of such content will peddle it.
It’s therefore up to us consumers of bullsh!t to make it clear when we want bullsh!t and when we don’t. And we must also let the vendors and content producers know that we need them to help us tell the difference.
We already see the former requirement in action, as enterprise customers demand veracity from their AI solutions. Plausibility is good, but cut the bullsh!t, please.
Even when we want AI to write fiction, we still want to know whether that novel we’re reading is human-written or AI-generated.
What the market is missing is a clear demand, to both vendors and content producers, that they disclose whether a piece of content is AI-generated – and hence bullsh!t on some level.
See the disclaimer at the bottom of this article? We’ve been pointing out since early 2023 that we don’t use AI to generate content. Now you know why. This article, at the very least, is not bullsh!t.
The Intellyx Take
Some bullsh!t stinks more than other bullsh!t. It’s up to you as the consumer to judge just how stinky.
An AI-generated resume with a hallucinated job history is one thing. AI-generated malware is an entirely different class of bullsh!t.
When the large language models behind offerings like ChatGPT troll the web for training content, then is the bullsh!t they produce plagiarism? Just how much of an original source must appear in the output for it to be theft?
My favorite angle on AI bullsh!t comes from Maggie Harrison, writing for Futurism. She points out: “Basically, Silicon Valley’s new star is just an automated mansplaining machine. Often wrong, and yet always certain — and with a tendency to be condescending in the process. And if it gets confused, it’s never the problem. You are.”
Mansplaining? Now that’s some smelly bullsh!t.
Copyright © Intellyx LLC. Intellyx is an industry analysis and advisory firm focused on enterprise digital transformation. Covering every angle of enterprise IT from mainframes to artificial intelligence, our broad focus across technologies allows business executives and IT professionals to connect the dots among disruptive trends. As of the time of writing, none of the organizations mentioned in this article is an Intellyx customer. No AI was used to produce this article, and that’s no bullsh!t. Image credit: Filipe Ramos.