Will AI Make Us Dumber?
If, like me, you are "of a certain age," you remember when fixing cars was easy, making long-distance phone calls was expensive, TV only had three channels...you get the idea.
Back then, having the sum of human knowledge, instantaneous communication, and all the TV, video games, and assorted ways to marinate your eyes and brain in blue light and mindless pap and pabulum you could ever wish for right at your fingertips was the stuff of science fiction, not the indispensable, integral part of our waking reality that many if not most people now cannot function without.
Innovations from GPS to Wi-Fi virtually anywhere you care to go have made our lives easier.
At the same time, they've consigned once-essential skills, like navigating without a disembodied computer program telling you how to get where you're going, or backing out of a driveway without the aid of a camera in the car (a skill my kids have yet to, and may never, fully master, despite my best efforts and to my lingering despair), to the same historical shelf as making one's own papyrus, reading and writing cuneiform, riding horseback, and knapping arrowheads from flint and obsidian.
But of all of these, AI looms as perhaps the one I dread most.
Before you roll your eyes and say, "Okay, Boomer, it's time for your meds and a brisk rock on the porch before beddy-bye," stick with me for a minute here.
Consider that I'm a patent attorney working almost exclusively for small businesses and "garage inventors." Nothing about my profession or my fascination with innovation equips me to be a "Boomer" grousing at "you kids and your newfangled contraptions" to "get off my lawn!"
In fact, I'm a lot more likely to be the guy saying, "Okay, Boomer," than the one on the receiving end of that sentiment. So, keeping that firmly in mind, and knowing that I recognize the apparent disconnect between my stated position on AI and basically my entire professional and personal life beyond it, let me explain what I mean when I say I dread the advent of AI.
We’ve Seen the Future. So Far, It Doesn’t Work
When ChatGPT, from OpenAI, burst onto the scene at the end of 2022, it was heralded as a bold step into a brave new world where creating content would be easier than ever before.
Trained on a mere 575GB of material, most of it sourced from Google (a practice that triggered a lawsuit over IP concerns regarding the use of the material in ways its creators and Google itself never intended or envisioned, and which I’ve dissected previously), the program at first glance appears to be the perfect answer for everyone from busy executives who don’t have time to peck out endless email responses to minor matters to students who could better optimize their off-school time by letting the machinery do the heavy lifting. In response to ChatGPT’s initial apparent success, Internet marketing companies started laying off or firing their content creation staff wholesale in favor of letting AI churn out their content instead.
Their rationale was that if AI can do in one minute what it takes a skilled human writer several hours to produce, it only makes sense to give the work to the machine and keep a token human or two around for fact-checking as and when needed.
But it didn’t take long for cracks to appear in this Utopian façade.
Because ChatGPT, and modern-day AI more broadly, is by its very nature a product of and subservient to human thought, it is not “intelligent” in the way we tend to imagine when we picture the kind of artificial intelligence evinced by Lt. Cmdr. Data on Star Trek: The Next Generation, the Autobots, or even the dreaded Skynet of the Terminator franchise.
AI as it exists now is not really intelligence at all; it’s a very sophisticated program, albeit one limited by the constraints imposed upon it by its creators and the library of material on which it was trained. It cannot think critically about what it has learned, forge logical or emotional connections between disparate bits or sets of data, or make intuitive leaps to create meaningful metaphors.
It can only repackage and regurgitate the data it “knows,” i.e., the data it has been given and/or can readily access, and it can only do so in specific ways. In this respect, we’ve barely moved the needle past the Speak & Spell as far as AI goes.
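To make the “repackage and regurgitate” point concrete, here is a minimal sketch, in Python, of a toy text generator that chooses each next word purely from the statistics of whatever text it was fed. To be clear, this is not how ChatGPT itself is built (that involves a vastly larger neural network and far more data), but the underlying limitation is the same: the output can only recombine patterns that already exist in the material it was trained on.

import random
from collections import defaultdict

# Toy "training corpus" -- a real system ingests hundreds of gigabytes of text.
corpus = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat chased the dog around the mat"
)

# Build a table mapping each word to every word that follows it in the corpus.
followers = defaultdict(list)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word].append(next_word)

def generate(start: str, length: int = 10) -> str:
    """Produce text by repeatedly picking a word that followed the
    previous word somewhere in the training corpus."""
    output = [start]
    for _ in range(length):
        candidates = followers.get(output[-1])
        if not candidates:  # dead end: this word never appears mid-corpus
            break
        output.append(random.choice(candidates))
    return " ".join(output)

print(generate("the"))  # e.g. "the dog sat on the mat and the cat chased"

Everything this toy can ever “say” is stitched together from word pairs it has already seen; it cannot step outside its corpus, and in that narrow but important sense, neither can its far more sophisticated cousins.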
Consider further that ChatGPT sharply limits the sort of content it can generate, in a bid to make the service and the results it provides as inoffensive and palatable as possible to the broadest audience conceivable.
On one hand, this set of strictures, such as a preprogrammed inability to produce texts including hate speech, is a laudable attempt at corporate responsibility in an age where such responsibility is seriously lacking.
On the other, as a professional content writer of my acquaintance remarked recently during a discussion about ChatGPT and the broader implications of AI, “It’s hard to write intelligently about cosmetic surgery if you can’t discuss the human body—and ChatGPT doesn’t allow you to talk about the areas of the human body that most people are most likely to want to go under the knife for in the first place!”
This corporate-level, saccharine censorship on content imposes arbitrary limits on what ChatGPT can and cannot do, “create” (with the understanding that I’m using the term very loosely and with a nearly fatal dose of salt in this context), or present to the world.
Depending on your point of view and what’s being called into question, this can be a very good, even necessary, thing in helping to curb speech that could be considered offensive and profane to average cultural norms—or it could be deeply problematic.
For example, imagine trying to discuss the extermination of Jews in Germany in the 1930s and 1940s without also analyzing the ideological underpinnings (aka hate speech) that led to the conditions that made it possible in that time and place to begin with!
Strangely, if a user phrases their commands to ChatGPT just right, they can get around many if not all of the limits imposed on the system.
One user on Reddit posted a screenshot in which ChatGPT generated a listicle about why drinking bleach is good for you, despite the fact that drinking or otherwise consuming bleach is well known to be extremely dangerous (and has even been used as a suicide method in certain well-documented cases).
That output plainly circumvents the program’s prohibition against material that could encourage self-harm.
At first glance, this could be read as an amusing quirk resulting from an unintended consequence of the program’s internal logic.
But if you consider the matter more deeply, the chilling implications become clearer due to one inescapable trend: the decline of critical thinking.
This is a hotly contested point that many people would argue passionately for or against. Researchers at UCLA, writers in Forbes, and others are quick to sound the death knell for critical thinking, claiming that our capacity for it is falling sharply at the species level, driven largely by our ever-increasing and outsized dependence on technology, while Psychology Today has argued the matter both pro and con, to no firm conclusion.
However, the sheer number of times a day I hear someone say, “I heard it from [social media][political figure][news channel][person or entity I trust], so it MUST be true!” suggests that we take things at face value more than ever before in history, devoting less time to really thinking about what we’ve heard and evaluating its accuracy and its relevance to ourselves, our lives, and our personal moral and ethical codes.
Part of this disconnect can be explained by the tsunami of information we’re exposed to every day, from email to the news to conversations on the street.
There’s simply not enough time in the day, or for that matter in the universe, for our brains to fully assimilate and evaluate every single piece of information we’re given, let alone to weigh it on the scales of logic, sense, and our own ethos. We feel secure in ignoring some of this data as faulty, incomplete, or outright obvious nonsense, such as when former President Trump suggested injecting bleach into people as a possible cure for COVID-19. However, this dismissal is a double-edged sword and a mental trap: it leads us to unknowingly and uncritically harden our stances on certain issues without further investigation, because this social media site is well known for its trolls, or that pundit is notorious for presenting information from a bias that is anathema to our own deeply held and unexamined beliefs.
And all this is coming from a human mind and heart. AI has neither of those advantages!
Because AI can only parrot and remix what it “knows,” it can both deceive and be deceived in ways that most of us thinking, feeling beings would consider deeply irrational.
It doesn’t understand context, nuance, or situations where something that is otherwise objectively wrong might actually be considered morally correct, or vice versa. It can’t make value judgments beyond the confines of what its programmers permit. It doesn’t and cannot think, in the same way a myna bird cannot “think.”
A myna bird can perceive, learn, and mimic sounds and even speech—but it doesn’t “think” in the way we as human beings understand the concept. Granted, this comparison is unfair to the myna bird, in the same sense as the quip apocryphally attributed to Einstein: “...if you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid.”
For its ecological niche, the myna bird is supremely well-adapted; taken outside its niche, the myna bird is merely a curiosity, a basic solar-powered calculator going head-to-head with a supercomputer. Is it any wonder the myna bird comes up short in this simile?
But consider for a moment what it might look like if someone who lacks critical thinking capabilities decided to take ChatGPT’s advice regarding drinking bleach, in the fatally misguided belief that it will make them healthier. It doesn’t take any great leap of imagination; after all, we live in a world where the Tide Pod Challenge exists, and warning labels are plastered on nearly everything we use.
Do you really need to be told that a package of nuts you purchased with the explicit intent of consuming them “may contain nuts?” Or that your cruise control is not an autopilot? Or that curling irons should not be used while sleeping? Chances are, you’re probably shaking your head in awed horror at the very idea that any of these things actually need to be said—and yet, here we are.
The Bottom Line
Ultimately, I think AI is going to showcase just how far our critical thinking skills have declined, with tragic results, long before it becomes “true” artificial intelligence. When people are already willing to take information they receive at face value without deeper consideration, AI-generated content only raises the potential stakes.
This is why my writer acquaintance laconically remarked, “Critical thinking and common sense are superpowers in today’s world. And since common sense has become the rarest element in the known universe, it’s going to be a very long time before I and people like me have to worry about being out of work.”
ABOUT JOHN RIZVI, ESQ.
John Rizvi is a Registered and Board Certified Patent Attorney, Adjunct Professor of Intellectual Property Law, best-selling author, and featured speaker on topics of interest to inventors and entrepreneurs (including TEDx).
His books include "Escaping the Gray" and "Think and Grow Rich for Inventors" and have won critical acclaim, including an endorsement from Kevin Harrington, one of the original sharks on the hit TV show Shark Tank, who is responsible for the successful launch of over 500 products resulting in more than $5 billion in sales worldwide. You can learn more about Professor Rizvi and his patent law practice at www.ThePatentProfessor.com.
Follow John Rizvi on Social Media
YouTube: https://www.youtube.com/c/thepatentprofessor
Facebook: https://business.facebook.com/patentprofessor/
Twitter: https://twitter.com/ThePatentProf
Instagram: https://www.instagram.com/thepatentprofessor/