ChatGPT, Utopia or Dystopia?
David Ragland, DBA, MS, PMP
Management Consultant (AI Strategy); Professor of the Practice, Management
Like many, or maybe most, these days, I've been trying to organize my thoughts around ChatGPT specifically, and AI in general. Though I had a 35-year career in technology with some dalliances (scrapes, really) with Artificial Intelligence (AI), the Internet of Things (IoT), Machine Learning (ML), and the like, I was, if I'm being transparent, always bringing up the rear in that domain. My innate curiosity leans more in the direction of the social sciences and humanities, but I pursued an education in technology, initially at least, as a type of "insurance policy." --At that point in my life, I was eating nearly every day, and wanted to continue with that luxury.
However, as I started to become increasingly cognizant of advances in AI, IoT, and ML, I found my natural interests and my acquired knowledge of technology to be increasingly intertwined. Eventually, this led to a doctoral thesis on the effects of national culture and leadership style on outcomes in Global Virtual Teams (GVTs). The doctoral thesis turned out to be just barely good enough to pass the defense (I still maintain that my charm offensive worked on the committee), but it opened the door to a broader exploration of the impact of technology on society.
For those who work in technology, especially computer scientists, programmers, systems engineers, etc., ChatGPT is no surprise at all. We've been steadily progressing in this direction for decades. In fact, the earliest "chatbots" have been with us since the 1960s and '70s (e.g., ELIZA, 1966, and PARRY, 1972). Granted, we've gone from checkers, to chess, to three-dimensional chess, and beyond since then. Nevertheless, while ChatGPT didn't spring up overnight, I think we can all sense that we've reached a potentially profound inflection point in AI development. Are you old enough to remember when you first learned of the Internet? Or maybe the smartphone? Well, ChatGPT provides a name to the state of AI advancement that just may be one of those "moments."
Before I relay my recent (quite literally) day and night exploration of ChatGPT (GPT-4, to be specific), I must admit that I am, at best, just a user of AI technology (even the term "super-user" would be overly generous). So, please don't take anything I'm sharing as more than just a layperson's observations --though I have attempted to be diligent and comprehensive in my testing and exploration approach (e.g., I've used numerous and somewhat robust test cases, real-life coding challenges, etc.). With this very important caveat, here are my impressions...
First, the real story, to me at least, is that within a very short time following the launch of the original ChatGPT, OpenAI launched GPT-4, which (according to Trusted Reviews, March 2023) is now multimodal (meaning it can understand multiple forms of information, from words to images); it can process up to 25,000 words at once, roughly eight times more than GPT-3; it posted a 16% gain on machine learning benchmarks over its predecessor; and, very importantly, it is 82% less likely to respond to "disallowed content." The list of improvements goes on but, in the words of a close friend and former colleague who is at the leading edge of AI R&D, "the term 'quantum leap' doesn't begin to describe the kind of jump we've witnessed in such a short time." Which raises the question: where will we be in 5 years...or even 5 months?
Just to set the stage a bit: over 20 years ago, when I was working at AOL (some might argue, the Google of its day), some colleagues were engaged in a casual debate over the impact of technology on "ways of working." They were all way ahead of me in terms of technical acumen so, out of curiosity, I challenged these very advanced computer scientists and engineers to try to come up with just one thing that humans do, think, or feel that couldn't be reduced to a mathematical formula. (My simple-minded reasoning was that if a thing or concept could be expressed mathematically, then it could likely be programmed.) The obvious "go-to" response was feelings (and intuition). However, we knew, even then, that feelings could be programmed. Intuition is likely to remain a bit more elusive...for now.
The point, though, is that ethical questions and concerns about the effect of technology's lightning-fast advances on society were, and are, certainly on the minds of many at the forefront of technological innovation --but the momentum and the excitement of discovering new frontiers, even more than the potential monetary reward (I maintain), drive us ever forward. (Somewhat ironically, see Sam Altman's views on AI advancement. Altman, of course, is the CEO of OpenAI --he has expressed concern about AI being rolled out recklessly, and feels that if profit is the motivating driver, we may well end up on a precarious path.)
Anyway, I've now been banging away at GPT-4 for a solid week --throwing everything I could at it (which, admittedly, isn't much). It's impressive --and it's only in its nascency! It can't quite write an entire novel just yet, but it's pretty good with vignettes, detailed reports, etc., and it can certainly write a detailed outline for a novel (I had it generate several for me). I even asked it to rewrite a fictional business-related report that I had previously asked it to generate --but, this time, in iambic pentameter. It did so in seconds. It is also able to produce complex Python code, with minimal directional input, also in seconds --fully functional and accurate. (A friend tested this part for me, as he's a Python coder.) Ironically, I've read that math and related processing can be a ChatGPT weakness, but I saw no evidence of this in my own (admittedly limited) testing.
Something that really struck me, and which I will follow up on in greater detail in future articles, is that I began to have what felt like actual conversations with GPT-4! I started to write articles speculating about AI's current and future capabilities and impact, but then had the epiphany that I could simply ask GPT-4 itself. This turned into hundreds of pages of material, based on various prompts, like: "Will there be a world without work?" If so, "How will governments raise taxes?" "What will humans do to feel valuable?" "How will we maintain our sense of self and self-esteem?"
The questions went on and on...but what I will share for now is that GPT-4 was able to continue the conversation based on previous threads. That is, I didn't have to retype the original prompt in order for GPT-4 to pick up where we left off. And it learned from my challenges to its previous answers.
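That continuity, incidentally, isn't magic. In the standard chat-API pattern, the client resends the accumulated message history with every request, so each new answer is conditioned on everything said so far. The toy sketch below simulates just that bookkeeping; the `ChatSession` class and the `fake_model` stand-in are my own invented names for illustration, not OpenAI code:

```python
# A minimal sketch (not OpenAI's actual implementation) of why a chat
# model can "continue the conversation": the client resends the full
# message history with every request, so each reply is conditioned on
# everything said so far.

def fake_model(messages):
    """Stand-in for an LLM API call: here it just reports how much
    context it was handed. A real call would send `messages` to the API."""
    return f"(reply conditioned on {len(messages)} prior messages)"

class ChatSession:
    def __init__(self, system_prompt):
        # The running transcript; this list is the "memory."
        self.messages = [{"role": "system", "content": system_prompt}]

    def ask(self, user_text):
        self.messages.append({"role": "user", "content": user_text})
        reply = fake_model(self.messages)   # full history sent each turn
        self.messages.append({"role": "assistant", "content": reply})
        return reply

chat = ChatSession("You are a helpful assistant.")
chat.ask("Assume AI replaces most jobs. How are taxes raised?")
chat.ask("Wouldn't that tax base eventually be insufficient?")
# The second question needed no restated prompt: the first exchange is
# already in self.messages and travels with every new request.
print(len(chat.messages))  # 5 entries: 1 system + 2 user + 2 assistant
```

In this pattern, at least, the "memory" lives in the transcript the client keeps and resends, not in the model itself between calls.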
For instance, I asked it to assume that, at some point in the future, significant portions of the population would no longer be working, due to AI's replacement of jobs. Based on this assumption, I asked it to contemplate how taxes would be raised, and how goods and services might be exchanged. It was a tad "cagey" in its response. It did suggest that the cost of goods and services would likely come down significantly (due to reduced labor costs), but that governments would probably need to explore a system of Universal Basic Income (UBI) for many citizens.
Not entirely satisfied with the answer, I again asked where the tax base would come from (it appears to learn with each new question). "Would the cost of goods and services eventually be free?" I asked. GPT-4's answer, in part (and paraphrasing), was that goods and services will likely not be free, but costs might be substantially lower, making the possibility of UBI increasingly realistic. It also suggested that the tax base would likely come from the owners of AI platforms and/or via wealth taxes on the super-rich.
However, I then challenged it by suggesting that these sources of tax revenue would eventually be insufficient and unsustainable: as goods and services become significantly less expensive, lower profit dollars mean lower tax revenue, and the ratio of AI companies to the unemployed population, as well as the non-renewing wealth of the super-rich, would eventually be untenable. It revised its original answer, stating that I "made a good point" (OK, so it also understands the power of a "charm offensive") and that conventional tax systems, as well as current exchange systems for goods and services, might have to be reimagined. (I will share more on this in future posts, but suffice it to say that we really went down some rabbit holes --I kept pressing it, and it kept "learning" to provide more detailed answers along the way.)
And what really blew me away was that GPT-4 even started posing rhetorical questions back to me! --As I say, I'll share more on this in future posts.
But what does all of this really mean? Isn't ChatGPT, in its current form, full of false promise? Just a lot of sound and fury, signifying nothing? (For fun, try asking ChatGPT itself this question :-))
The current version of ChatGPT certainly has its limitations and shortfalls in many cases (quite thankfully, from my perspective). For instance, by its own admission, it can't write a complex novel on par with Dostoevsky, or a play with the depth of human insight demonstrated by Shakespeare. But I think it's important to remember that we're very much in the early stages with this and related platforms.
The example of the rate of change in AI that I used to share with students (when I had the pleasure of teaching as an adjunct at a couple of universities), and one that most of us likely remember quite well, is the 1997 chess match between IBM's Deep Blue and Garry Kasparov. Kasparov, of course, lost --but his comments were telling. He said he could clearly see the future.
That future is here.
Flash forward 20 years from that famous 1997 chess match: the number one computer chess platform in 2017 was Stockfish (version 8). Its algorithms were based on an estimated 200 years of human chess moves. No human grandmaster stood a chance against it. However, it was limited in that it was not armed with true machine learning capabilities. While it had some very limited, human-learning-like ability, it essentially knew what it knew, and that was it.
DeepMind Technologies (a wholly owned subsidiary of Alphabet), which researches and develops neural networks and machine learning capabilities, developed a platform called AlphaZero. It's based on the most advanced neural network and machine learning techniques. That is, it typically starts knowing nothing (or very little) about whatever task it's taking on, but teaches itself along the way via trial and error, acquiring new data, building statistical models, etc. (One big step forward in neural networks, some years ago now, is that memory is now collocated with processing capabilities --giving neural networks an increasing approximation of human brain functioning.)
(For a brief overview, see: https://en.wikipedia.org/wiki/DeepMind#:~:text=DeepMind%20has%20created%20a%20neural,short%2Dterm%20memory%20of%20the)
The rest of the story is that DeepMind, using AlphaZero, challenged Stockfish to a chess match roughly six years ago, in 2017. It took AlphaZero only about four hours of training against itself to reach, and then surpass, everything Stockfish (version 8) knew about chess. In that original challenge, the two platforms squared off in 100 games. AlphaZero never lost: it racked up 28 wins and 72 draws. The challenge has since been repeated many times, with similar results.
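To give a feel for what "teaches itself via trial and error" means, here is a toy example in miniature, vastly simpler than AlphaZero's self-play and neural networks: an agent that starts with zero knowledge of which of three moves pays off best, and discovers it purely from the outcomes of its own attempts. The moves and their win probabilities are invented for the illustration:

```python
# A toy trial-and-error learner (classic epsilon-greedy), far simpler
# than AlphaZero but the same idea in spirit: start with no knowledge,
# learn move values purely from the results of your own play.
import random

random.seed(0)

# Hidden payoffs the agent must discover; these numbers are invented.
WIN_PROB = {"a": 0.2, "b": 0.5, "c": 0.8}

def play(move):
    """Simulate one game: returns 1 for a win, 0 for a loss."""
    return 1 if random.random() < WIN_PROB[move] else 0

counts = {m: 0 for m in WIN_PROB}  # how often each move was tried
wins = {m: 0 for m in WIN_PROB}    # how often it won

def estimate(m):
    # The agent's learned value of a move: its observed win rate so far.
    return wins[m] / counts[m] if counts[m] else 0.0

for trial in range(2000):
    # Explore a random move 10% of the time (and on the first few
    # trials); otherwise exploit the move that has worked best so far.
    if trial < 3 or random.random() < 0.1:
        move = random.choice(list(WIN_PROB))
    else:
        move = max(WIN_PROB, key=estimate)
    counts[move] += 1
    wins[move] += play(move)

best = max(WIN_PROB, key=estimate)
print(best)  # after 2,000 self-played games the agent settles on "c"
```

AlphaZero does this in spirit, but with a deep network estimating move values and a search over game trees instead of a three-entry table.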
The punchline here, to me, is that with advanced machine learning capabilities, ChatGPT will continuously correct many of its current limitations and deficiencies, while also improving its solid capabilities. And, for this reason, though I'm certainly no expert futurist, I nevertheless feel quite strongly (intuitively, would be more precise :-)) that we will see an advancement from the current age of Artificial Intelligence to the age of Artificial General Intelligence (AGI; the point at which AI is ostensibly on par with human intelligence) in the near future. Perhaps not full AGI, but certainly an increasing approximation of it. --As for Artificial Super Intelligence (ASI), I certainly won't see it, and I think --I hope-- that no human ever will.
I believe these exact concerns, in part, are perhaps what prompted a number of tech leaders (Musk, Wozniak, etc.) to write their open letter this week asking for a "pause" on further AI advancements until the full ramifications and impact can be assessed. Their request, while well-intentioned, is perhaps a nice gesture, but I suspect there's no way to stop this (the genie is out of the bottle, you can't get the toothpaste back in the tube, whatever analogy works) because of technological determinism.
Nor is it likely that we can completely regulate or legislate our way around AI advancements --but I suspect the aim of these leaders was to at least throw up a human speed bump. Certainly, we'll need to at least try to put some clear left and right guardrails around AI.
For me, the main concern (as I mentioned in a previous article I posted on LinkedIn) is that human artistry is rapidly disappearing, and/or is in danger of being significantly diminished, in a lot of areas --with more to come. Of course, the "go-to" defense against such negative ideation is that many new opportunities for humans will be (may be) created. How true this will eventually prove to be is uncertain (I know this because I asked ChatGPT :-)).
Still, as humans, we are hard-wired (perhaps we're armed with machine learning technology and don't know it) to develop and apply skills --to become proficient at things. It's part of our psyche. As I postulated previously, what will we do in the near future to satisfy our need to achieve and become skilled at certain things? To add meaningful value...
Sure, nothing will stop us from becoming great artists, musicians, carpenters, writers, etc., etc....but will we still have the motivation to do any of these things, knowing that AI can do all of them as well, or likely better? For example, will anyone want to go to medical school to become a skilled surgeon just to demonstrate an antiquated human capacity for doing so --one that a highly precise, dexterous robot can perform with "five nines" accuracy (i.e., accurate 99.999% of the time)? --I can see the bots now, on their coffee break, observing the human surgeon from an observation window and saying, "Aww, isn't that cute? Look at the human performing a surgery. You go, little buddy!"
So, where do we go from here?
When this topic came up in the classroom (and it always did, especially at the STEM-focused university where I taught), my message to the students was not one of "doom-and-gloom," but one of excitement and curiosity --albeit, tempered with a fear of the unknown, which is, well...so human! :-)
Obviously, the main skill we will need to emphasize is our ability to adapt --but I guess we'll have to somehow do this at the speed at which AI learns and adapts --and not by trying to go toe-to-toe with it (that would be humanly impossible). Rather, as AI takes on more and more human "activities," we'll need to look for "gaps" to exploit. I predict that we (meaning humans, unless some of you are cyborgs :-)) will still be in control (we had better be), but we will perhaps have to continuously seek these gaps and modify the way we control the bots. (I have an entire thread --a conversation, really --with ChatGPT on this, which, as stated, I will share in future posts.)
Finally, as I have said repeatedly for the last 30 years, if I could find a time machine, I'd go back to the 19th century, and the only thing I'd take with me (besides my wife Sara and our dog Mila --hopefully they would come) would be penicillin. I would, of course, send the time machine back, in case anyone else wants to follow suit. However, when I asked ChatGPT to build a time machine, it refused to do so on ethical grounds. So...that shows promise...right? :-)
OK, tell me where I'm going wrong and what I'm missing. I want to demonstrate my human-bound machine learning capabilities...
David