The Death of Creativity and the Rise of ChatGPT
Wayne J. Keeley
Entertainment Attorney, Lecturer, Emmy Award-Winning Producer, Director & Writer, Playwright & Screenwriter
By Wayne J. Keeley & Stephanie C. Lyons-Keeley
Am I the only sane and rational person left in the flat earth world? I feel like Will Smith’s character in I Am Legend. Perhaps that’s not the best example, since Will has remained on the fringes of cancel culture ever since his temper tantrum at the 94th Annual Academy Awards last year. Perhaps a better and more relevant example might be Pedro Pascal’s character in The Last of Us. Pedro has a higher IMDb STARmeter rating than Will anyway, at least for the next fifteen minutes.
But I digress. You’re probably wondering what I’m talking about.
It’s something that Rod Serling, Arthur C. Clarke, Ray Bradbury, and countless other visionaries warned us about decades (and, in the case of Leonardo da Vinci, centuries) ago. Artificial Intelligence. AI for short. The name sounds innocuous, but one should not be fooled by its apparent simplicity. It is the catfish of all catfish. The deadliest clickbait imaginable.
As the legendary Doc Holliday, in Val Kilmer’s immortal portrayal in Tombstone, said, “My hypocrisy only goes so far.” And I am almost ashamed to admit that so does mine. Like most people, I flick on a light switch and have no inkling of the myriad technical processes that transpire to ultimately illuminate the filament in the bulb. I just expect it to work. Similarly, I use Siri, face recognition, voice assistants, Google Home, computers, GPS, etc., in all of which AI is an integral part. I am told the future of AI may even lie in quantum computing and, as with turning on a light switch, I have no idea of the interplay of technology that occurs. Although, if truth be told, I did read up on quantum physics (Quantum Physics for Idiots) but still have no idea whether Schrödinger’s cat is alive or dead. I just hope that if I ever have an opportunity to open the box it is not a zombified feline that goes feral on me and gives me cat-scratch fever, which is a real disease, or worse.
But for the baby boomer masses (who are as old as I am) and for true cinephiles, AI was the stuff of Irwin Allen’s TV shows back in the 60s and 70s. Little did we understand then that the robot in his Lost in Space TV series was not only warning young Will Robinson and his family of the danger ahead, but of the dangers for all of humanity as well. And few of us, if any, blanched at HAL (the Heuristically programmed ALgorithmic computer; thank goodness for Wikipedia!) in Stanley Kubrick’s incredibly prophetic 1968 film, 2001: A Space Odyssey, co-written with Arthur C. Clarke.
I think the first time I blinked was in 1997, when IBM’s Deep Blue computer beat reigning world chess champion Garry Kasparov. At first, I just figured Kasparov was not good enough. As a chess aficionado myself, I believe there was and will always be one supreme grandmaster: Bobby Fischer. No one, certainly no human-designed machine, could beat the genius of Bobby Fischer. Of course, we will never know the answer, since Fischer never faced a serious computer opponent in his prime. So, my blink was a minor twitch at best.
But Kasparov’s defeat turned out not to be a fluke or a deficiency of skill or brainpower, but rather the first link in the manufacture of a long and ponderous chain that would impress even Dickens’ Jacob Marley. Indeed, since the mid-2000s, supercomputers have beaten the proverbial pants off of human chess grandmasters. In fact, there are now chess tournaments between supercomputers sans man entirely, or, to be politically correct (particularly in view of The Queen’s Gambit), sans humans. One person even posed a question on social media asking whether becoming a chess grandmaster remains a viable goal when supercomputers dominate the field.
It was then that I started to connect the dots regarding the gradual evolution and the power of AI. But as Steve Jobs said in his famous commencement speech at Stanford in 2005, “You can’t connect the dots looking forward; you can only connect them looking backwards.”
And the dots have started to connect for me. Countless scenes from sci-fi and even mainstream films have passed through my brain like a verge-of-death life review. Apart from obvious ones such as The Matrix, Blade Runner, and The Terminator franchises, there have also been new ones like Her, Upgrade, M3GAN, and a plethora of others. I’ve realized that while AI has been around for decades, until recently it had remained thoroughly ensconced within the relatively narrow confines of the science fiction genre.
Then I dug into the dim recesses of my childhood memories only to find an animated cartoon about the legendary John Henry. Henry was a former slave who went on to wield his Thor-like hammer, driving spikes in the construction of railroad tracks. He gained fame and renown for his strength and precision until he met Inky Poo. According to legend, Inky Poo was a steam engine that (like chess supercomputers) would replace human labor via automated placement of the railroad spikes. In a bid to prove that no machine could beat a human, and to allay railroad workers’ fears of losing their jobs, John Henry pitted himself against Inky Poo.
In a thunderous showdown akin to a Marvel superhero battle (sorry, DC fans, but Marvel has the best fight scenes), John Henry uses every bit of his skill, brains, and brawn to beat Inky Poo, but does so at a tremendous cost: his life. What is interesting is that I’ve searched Google, Vimeo, and YouTube, among others, trying to find that cartoon, but I could not. Like the character Joe in the Bruce Willis film Looper, the cartoon seems to have been erased from existence, a fate which almost befell the character Marty McFly and his family in Back to the Future. What I did find, however, was an updated animated cartoon of John Henry competing with Inky Poo. Only in this version, John Henry does not die. Instead, he concedes to the overall power of the steam engine and ends up jumping aboard it, whereupon the two continue to construct scores of railroad tracks together. Talk about revisionist history!
Of course, the nine-inch nail for me (or was it more of an epiphany?) is that AI is not only firmly embedded in our society, but it represents a danger to humankind. Just ask any former McDonald’s worker who has lost a job to the dreaded self-ordering kiosk, or a grocery store clerk who has been searching for work since the advent of self-checkout. This epiphany of mine came with the debut of ChatGPT. The model developed by OpenAI is, according to the website, “a model…which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.”
In simpler terms, what this means is that ChatGPT can create, much like a 3D printer, products that have direct, practical applications in the real world. With respect to ChatGPT, we are speaking, among other things, about conversational dialogue and fully formatted literary works. For example, I have heard that writers can input words, phrases, and descriptions and receive as output fully formulated story treatments, essays, scripts, and the like. A great mainstream example of the creative power of ChatGPT was demonstrated by actor/entrepreneur Ryan Reynolds, who used it to write and create a commercial for his Mint Mobile enterprise. Imagine if Tom Hanks had had ChatGPT with him in Cast Away instead of Wilson, the mute volleyball; he might not have gone nearly as crazy.
Now that same question that wannabe chess masters have been asking hits me in the face like a Mike Tyson punch: what’s the use in trying to be creative anymore? Why spend hours upon hours brainstorming ideas for a script that may or may not engage an audience? Just punch a couple of phrases into ChatGPT and voilà! You’re done! An algorithmic masterpiece without the expenditure of any brain cells! This plagues me as a screenwriter. But I’m also a college professor, and in academia there are growing concerns about the use of ChatGPT by students who are often already looking for shortcuts.
How far do we have to go before we are hooked up to Matrix-like devices and our actions come through avatars, such as those in Surrogates, another Bruce Willis film? Perhaps we are closer than we think. Elon Musk’s Neuralink has demonstrated, so far in animal trials, that implantable chips can allow computers to interface with brains.
Is the next step disembodied immortality? What if the contents of our brains (thoughts, feelings, memories, and emotions) could all be downloaded onto computer chips? It’s hard to believe that such a scenario was explored in a 1968 Star Trek episode titled “The Gamesters of Triskelion.” In it, three disembodied entities that looked a great deal like human brains gambled non-existent money on human gladiators whom they were able to control using mindpower. Will we reach a point where technology will ultimately mean the destruction of humankind? And will there be a chance for starting over from scratch, as in the cautionary post-apocalyptic 1959 novel A Canticle for Leibowitz by Walter M. Miller Jr.?
I have not tried ChatGPT, but I can tell you I will. I would also bet my life on the possibility that Elon Musk has already implanted a Neuralink in his own brain. If I were Musk, I would play Dr. Frankenstein and experiment on myself. How do I know this? It is simply human nature.
I take back what I said about hypocrisy only going so far. Despite it all, unlike Doc Holliday, my hypocrisy knows no bounds.
In fact, this may be the last thing I write without the aid of ChatGPT.