ChatGPT: No Imagination Prosthesis, but rather Muse for the Gifted

Recently, Ethan Mollick presented an interesting take on generative AI as an imaginative “prosthesis” to boost one’s creativity. He made several arguments pointing to the possibility of increasing one’s creative output, based on research dating back as far as Keith Simonton’s “Equal Odds Rule” from 1977. This rule assumes that “the number of creative hits is a positive linear function of the number of attempts”, and has generally been found to hold (statistically) in several contexts, such as divergent thinking tasks.
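As a toy illustration (not Simonton’s actual data or method), the Equal Odds Rule can be sketched as a constant-hit-rate process: each attempt has the same fixed chance of being a “creative hit”, so expected hits scale linearly with attempts while the hit *rate* stays flat. The function name and hit rate below are invented for the sketch:

```python
import random

def creative_hits(attempts: int, hit_rate: float = 0.05, seed: int = 0) -> int:
    """Each attempt is an independent draw; a 'hit' lands with a fixed
    probability, so expected hits grow linearly with the number of attempts."""
    rng = random.Random(seed)
    return sum(rng.random() < hit_rate for _ in range(attempts))

# Doubling the attempts roughly doubles the hits, while the hit rate
# (average "quality") stays constant -- the Equal Odds Rule in miniature.
for n in (100, 200, 400):
    print(n, creative_hits(n))
```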

Quantity & quality of creativity significantly related (r = 0.73)

Whilst all of Mollick’s points about generative AI increasing both the fluency and novelty of ideas, while also helping to avoid inertia, make sense, a critical aspect has been overlooked: the quality, and hence the resulting value, of creativity, which depends significantly on the respective individual.

To illustrate the point further, we can utilise the “Infinite Monkey Theorem”. The earliest documented version of this theorem appeared in Émile Borel’s 1913 article “Mécanique Statistique et Irréversibilité” and his 1914 book Le Hasard. The theorem, which belongs to probability theory, states that an unlimited number of monkeys, given typewriters and sufficient time, will eventually produce any particular text, such as Hamlet or even the complete works of Shakespeare.

Now, it should be noted that Jesse Anderson actually put the theorem to the test and successfully reproduced all of Shakespeare’s works by creating a computer program, using the Hadoop framework, to simulate a million monkeys randomly typing.

Illustration of the “Infinite Monkey Theorem”, (c) Jesse Anderson

But as always with complex concepts like creativity and innovation, the devil is in the details. Specifically, in the way in which Anderson has set up his virtual experiment:

In his set-up, “each virtual monkey put out random gibberish nine letters at a time. This was supposed to mimic a monkey randomly mashing the keys on a keyboard. The computer program compared each nine-letter segment to every work of Shakespeare. … This process is repeated over and over until every portion of every work of Shakespeare – all 3.7 million letters of them – has been covered by the monkey’s gibberish.”

“The Simpsons” scene, featuring the “Infinite Monkey Theorem”

The critical aspect of the way Anderson set up his experiment, however, was his choice of statistical parameters to simulate the monkeys’ writing activity:

He chose to have them always write nine characters at a time, nowhere near a whole line, let alone a complete work, of Shakespeare. The reason for that is best summarised by Anderson himself, where he writes:

“Many have asked why I used nine characters and not one or some other number. There are two reasons for this decision. The first is that a 1-, 2-, or 3-character group would not be sporting. To test the correct performance of my code I actually ran all of Shakespeare through a one-character group. All of the works of Shakespeare are created in 20 seconds using this method. I doubt I would have received any kind of attention by announcing I had recreated Shakespeare in 20 seconds flat. The second reason is the scope of the work. I wanted the monkeys project to complete in 1–2 months. … For my computer, the nine-character group size was just right. Given the exponential nature of the problem, going one character up or down made a very big difference.”

This means that Anderson did not in fact prove the statistical feasibility of the Infinite Monkey Theorem; instead, he showed that every nine-character segment of Shakespeare’s works reappears statistically after 46 days when one million virtual agents brute-force generate random nine-character strings.
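Anderson’s set-up can be sketched in a few lines of Python. To keep the run time sensible, this sketch uses a two-character group instead of nine (echoing Anderson’s own one-character sanity check); the `monkey_coverage` helper and the target phrase are invented for illustration, not taken from Anderson’s code:

```python
import random

def monkey_coverage(target: str, group: int, seed: int = 0) -> int:
    """Count random keystrokes typed until every `group`-letter
    segment of `target` has been produced at least once."""
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    # Every distinct `group`-length window of the target must appear.
    windows = {target[i:i + group] for i in range(len(target) - group + 1)}
    keystrokes = 0
    while windows:
        chunk = "".join(rng.choice(alphabet) for _ in range(group))
        keystrokes += group
        windows.discard(chunk)  # mark this segment as covered
    return keystrokes

# A 2-letter group covers a short phrase almost instantly; each added
# character multiplies the expected work by 26, which is exactly why
# Anderson found that "going one character up or down made a very big
# difference".
print(monkey_coverage("tobeornottobe", 2))
```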

Pointing out this difference is also highly significant in light of the actual improbability of creativity at the level of Shakespeare, because, as Anderson put it:

There are about 130,000 letters in the whole of Hamlet. The chance of randomly typing out the entire play correctly at first trial therefore works out at one in 26^130,000, which is one in 3.4 × 10^183,946. A single monkey would need that kind of number of attempts before there was a reasonable chance of his or her getting it right. This would take a very long time. In fact it is a time that is many orders of magnitude longer than the life of the universe (let alone the life of the monkey). 10^183,946 is a number that is mind-bogglingly big. As a comparison, there are only about 10^80 particles in the observable universe. Even using more than one monkey does not help much. If we took the same number of monkeys as there are particles in the universe (10^80), and each typed 1000 keystrokes per second for 100 times the life of the universe (which is 10^20 seconds), we would still find the probability of the monkeys replicating even a short book to be impossibly small.
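The arithmetic above checks out; working in logarithms avoids the astronomically large intermediate number:

```python
import math

# Hamlet has roughly 130,000 letters; with a 26-letter alphabet there
# are 26^130,000 equally likely typings. Take log10 to keep it tractable.
letters = 130_000
exponent = letters * math.log10(26)          # roughly 183,946.5
mantissa = 10 ** (exponent - int(exponent))  # leading digits

print(f"26^{letters} is about {mantissa:.1f} x 10^{int(exponent)}")
# For scale: only about 10^80 particles in the observable universe.
```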

So where does this leave us with regard to the actual use of sophisticated generative AI tools such as ChatGPT? It follows (IMHO) that, respectfully, for many if not most users, these tools will generally be exactly what typewriters are for monkeys: production tools with only a certain (statistical) likelihood of creative success, whereas they will indeed make a big difference for the artistically, creatively (and technically) gifted.

Openness to experience as moderating factor for creative quality

And in fact, research seems to support this idea; specifically, Friis-Olivarius & Christensen, writing in “Frontiers in Psychology”, found that

while quantity does breed quality in creative production, the effect is moderated by individual differences, specifically the personality trait Openness to Experience. Given the assertion in the Equal Odds rule that the relation between idea production and creative hits are characterized as positive, linear, stochastic and constant, then it could be argued that individual average quality of ideas should be constant and unrelated to the individual volume of ideas produced. While this was what we found for individuals low on Openness to Experience, this is not the case across the full spectrum of Openness to Experience. Indeed, with increasing levels of Openness to Experience, the average individual creative value increased sharply with ideational fluency. The results challenge the simple but broadly accepted assertion that only quantity drives ideational quality, and underscores the importance of incorporating individual differences in creative personality into models, in this case Openness to Experience.
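The moderation effect described above can be caricatured in a toy model. All parameters and the function name below are entirely hypothetical, invented for illustration, and are not the paper’s data or analysis:

```python
import random

def avg_idea_quality(n_ideas: int, openness: float, seed: int = 0) -> float:
    """Toy moderation model: for low Openness the average quality of
    ideas stays flat regardless of output volume; for high Openness,
    producing more ideas also lifts the average quality."""
    rng = random.Random(seed)
    lift = openness * 0.1 * n_ideas  # fluency pays off only if open
    return sum(rng.gauss(50 + lift, 10) for _ in range(n_ideas)) / n_ideas

# Low Openness (0.0): quality flat whether 50 or 400 ideas are produced.
# High Openness (1.0): average quality climbs with ideational fluency.
for openness in (0.0, 1.0):
    print(openness, [round(avg_idea_quality(n, openness), 1)
                     for n in (50, 200, 400)])
```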

One might only wonder what other, harder to measure, creative qualities and personality characteristics might also make a difference in quality of creative output…

From: "Infinite Monkeys" by Jonathan Nuttall
Natasza Holownia

Digital, Data and Technology Exec Consulting – Change Strategy Leader – Manufacturing, CPG, Retail, Utilities, Life Sciences and Mobility

1 year

That’s beautiful Thomas. There’s another dimension to that topic that I think we’re both grappling with at the back of our minds - how does communication (language, visual, symbolic, multi-stimuli etc.) give and reinforce creative value… whilst in itself being its main medium. How does invention create meaning? Jesse Anderson’s “brute force” to-template proof and Simon Baron-Cohen’s “if and then” pattern leave us in the same place - of an invention achieved through multivariate trial and repetition, that can be recognised only through repetition of its predecessor. If invention and creativity are fuelled by openness to experience (creation) and historically flourish in waves of shared group dialogue and subject cross-pollination, then valuable (extraordinary) ideas reinforcing each other are the most effective key to lasting invention. The most overlooked value of generative AI is its ability to collect what we collectively and individually communicate and seek to create.

Tim Wilson

CEO + Innovator of Tempo Reading (Eye Tracking, AI, Neurocognitive Entrainment) | Co-Founder/Innovator SocialAsking.com, AI, consumer engagement | Advisor to Horizontal Satellite Launcher

1 year

Look at what Pro Tools-type DAWs/Auto-Tune did to the creativity and output of the music industry. It created a high increase in content as it reduced cost and made recording EASY for everyone, but the standard of writing progressively decreased to the point where literally one chord progression and one beat can be heard across multiple chart hits in the same week. The goal of these types of AI is not to produce outlier cutting-edge ideas but to consistently produce good B-grade material that will satisfy the majority. They will then relearn off their own B-grade material in the same way the pop music industry has, with ever-decreasing returns. True innovators may use AI to explore concepts, but expecting AI to create groundbreaking material is never going to happen with the current scraped data feeds. Also, true innovators and quality media are going to guard their ideas from being fed into such systems. On top of that, there will undoubtedly be ways to watermark your content to prevent the likes of ChatGPT ripping you off. These systems have their use, but not as authors of original groundbreaking concepts and material if they depend on human source material scraped from the web. IMHO.
