Human vs AI Creativity: A Matter of Meaning?

TL;DR // Key Insights:


Recent studies hint at Generative AI's potential to rival human creativity, but experts stress that AI lacks genuine understanding and insight.


For effective human-AI collaboration, it's pivotal to use AI as a tool to enhance, not replace, human creativity, ensuring empathy remains at the core of the process.


Over the past weeks, more and more research has been published suggesting that Generative AI might actually have the potential to match or even surpass humans in creativity, a trait previously assumed to be a uniquely human capability (see here, here, here, here, here, and here).

At the same time, scholars have raised the important point that, to date, there is still no generally accepted definition of creativity, much less a shared understanding of how creativity actually works, beyond it being the result of recombining existing concepts in a novel and useful way. This is precisely where LLMs come into play, as they are "good at this, acting as connection machines between unexpected concepts".

This is why it might make sense to take a more in-depth look at the creative process itself, to better understand whether Generative AI actually has the potential to be creative in a true and fully informed sense of the concept - and where this might leave us humans when it comes to delivering creative and innovative work.

First, it might be helpful to illustrate some of the most critical differences between the way human and artificial neural networks generate novel output. Although both human and machine "intelligence" are ultimately based on identifying the most relevant patterns stored in the meta-representations of (physical or virtual) neural networks, they operate very differently. Artificial intelligence systems rely on huge amounts of relevant training data to learn how all the semantic entities in that data relate to each other. We humans, with our comparably limited data bandwidth, rely on something else entirely: we learn by compressing rich perceptual and experiential data into practically and behaviourally relevant patterns of "meaning". This difference in learning has huge implications for how the output of these two very different types of "intelligence" should be interpreted, as professionals in the fields of neuroscience and linguistics explain best:

Maria Balaet, a neuroscientist at Imperial College London, explains that

"Where AI 'creativity' is different from human creativity is that, as humans, we understand the meaning of each of the components we're combining. AI doesn't. If it has three different categories, let's say A, B and C, it can determine how they all relate to one another in different ways. But it doesn't know what A, B or C mean to begin with." (Source: Alexander Beiner)
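To make this point concrete, here is a minimal, purely illustrative sketch (the toy "corpus" and the symbols A, B and C are invented for this example, not taken from any real system): simple co-occurrence statistics are enough to recover how the symbols relate to one another, yet nothing in the code knows, or could know, what A, B or C actually stand for.

```python
# Minimal illustration: relations between symbols can be recovered from
# co-occurrence statistics alone, with no grounding in what the symbols mean.
# The "corpus" and the symbols A, B, C are invented purely for this sketch.
from collections import Counter
from itertools import combinations
import math

corpus = [
    ["A", "B"], ["A", "B"], ["A", "B"],
    ["A", "B"], ["C"], ["B", "C"],
]

# Count how often each symbol appears and how often pairs co-occur.
single = Counter(tok for doc in corpus for tok in set(doc))
pairs = Counter(frozenset(p) for doc in corpus for p in combinations(set(doc), 2))

def association(x, y):
    """Pointwise mutual information: how strongly x and y co-occur."""
    n = len(corpus)
    p_xy = pairs[frozenset((x, y))] / n
    p_x, p_y = single[x] / n, single[y] / n
    return math.log2(p_xy / (p_x * p_y)) if p_xy > 0 else float("-inf")

for x, y in [("A", "B"), ("A", "C"), ("B", "C")]:
    print(f"association({x},{y}) = {association(x, y):.2f}")

# The scores rank how A, B and C relate to one another, which is exactly
# what the quote describes - but the code has no idea what A, B or C mean.
```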

Emily Bender, a professor in the Department of Linguistics at the University of Washington and faculty director of its Computational Linguistics Master's program, puts it even more pointedly:

"Languages are symbolic systems" consisting of "pairs of form and meaning. Something trained only on form is only going to get form; it's not going to get meaning. It's not going to get to understanding." (Source: Karawynn Long)

This means that "no LLM transformer can ever possibly develop the faintest comprehension of the meaning behind the words it produces. … There is no mind, no self-awareness, no sentience, no sapience, no comprehension, no subjective experience or independent agency at work in an LLM, and there never will be. The autocomplete may get spicier, but it's still just autocomplete." (Source: Karawynn Long)
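A toy next-word predictor makes the "autocomplete" point tangible. The following sketch, with an invented three-sentence training text, learns only the form of the word sequences and still produces fluent-looking continuations. It is of course a deliberate over-simplification of how LLMs work, but the underlying principle of predicting the next token from the preceding ones is the same.

```python
# Toy "autocomplete": a bigram model trained purely on word order (form).
# The training text is invented for this sketch; real LLMs are vastly larger,
# but the task - predict the next token from what came before - is the same.
import random
from collections import defaultdict

text = ("the creative spark comes from insight . "
        "the creative machine comes from data . "
        "insight comes from meaning .").split()

# Learn which word tends to follow which (pure surface statistics).
follows = defaultdict(list)
for prev, nxt in zip(text, text[1:]):
    follows[prev].append(nxt)

def autocomplete(start, length=8, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(autocomplete("the"))
# Prints a fluent-looking word sequence, yet the model has no notion of what
# "creative", "insight" or "meaning" refer to - it only knows what follows what.
```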

The question that follows from this sobering insight is, of course: should we really call any, however sophisticated, autocomplete recombinations of concepts generated by LLMs "creative", simply because our current "flawed tests" of creativity identify these results as such? Or do we not owe it to ourselves to dig a bit deeper into actually understanding what happens within us when we are being creative?

When we start doing that, we quickly understand that most of the research cited above, claiming that AI systems surpass us humans in creativity, is itself quite flawed: it confuses the ability to prompt LLMs into producing novel and surprising output with true creativity. The latter always requires a creative spark, a true moment of actual insight, leading to a uniquely new and practically useful "solution" to a complex challenge. And because only humans have the capacity to actually understand meaning, it is also only us humans who can have these uniquely creative "aha! moments". People calling the output of Generative AI systems "creative" are in fact projecting their own creative interpretation of that output onto systems which are genuinely unable to produce anything that is actually meaningful and hence lack any creative spark in the first place.

But what does this mean for our relationship, as naturally creative human beings, with AI in creative collaborations? Whilst Generative AI systems ultimately can't experience true and meaningful moments of creative insight, they can still help us have these moments more often and potentially get there faster, because they might be able to serve us as supernatural "guide dogs", guiding us through our own vast, subconscious net of neural connections representing the myriad experiences we have had throughout our lives.

Generative AI as informational "guide dog" (creativity-enhancing sniffer dog). Credits: The author + Midjourney

Building on this metaphor of Generative AI as an informational "guide dog", and expanding on a previously published concept of creativity as an energy-optimising neural network, I would like to propose a new measure of utility for AI systems. Rather than equating the value of AI with a mere increase in efficiency or productivity, and in stark contrast to scholars who conceptualise the prompted output of Generative AI as already creative in itself, I suggest we recognise the actual potential and true value of Generative AI in its capability to guide us towards unlocking in ourselves the most meaningful patterns of creative insight - the patterns that also hold the greatest transformational potential for truly improving the world and social context around us.

Technically, such an AI-driven increase in human creativity can be explained by the ability of LLMs to help us activate the right patterns in our own subconscious, experience-based neural network, effectively increasing our chances of bridging the gap between the logical, articulate side of our brain (Kahneman's "System 2") and our brain's more intuitive and autonomous way of functioning (Kahneman's "System 1"). The surprising insight here is that this gap itself is the result of an ingeniously energy-efficient way in which our brain's structure has evolved, to prevent this most complex, power-hungry information-processing unit on the planet from suddenly running out of energy.

On a more concrete level this means that, to save energy, our brain only ever activates as much of its vast network of approximately 100 billion neurons as is actually required to solve the cognitive challenge at hand. Faced with a new, complex problem that requires a completely novel, creative solution, the brain uses the following "trick": it first uses "System 2" to completely and coherently articulate the challenge, but then hands the problem over to the far more energy-efficient and far-reaching "System 1" to look for any solution-relevant patterns that may already exist in its non-declarative, subconscious neural network comprising all previously stored experiences.

And it is precisely here that Generative AI might have a significant positive impact on the cognitive processes essential to unlocking true creativity, because…

Understanding how Generative AI fits into the energy- and attention-constrained processes of our creative human brain can, of course, help us build much more effective settings and opportunities for augmenting our existing endeavours to produce novel, original and ultimately highly useful - and hence innovative - solutions to complex challenges.

Novelty & appropriateness of creative ideas (Source:

Here are three concrete ways you can use Generative AI systems to enhance your creativity:

  1. Don't start your creative process by simply asking AI systems for original ideas or solutions. Rather, start at the natural beginning of any creative process, namely by asking for help in clearly articulating the actual problem at hand. Missing this step not only reduces the practical relevance of your AI-generated ideas, it also deprives you of the opportunity to let AI help you activate your own subconscious solution space - effectively cutting yourself out of the creative process.
  2. Only once equipped with an accurate, comprehensive and consistent understanding of the challenge at hand are you able to take full advantage of AI to explore the intersection between your own and the collective solution space - for example, by asking bespoke, pre-trained or intelligently pre-prompted (primed) LLM systems for routes to potential solutions based on specific features and patterns you have already identified as relevant to actually solving the problem (a minimal sketch of this two-step workflow follows after this list). Of course, you were only able to identify these patterns because your own subconscious (creative) neural network gave you clues in the form of first ideas or rough, emotionally led directions.
  3. Finally, don't stop at these first, not yet fully formed ideas, but use the AI-enhanced, expanded solution space to help you trigger new ideas that are uniquely your own. This can only happen when you allow this hybrid, human-machine co-creation process to trigger true emotional responses in you, based on your uniquely human ability to empathise with everyone else who might find themselves facing the same challenge. Of course, this completely alters the perspective on the actual role and function of Generative AI: from a truly creative perspective, it is not just a tool for efficiently generating more random (irrelevant) ideas, as commercial applications like Jasper, Copy.ai or Rytr implicitly suggest. At a deeper, actually meaningful level, AI becomes "a mirror in reflecting on our uniquely human, self-determining, and creative selves" in the way that it not only allows us to better connect with the pre-existing, solution-relevant patterns we already hold within our experience-formed neural networks, but also allows us to emotionally connect with everyone else and their expressed experiences in similar situations, as captured in the data LLMs have been trained on. Using this unique opportunity to connect, both with ourselves and with everyone around us, is ultimately the essential precondition for refining any abstract, not yet fully formed idea into a practically relevant and highly useful innovative solution.
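As a rough illustration of what steps 1 and 2 might look like in practice, here is a minimal sketch. The ask_llm helper, the function names and the prompt wording are all hypothetical placeholders for this example, to be wired to whichever model or API you actually use.

```python
# Minimal sketch of the two-step workflow described above.
# `ask_llm` is a hypothetical placeholder: connect it to whichever model or
# API you prefer. The prompt wording is illustrative, not prescriptive.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("Connect this to your preferred LLM.")

def articulate_problem(raw_notes: str) -> str:
    """Step 1: use the model to state the problem clearly before asking for ideas."""
    return ask_llm(
        "Help me articulate this challenge precisely. Restate it as a clear "
        "problem statement, list the constraints and the people affected, and "
        "ask me clarifying questions where my notes are vague:\n\n" + raw_notes
    )

def explore_solution_space(problem_statement: str, my_hunches: list[str]) -> str:
    """Step 2: explore directions anchored in patterns you have already noticed."""
    return ask_llm(
        "Given this problem statement:\n" + problem_statement +
        "\n\nand these patterns I believe are relevant:\n- " + "\n- ".join(my_hunches) +
        "\n\nSuggest several distinct solution routes that build on these patterns, "
        "and note which assumptions each route depends on."
    )

# Step 3 stays with you: react to the proposed routes, notice which ones
# resonate emotionally, and refine the idea that is genuinely your own.
```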

This not only underscores the importance of the role humans will continue to play in any future AI-augmented creative process, but also shows us a clear path for how we should educate, train and up-skill anyone involved in creative work in the era of Generative AI:

It all comes down to helping us better understand ourselves as human beings, and, from there, to understanding how we can better connect with other human beings and the rest of the living planet around us. To accomplish that, we need all the help we can get, which is why future AI development should mainly focus on this: creating AI systems that can help us develop the relevant meta-cognitive, emotional and reasoning skills to truly understand ourselves, so that tapping into our uniquely human ability to empathise and connect with others based on shared human values can become the foundation of everything we do and create. Or, in the words of Ethan Hawke (thanks for sharing this, Lia Mkhitaryan):

"If you want to help your community, you have to express yourself, and to express yourself, you have to know yourself."

Please let me know if you enjoyed these ideas by subscribing to my "My AI & I" Substack: CoreCortex.ai


Natalia Bielczyk, PhD

Neuroscientist turned Career & Business Strategist | The Future of Work | Let's Find Your Competitive Advantage & Navigate the AI Era Together | Bootstrapping Solopreneur | Author | Speaker | Founder @ Ontology of Value

7 months

Very interesting article! A thermodynamic view of creativity in neural networks - I love it! My Masters thesis in Psychometrics was dedicated to testing creativity; my supervisor had a vision of creating one culture-free task, a parallel to the Raven's Matrices test for IQ, to measure divergent thinking, also known as creativity. Interestingly, our attempts failed as we could not properly standardize the test: our results were bimodal. It seemed that 80-90% of the subjects were extremely non-creative (achieving 1-4 points in the test) while the remaining 10-20% were extremely creative (scoring 20 points or more). There was no middle ground.

Allie Miklasova

Coach ∞ Psychologist | Brand and Content | Upheal

1 year

Thanks, I enjoyed this.

Paul Smith

Product Designer | Generative AI

1 year

Yes, it can. Yet, while there are teams dedicated to improving AI’s accuracy, I have yet to hear of any dedicated to improving its creativity. (Perhaps there are?) That’s needed because creativity is the essence of its power and appeal.
