Generative AI: A dance of minds
Dinand Tinholt
Enabling data-powered transformation | Data & Analytics | Artificial Intelligence | Data Strategy & Governance
In a study published by Eloundou et al., it is estimated that 80% of the U.S. workforce could have at least 10% of their work tasks affected by #AI . This impact falls increasingly on white-collar jobs, the work of knowledge workers: the higher the education requirements for a job, the bigger the impact #generativeai will have.
I'm an optimist in this respect and think this is a good thing, as technology augments humans instead of replacing them. To understand why, it's worth taking a small journey into the delicate balance between generative AI, mind, consciousness, and intelligence: the theory of mind and its implications for artificial intelligence systems like GPT.
First, some background for the uninitiated: the theory of mind refers to the human ability to understand that others have thoughts, feelings, motivations, and perspectives that are different from our own. A baby laughing when mom makes a silly face, a child lying to avoid punishment, an adult tactfully addressing a delicate subject – these are all manifestations of our understanding of others' minds.
For years, we’ve seen AI developers endeavor to imbue their creations with a similar ability. But here's the zinger: can a machine really comprehend the subjective experiences of others when it lacks its own subjective experience?
Generative Pre-trained Transformers, like GPT, are a game-changer in the field of AI. Their training on vast amounts of text enables them to produce human-like text. But let's be clear - GPT, even in its 4th iteration, doesn't have a theory of mind. It doesn't understand the content it generates. Instead, it uses patterns in the data it was trained on to predict what token comes next in a sequence of text.
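To make that concrete, here is a minimal sketch of next-token prediction. It uses the openly available GPT-2 model via the Hugging Face transformers library as a stand-in, since GPT-4's weights are not public; the mechanism is the same in kind, just at a vastly smaller scale:

```python
# A minimal sketch of next-token prediction, using GPT-2 as a stand-in.
# Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The theory of mind refers to the human ability to"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# The logits at the last position score every vocabulary token as a
# possible continuation; softmax turns those scores into probabilities.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# Show the five most likely next tokens - no comprehension involved,
# just pattern statistics learned from the training data.
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {prob:.3f}")
```

Everything GPT 'says' is built this way: rank every token by probability, pick one, append it, repeat. The apparent understanding emerges entirely from that loop.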
However, this doesn't mean that the theory of mind is irrelevant to GPT. On the contrary, the connection is profound and its implications are far-reaching.
First, GPT's language generation abilities have been used to create convincingly human-like text, often leading users to anthropomorphize it. This happens due to our theory of mind - we instinctively assign mental states to entities that don't have them. Although we know that GPT doesn't truly understand, it’s tempting to think that it does because it can produce output that feels eerily 'understanding'. This misattribution has significant implications, particularly when it comes to our interactions with and expectations from AI.
Second, GPT's ability to generate coherent and contextually relevant responses hinges on our own application of the theory of mind. When we speak to an AI, we treat it as though it has a mind, projecting our knowledge, perspectives, and intent onto it. Without this treatment, GPT wouldn't work as intended. It is our collective theory of mind that enables GPT to 'appear' intelligent.
Finally, let’s tackle the elephant in the room: should we try to create AI that truly has a theory of mind? While it's an enticing prospect, it’s also fraught with ethical conundrums. If an AI truly understands others’ perspectives, would it also have its own perspectives? Could it experience consciousness, or even suffering? And if it could, what would our moral and legal obligations to such a being be?
As we continue to leverage AI technology, the theory of mind will play a crucial role in shaping our approach. Whether or not we should ever aim for an AI with a genuine theory of mind remains a contentious issue. But one thing is clear: As AI grows more sophisticated, our understanding of mind, consciousness, and intelligence will be continually challenged and refined.
In the meantime, let's remember this: GPT is a tool, not a mind. It’s on us to use it wisely, responsibly, and with a clear understanding of its limitations. After all, we're the ones with the minds here.