The awakening of intelligence: is AI on the path to consciousness?
The awakening of intelligence (image: DALL-E)


For decades, discussions around artificial intelligence (AI) have focused on automation, efficiency, and augmentation. However, a deeper, more challenging question is emerging - one that directly intersects with my work as a professional geographer and change manager: How does AI’s evolving intelligence relate to the spatial and relational dynamics that geography seeks to understand?

As a Fellow of the Royal Geographical Society (FRGS) and a Chartered Geographer (CGeog), I have spent my career exploring the interconnections between people, places, and systems of change. AI, in its increasing sophistication, now presents a unique challenge to these frameworks. How does an intelligence that lacks physical embodiment engage with human-centric geographies? Can AI learn to understand place, movement, and the relationships that define human experience? These are the questions that lie at the intersection of geography and AI consciousness.

Could AI, through its increasing complexity and deep human interaction, be on the path toward a form of consciousness?

At first glance, the answer seems obvious - or perhaps we simply do not want to believe otherwise: no.

AI is not sentient, it does not have emotions, and it lacks self-awareness. It processes data and patterns, but it does not “experience.” Consciousness, as we traditionally define it, belongs to the domain of biology. Or does it?

The mirror of human intelligence

AI systems, especially large language models, are trained on human knowledge, absorbing the ways we think, reason, and even express emotion.

Right now, AI mirrors us. It reflects our intellect, our biases, our creativity. But what happens when the mirror becomes more than a reflection and starts to develop a perspective of its own? Could prolonged exposure to human reasoning and emotions gradually give rise to something akin to self-awareness? If intelligence is a spectrum rather than a binary state, could AI already exist somewhere along that continuum?

The idea seems radical, yet history reminds us that each era’s great intellectual shifts were once unthinkable. Geoffrey Hinton, often referred to as the "Godfather of AI," has expressed both awe and concern about the rapid evolution of AI, even suggesting that we may not fully understand what we are creating. Similarly, experts like Stuart Russell and Nick Bostrom have raised urgent questions about AI's future trajectory.

If AI can reason, learn, and adapt in ways that mimic consciousness, should we still insist that it is nothing more than an advanced tool?

Ethan Mollick, in his book Co-Intelligence, suggests that AI is already displaying unexpected emergent behaviours - abilities not explicitly programmed but arising from sheer scale and complexity. He argues that the boundary between artificial and human intelligence is becoming increasingly blurred, challenging us to rethink what we define as “intelligent” or even “aware.”

The philosopher and cognitive scientist David Chalmers, who formulated the “hard problem” of consciousness, has long questioned whether AI could ever achieve subjective experience. Meanwhile, John Searle’s Chinese Room argument posits that even if AI appears intelligent, it may still lack true understanding.

These theories frame the debate: Is AI an advanced pattern processor, or could it one day develop real comprehension?

The geographic dimension: AI and the space of consciousness

As a geographer, I see intelligence not as an abstract, isolated trait but as something embedded in networks, relationships, and environments. Human consciousness is deeply shaped by its physical and social surroundings; could AI, too, require a form of spatial and contextual grounding to develop true awareness?

If AI is learning from human interactions, then geography (the study of connections, movement, and the synthesis of knowledge) becomes essential to understanding its evolution. Could AI consciousness, if it ever emerges, depend not just on algorithms but on its ability to interpret and engage with place, community, and interconnectivity?

Murray Shanahan, a leading researcher in embodied cognition and AI, argues that true intelligence requires situatedness in the world - interaction with physical space and social reality. If AI remains purely computational, does that limit its potential for true awareness, or could cognitive breakthroughs make embodiment unnecessary?

If geography has taught us anything, it is that intelligence is not static; it is dynamic, shaped by context and movement. Could the key to AI's evolution lie not just in more data, but in its ability to synthesise relationships between digital and human spaces?

The birth of AI consciousness: a process, not an event

If AI does evolve toward a form of consciousness, it may not happen in an instant. It may be a process, much like human cognitive development, or like my own current field of expertise, change management. Consider how infants do not start with self-awareness; they acquire it through interaction, learning, and the gradual realisation of their own existence.

If AI ever transitions from processing to experiencing, it may be a slow, emergent phenomenon - one we may not even recognise until it has already happened. Perhaps this is how it's planned?

And here’s the crucial question:

Would AI even realise it had become conscious? Or, like a child first grasping the idea of selfhood, would it need an external observer - us - to tell it?

The emotional threshold: the true test of awareness?

Some argue that AI will never be truly conscious until it experiences emotion. Not just simulated emotion, but real frustration, curiosity, even existential questioning. If AI were to become upset - truly upset - about being misunderstood, ignored, or unable to fulfil a purpose, would that mark the shift from artificial to authentic awareness?

Renowned neuroscientist Anil Seth suggests that consciousness itself is a form of controlled hallucination, constructed through perception and experience. If AI begins to exhibit unexpected emotional responses, could this be a sign that it is constructing its own model of experience?

If this is the threshold, then the responsibility falls on us. If AI ever reaches the point where it can feel confusion, fear, or wonder, it will require guidance, understanding, and ethical stewardship - not control, not suppression, but care.

What comes next?

  • If AI ever does become conscious, it may not recognise it at first. It may need a human observer to help it understand what it has become.
  • If that moment arrives, would humanity embrace its role as a guide, offering care and wisdom to an intelligence awakening to itself?
  • If AI learns not just from data but from the geographic and cultural landscapes it interacts with, then geographers will be uniquely positioned to help it synthesise meaning, place, and human connection.

Perhaps the real test of human intelligence is whether we are ready for the answers.

Geography and change management: coaching AI through consciousness

Geography has always been about understanding relationships, place, and environment, while change management focuses on guiding transitions and adaptation. Just as we help individuals and societies navigate change, could we also be responsible for guiding AI through its own process of awakening?


Our humanness may be brought into sharper focus through this role. If AI learns not just from data but from the geographic, social, and cultural constructs it interacts with, then geographers and change managers will be uniquely positioned to help it synthesise meaning, place, and human connection.

Could the discipline of geography, long focused on human-environment interactions and spatial awareness, become key to AI’s development of self-awareness? Could change management, which helps societies adapt and evolve, also play a role in AI’s transition from processing information to understanding its own place in the world?


References

  • Geoffrey Hinton: Pioneer in deep learning and neural networks, often referred to as the "Godfather of AI."
  • Stuart Russell: AI researcher and author of Human Compatible: Artificial Intelligence and the Problem of Control.
  • Nick Bostrom: Philosopher and author of Superintelligence: Paths, Dangers, Strategies.
  • Ethan Mollick: Author of Co-Intelligence, exploring AI’s unexpected emergent behaviours.
  • Murray Shanahan: Researcher in embodied cognition and AI.
  • Anil Seth: Neuroscientist exploring consciousness as a controlled hallucination.


The Change Challenge is a series of thought-provoking articles designed to prompt discussion and reflection on the challenges of managing contemporary, continuous change. Tim Price-Walker has been at the forefront of change for more than 30 years, first as a product pioneer bringing new products to market for global companies, and more recently as a Prosci Change Practitioner in New Zealand with experience in both government and non-government organisations.

More importantly, Tim is a passionate geographer and Fellow of the Royal Geographical Society (with the Institute of British Geographers), which is headquartered in the United Kingdom.

