The End of Human-Centricity?
A Quick Reflection on the Human in the age of AI
As a parent and anthropologist working in innovation, leadership and futures, I often find myself reading and reflecting on AI, or participating in discussions and meetings about the role of Artificial Intelligence in the workplace and in education. In a recent meeting I was particularly interested in a comment about how students are being asked to reflect on how much of their work is generated by AI tools, and on the possible implications of this. It got me thinking: perhaps we as a society should also reflect on the implications of our work relying solely on human cognition, with all its biases, blind spots, and limited worldview.
The institution, like so many individuals and organizations in the age of AI, now positions its approach to AI as "human-centric." I am constantly drawn back to the question: what do we mean when we say human-centric, and what does human-centric really mean in the age of AI? Given the amount of time devoted to questioning AI, am I the only one wondering whether we shouldn't devote more time to critiquing human-centricity as well?
My immediate reaction is to connect the term to human-centric innovation or human-centric design, an approach that has long been celebrated for prioritizing human needs and behaviors in the innovation process. However, it is important to recognize that human-centric innovation is rooted in a specific Californian tech-utopian vision of humanity, one shaped by the culture of Silicon Valley. This model has proven flawed, particularly in its "move fast and break things" philosophy, which often disregards broader societal and ecological impacts.
By focusing narrowly on individual human desires, it has frequently failed to consider the interconnectedness of human and non-human systems, leading to unintended consequences that undermine long-term sustainability and equity. But in the context of AI, we must ask: Are we truly considering the full spectrum of human experience when we use this term? Or are we relying on outdated, narrow definitions of what it means to be human, shaped by cognitive biases and cultural blind spots?
As humans, we are deeply flawed—our brains, while extraordinary, are prone to error. Biases, mental shortcuts, and deeply ingrained societal norms can all skew our perspective. AI, in many ways, holds up a mirror to these limitations, exposing the need for us to question the human-centric models we have been relying on for decades.
This is where I see an opportunity for what I would call a more Conscious Innovation approach. Instead of clinging to strictly human-centered approaches, we need to embrace a more polycentric view, one that accounts for the interconnectedness of humans, ecosystems, and even the technology we create. This is not about abandoning human needs, but rather about expanding our perspective to design in ways that are more ethical, sustainable, and inclusive.
By expanding our understanding of innovation to be more inclusive of non-human elements, we can better equip ourselves to face the complex challenges ahead. The future of innovation lies in moving beyond the traditional human-centric models. It’s time we rethink what it means to design for a world where humans and AI coexist, pushing our boundaries to create more thoughtful, ethical, and sustainable solutions that benefit not just us but the entire ecosystem.
PS: I think any comment I make on AI should carry the following caveat. There is a growing expectation, one the philosopher Bernard Williams might have critiqued as a "fetish for assertiveness," that everyone must have a clear, definitive opinion about AI. It is almost as though not having a strong stance is seen as a weakness. However, I believe this can be counterproductive, especially given how fast the landscape of AI is evolving. The complexity of this technology calls for a degree of humility and reflection, rather than a rush to assert certainty.