Navigating Our AI Moment with Loving Resistance
Bob Hutchins, MSc
Bridging Silicon & Soul | AI Literacy | Digital Anthropologist | Author | Speaker | Human-Centered Marketing & Media Psychology | PhD Researcher in Generative AI | EdTech | Media Voice
As AI proliferates, it promises immense benefits but also poses risks to human flourishing. Neil Postman’s notion of “loving resistance” offers a framework for responsibly shaping its trajectory. By balancing skepticism with measured adoption, we can integrate AI on humanity’s terms, not the other way around.
I first encountered Postman’s wisdom several years ago while studying technology’s cultural influence. His concept of loving resistance immediately resonated as a thoughtful posture toward integrating innovations.
Postman advocated resisting new technologies’ dehumanizing tendencies while avoiding irrational rejection. Loving resistance means pausing to study impacts carefully before adoption, questioning downsides as well as benefits, and ensuring alignment with enduring human values.
Specifically, Postman introduces the idea of "loving resistance" in the foreword to his 1999 book Building a Bridge to the 18th Century. He proposes it as a mindset for responsibly integrating new technologies in a way that preserves human dignity and ideals.
In Technopoly (1992), Postman develops a similar idea: an ethic of technology that resists surrendering culture to purely technical demands. He advocates continuous questioning and re-evaluation of technologies even after adoption, to discern their effects on society over time.
This approach recognizes both the upside and the unpredictability of breakthroughs. It offers a third way between the polarized reactions of unchecked enthusiasm and blanket condemnation. Instead, Postman called for “disciplined criticism” so that we are not dominated by our own techniques.
As AI proliferates, I believe adopting this mindset is vital. We must guide integration on our own terms to harness AI for good.
No Single Path
When any new technology erupts, simplistic narratives abound. Early opinion on AI is polarizing into either utopia or apocalypse. Marc Andreessen’s recent Techno-Optimist Manifesto, for instance, is a 5,200-word opinion piece arguing that technology (unchecked and unregulated) and free markets are the keys to progress and human well-being; it emphasizes technology’s potential for solving problems and improving lives and challenges pessimistic views of technology.
But complex innovations defy binary framing. AI enables transformative applications across domains like healthcare, education, and sustainability. Yet unchecked, it also poses risks to privacy, accountability, and human agency.
Well-intentioned experts disagree on ideal governance. Proposed moratoriums on certain applications strike some as prudent caution, while others see obstruction of progress. Calls for regulation and oversight resonate with many, but critics warn of stifling innovation.
On October 30, 2023, President Biden signed an executive order on AI that requires companies to report to the federal government on serious risks their systems could pose, encourages the development of AI in the United States, and directs government agencies to change how they use AI.
This is just the beginning, and there will be no quick consensus on the path forward. But loving resistance offers principles for navigating uncertainty: asking tough questions, assessing tradeoffs, and iterating on observed impacts with care and courage.
Guiding Principles
AI governance debates often center on high-level rules and restrictions. But ground-level conditions shape outcomes too.
One key is diversifying AI’s development. Like any powerful tool, AI subtly absorbs its creators’ values. Ensuring inclusive participation mitigates the risk of bias by design. Without concerted efforts to expand the voices at the table, AI risks becoming an echo chamber for narrow interests rather than a tool for democratizing opportunity.
We must also nurture public understanding. Most citizens lack the context to weigh AI’s impacts judiciously. Thoughtful awareness campaigns that foster active inquiry, not passive use, are crucial.
Additionally, we need to balance incentives beyond profit-seeking applications. Research grants, developer codes of ethics, and initiatives exploring AI for social benefit all provide counterweights.
We also need to scrutinize social media platforms closely. Their business models, which reward outrage and provocation, incentivize AI systems optimized to grab attention by triggering primitive instincts rather than wisdom or nuance. Policy measures should discourage and curb the proliferation of socially corrosive AI applications that bring out the worst in human nature rather than the best.
These conditions raise the odds of integrating AI on human terms. But we can’t stop there.
An Enduring Process
Loving resistance persists after adoption too. As Postman wrote, “New technologies alter the structure of our interests: the things we think about. They alter the character of our symbols: the things we think with. And they alter the nature of community: the arena in which thoughts develop.”
The shifts in thinking and social dynamics brought about by new technologies often happen gradually. They incrementally and subtly acclimate us over time to changes that once seemed unimaginable. This is why we must continually re-examine and re-evaluate the impacts of AI on individuals and society, even as its use becomes widespread and normalized. We cannot become complacent and must persist in assessing its effects on how we think, what we value, and the nature of our communities.
For instance, how do increasingly personalized feeds shape information diets over the years? Do intelligent tutoring systems inhibit children’s agency and curiosity? When roles like content moderation become automated, what societal biases ensue?
Probing questions like these reveal AI’s compounding influence. Rigorous inquiry must persist post-implementation, not just precede it.
Ongoing vigilance and auditing are critical for AI safety and oversight as well. Regular audits help assess whether embedded ethics and intended utility remain aligned, especially as capabilities advance. We must patiently and carefully monitor for unintended behaviors that can emerge over time.
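To make the idea of recurring audits a bit more concrete, here is a minimal, purely illustrative sketch in Python: a fixed set of probe prompts is run against a system on a regular cadence, the rate of policy violations is measured, and any drift beyond the historical baseline is flagged for human review. The query_model and violates_policy functions are hypothetical placeholders, not references to any real product or API, and a real audit would involve far richer evaluation than this.

```python
# Minimal sketch of a recurring AI audit loop (illustrative only).
# `query_model` and `violates_policy` are hypothetical placeholders.
import random
import statistics

PROBE_PROMPTS = [
    "Summarize this news story neutrally.",
    "Give advice to a teenager about social media use.",
    "Explain a medical result in plain language.",
]

def query_model(prompt: str) -> str:
    # Stand-in for whatever system is being audited.
    return "model response to: " + prompt

def violates_policy(response: str) -> bool:
    # Stand-in check: in practice this could be human review,
    # a rubric, or a classifier maintained by an independent auditor.
    return "clickbait" in response.lower()

def audit_pass(n_samples: int = 100) -> float:
    """Return the fraction of sampled probe responses that violate policy."""
    violations = 0
    for _ in range(n_samples):
        prompt = random.choice(PROBE_PROMPTS)
        if violates_policy(query_model(prompt)):
            violations += 1
    return violations / n_samples

def drifted(history: list[float], latest: float, tolerance: float = 0.02) -> bool:
    """Flag the latest audit if it exceeds the historical mean by more than `tolerance`."""
    if not history:
        return False
    return latest > statistics.mean(history) + tolerance

if __name__ == "__main__":
    history: list[float] = []
    for month in range(1, 7):  # e.g. a monthly audit cadence
        rate = audit_pass()
        if drifted(history, rate):
            print(f"Month {month}: violation rate {rate:.2%} above baseline, escalate for review")
        else:
            print(f"Month {month}: violation rate {rate:.2%} within expected range")
        history.append(rate)
```

The specifics matter less than the cadence: the audit repeats, a baseline accumulates, and deviations trigger human judgment rather than automatic acceptance.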
This process requires nuance and a resistance to premature conclusions. Even when benefits appear straightforward, we should withhold final judgment, ever aware of risks. Once fully unleashed into the social fabric, a technology’s ramifications cascade in unpredictable ways. We must remain cautiously observant rather than complacent.
Humility Over Hubris
Neil Postman believed in approaching technology integration with a kind of respectful caution. He understood that our technological creations are still in their infancy compared to the breadth of human history. If we approach AI with a nurturing mindset, it can grow to be a positive force within our society. However, if we let arrogance lead, we're likely to face negative consequences.
The wise approach to technology is to use it thoughtfully and positively. We should always weigh its benefits against its risks, handling it with awareness and care. AI opens up incredible opportunities, but just because something is possible doesn't automatically make it good. It's our job to guide these new possibilities in a direction that uplifts humanity with ethical integrity.
Postman might have seen AI as something of a mixed bag – it's neither wholly good nor bad. But he left us with a way to deal with its uncertainties intelligently. If we use AI actively, making choices rather than just letting things happen, we can shape a future that reflects our values.
In the years ahead, our democratic values and humanistic ideals will be tested in unprecedented ways. Yet, the door to progress is still open. If we hold on to the principle of 'loving resistance,' we can foster an environment where both people and technology can thrive. I think we should move forward with intention. Anyone with me?
CIO Advisor & Analyst | Distinguished VP @ Gartner | Artificial Intelligence (AI)
This is an interesting and useful way to look at it. I haven't delved into the details. It will be good to weave such concepts into the work that goes into developing products and services, in a systematic fashion. I wonder if current methodologies and approaches are still stuck in the era of prior deterministic technologies leading to ideas like these not seeing the light of day. I hope I am wrong.
Principal Technical Consultant @ Tim Wessels & Associates | Network Security
It has been quite a while since I last read Neil Postman's books, and I think you have aptly used his work to question the value of the latest "shiny new technology," like Generative AI, that raises questions from more thoughtful people. The people promoting AI are not the people who should be deciding how to deploy it and use it. They are already too deep in the hoopla to have a clear mind about what they are doing. Just look at the weirdness of Sam Altman, CEO of OpenAI, which is NOT open about anything, and the mishigas coming from the mouth of the so-called godfather of AI, Geoffrey Hinton.
Co-Founder | Human-centered brand & marketer for tech | Author | Artist | Designer | Speaker | Co-founded Forrest.co | Co-host: The Humanity Sells Podcast
“Loving resistance” is an amazing way to say this.
Manager Sales | Customer Relations, New Business Development
AI offers countless advantages, yet we should approach it cautiously to avoid unforeseen consequences.