The Opportunities Lost with Human Replacement
Original post on Medium: The Opportunities Lost with Human Replacement | by Sam Bobo | Jan, 2025 | Medium
“The Giver” by Lois Lowry is a dystopian novel set in a seemingly perfect society where pain, fear, sadness, and all emotions are suppressed. The film adaptation further depicts the removal of emotion by casting scenes in grayscale, so as to eliminate the joy derived from the sensation of color in the world around the characters. The story follows Jonas, an eleven-year-old boy who is selected to be the Receiver of Memory, a role that involves receiving memories of the past from the current Receiver, known as The Giver.
As Jonas learns about the true nature of his society through these memories, he discovers the depth of emotions, colors, and experiences that have been eliminated to maintain order and sameness. As Jonas receives memories, he is overwhelmed by the sheer emotion flooding his person, experiencing it in massive extremes because of his prior deprivation. This newfound knowledge of emotion leads him to question the community’s rules and the cost of their so-called perfection.
Parallels can be drawn to the massive acceleration of agentic AI capabilities flooding the Artificial Intelligence community! Recently we have heard Marc Benioff describe deploying an army of autonomous agents to handle inbound contact center inquiries, with value scaling ad infinitum as these agents are deployed in lieu of human representatives, juxtaposed against the growing fear of a populace grappling with workplace replacement by AI. The reality is that for-profit organizations are incentivized to maximize profit: they have an obligation to stakeholders to continuously increase the value generated by the business as a whole, including by cutting costs, and that includes favoring AI over humans wherever automation is obvious.
To start with why, I will make a bold claim: the savings from eliminating roles will outweigh the cost of errors made by an AI system, which, over time, will only gain more “intelligence” (quotes intentional, as AI does not possess intelligence; it is a probabilistic machine). What do I mean? Take any routine task done by a human, say, a human resources worker filtering through qualifications for a potential hire. A human may spend 15 seconds reading a resume, but an AI agent might not only read faster (on the order of milliseconds) but also retain the “memory” of each candidate and correlate it with the job description, scaled across the entire candidate pool. While the AI might overlook qualified candidates, the fact of the matter is that the person ultimately offered the position might still suffice. Is this ethical? Many argue no, citing biases and the like; however, the time saved filtering those resumes may be enough to warrant replacing the human with an AI agent.
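To make the mechanics concrete, here is a minimal sketch of the kind of resume-to-job-description correlation such an agent might perform, using sentence embeddings and cosine similarity. The model name, sample data, and scoring approach are my own illustrative assumptions, not a description of any specific vendor’s system.

```python
# Minimal sketch: ranking resumes against a job description by semantic
# similarity. Model name, sample data, and the scoring approach are
# illustrative assumptions only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model

job_description = "Seeking an HR analyst with recruiting experience and ATS familiarity."
resumes = {
    "candidate_a": "Five years of recruiting experience; administered an applicant tracking system.",
    "candidate_b": "Backend engineer focused on distributed systems.",
}

# Embed the job description once, then score every resume against it.
job_vec = model.encode(job_description, convert_to_tensor=True)
for name, text in resumes.items():
    score = util.cos_sim(job_vec, model.encode(text, convert_to_tensor=True)).item()
    print(f"{name}: similarity {score:.2f}")
```

A ranking like this runs in milliseconds per resume, which is exactly the efficiency argument above; it is also exactly where qualified candidates can be silently filtered out.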
Let’s visualize, for a moment, a society that accelerated Artificial Intelligence and agentic capabilities to the point of replacing nearly all mundane tasks (and the positions that performed those tasks). Economists would argue that, with retraining (an assumption), our production possibility frontier would increase drastically, higher profits would be realized, and society would continue to push the boundary of innovation. Sounds idealistic, correct? Let’s shift slightly to human interactions. What happens when conversations are offloaded to bots and agents? Content can be massively generated, including art, videos, and the spoken word, simply by chatting with AI bots in a conversational manner. Post COVID-19 pandemic, in an age of massive social media consumption and heightened pressure to produce and stand out, our world is entering a loneliness epidemic, as noted in a Harvard Graduate School of Education study. Truthfully, it seems as though agentic capabilities and the replacement of some human-based work might be sending us into an emotionless, lonely, grayscale world like the one in The Giver.
Over the past few months, I’ve engaged with people and articles that gave rise to this analogy and prompted its creation. My intention is to share where humans cannot be replaced and the dangers of viewing AI as a replacement for human capabilities rather than as a tool.
The grand point I am seeking to make is that removing emotion and descending into a “utopian” society of efficiency removes the opportunity to apply what makes humans human to shift the trajectory of an individual for the better (yes, one could argue the counterpoint, but I am an optimist). AI practitioners should be shifting our mindset toward depicting AI as a tool rather than anthropomorphizing it.
I recall my time back at IBM, when Conversational AI capabilities first emerged and our claim was that AI was scaling human expertise. That statement defines a paradigm in which humans use AI as a tool rather than as a replacement.
Yes, we all scream at call center agents because we are upset and frustrated about a situation; however, emotional matching and the satisfaction of helping another person cannot be replaced. Instead, equip the contact center representative with AI that extracts information, summarizes it, and presents options to communicate back to the customer, since interactions are not one-size-fits-all (see the sketch below). This, again, augments the representative’s skills while maintaining the human element. In the classroom, use AI to help research a paper or generate study cards for memorization, not as a Teaching Assistant.
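Returning to the contact center example, here is a minimal sketch of that assist pattern, assuming the OpenAI Python client and a hypothetical model choice; any LLM provider could fill the same role. The point is the shape of the workflow: the AI drafts, the human decides.

```python
# Minimal sketch of the "augment, don't replace" pattern: a tool that
# summarizes a call transcript and drafts options for the human
# representative to choose from. Client and model name are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def assist_representative(transcript: str) -> str:
    """Return a summary plus suggested next steps for the representative to review."""
    prompt = (
        "Summarize this customer call transcript in two sentences, then list "
        "three possible next steps the representative could offer:\n\n" + transcript
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# The representative reads the suggestions, picks one, and delivers it in their
# own words; the empathy and the final decision stay with the human.
```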
Let’s take “The Giver” as a lesson that utopian societies are not always utopian. Humans are humans and cannot be replaced; doing so diminishes the skills, perspectives, and differences that make us unique and that, through collaboration, let us innovate and push the world forward in unfathomable ways!