The AI Mirror by Shannon Vallor
This morning a co-worker of mine, Charlie Miles, shared with me an excellent interview article, "AI Is the Black Mirror," by Philip Ball at Nautil.us.
In this article, Philip interviews Shannon Vallor, who has published a key book on AI ethics called "The AI Mirror."
I was so blown away by the interview that I wanted to share some of the key sound bites I captured. I recommend reading the full article linked above.
Some key sound bites I appreciated:
“It’s a view that could encourage us to relinquish our agency and forego our wisdom in deference to the machines.”
TD: The issue of agency is, for me, the big one. While we have faced technology disruption many times throughout human history, never before have we faced this question: giving over our decision rights to machines. It is a question I recently dove deep on in this article: AI Ethics (Adoption & Agency) – The Big Questions.
“We’re at a moment in history when we need to rebuild our confidence in the capabilities of humans to reason wisely, to make collective decisions,” Vallor tells me. “We’re not going to deal with the climate emergency or the fracturing of the foundations of democracy unless we can reassert a confidence in human thinking and judgment. And everything in the AI world is working against that.”
TD: The problem is that we do not have a great track record of making sound decisions when driven by the "Gold Rush" and "Arms Race" mentalities of greed and fear. We tend to do really awful things when this is our motivation unless we have good leadership. (Need I say more?)
“That’s my biggest concern,” she agrees. Every time she gives a talk pointing out that AI algorithms are not really minds, Vallor says, “I’ll have someone in the audience come up to me and say, ‘Well, you’re right but only because at the end of the day our minds aren’t doing these things either—we’re not really rational, we’re not really responsible for what we believe, we’re just predictive machines spitting out the words that people expect, we’re just matching patterns, we’re just doing what an LLM is doing.’”
TD: That is actually very profound in a society that has forgotten how to think critically. This topic is explored in an excellent book, "The Shallows: What the Internet Is Doing to Our Brains," by Nicholas Carr, one of the speakers we have had at our annual conference in the past.
“Originally, AGI meant something that misses nothing of what a human mind could do—something about which we’d have no doubt that it is thinking and understanding the world. But in The AI Mirror, Vallor explains that experts such as Hinton and Sam Altman, CEO of OpenAI, the company that created ChatGPT, now define AGI as a system that is equal to or better than humans at calculation, prediction, modeling, production, and problem-solving.”
TD: Hmm, that is not self-awareness or sentience. By that measure this is arguably true already, so they can claim to have achieved AGI simply by setting the target a lot lower.
“That’s where the mirror metaphor becomes helpful,” she says. “A mirror image can dance. A good enough mirror can show you the aspects of yourself that are deeply human, but not the inner experience of them—just the performance.” With AI art, she adds, “The important thing is to realize there’s nothing on the other side participating in this communication.”
TD: Wow! Not AGI at all.
Vallor tells me she once tried to explain to an AGI leader that there’s no mathematical solution to the problem of justice. “I told him the nature of justice is we have conflicting values and interests that cannot be made commensurable on a single scale, and that the work of human deliberation and negotiation and appeal is essential. And he told me, ‘I think that just means you’re bad at math.’ What do you say to that? It becomes two worldviews that don’t intersect. You’re speaking to two very different conceptions of reality.”
TD: For me, one of the biggest dangers is that as a human race we continue to rely on and become dependent on technology, losing critical skills even while using that technology to our advantage. For example, most of us can no longer survive off the land without a grocery store or do complex math in our heads.
We are basically going to end up like the Disney Pixar movie WALL-E.
That’s why a backlash against it, however understandable, could be a problem in the long run. “I see lots of people turning against AI,” Vallor says. “It’s becoming a powerful hatred in many creative circles. Those communities were much more balanced in their attitudes about three years ago, when LLMs and image models started coming out.”
TD: Without question, all available research points to the true barriers to AI adoption being issues of ethics, lack of trust, governance, and privacy. In short, it's a people issue, not a technology challenge.
Once again, I appreciated having this article shared with me, and I thought you all might enjoy its depth and transparency.
Enjoy
Troy DuMoulin