Why Improving Trust in AI = Improving Trust in Societies
David Bray, PhD
Principal, CEO, Global Keynoter | Named One of "24 Americans Changing the World" by Business Insider | Leader of Transformative Change in Turbulent Environments Involving Tech, Data, & People
Back in May, I had the honor of providing the opening keynote for an interdisciplinary event organized by Prof. JP Singh, Distinguished University Professor at the Schar School of Policy and Government at George Mason University and Richard von Weizsäcker Fellow with the Robert Bosch Academy in Berlin. If you haven't met JP, he is definitely someone to meet - and someone to watch for the release of his deep research into the AI policies of nations around the world. The global picture is much more nuanced than most pundits and news headlines would have you believe (i.e., it's not just what Europe, China, and the U.S. are ordaining).
My keynote focused on two questions: Why does trust matter in AI, and how can we achieve it? There's a video recording of the twenty-minute talk below. I won't give away everything I present, except to recommend that we each consider whether the action-oriented steps we need to take to improve Trust in AI are *almost exactly* the same steps we need to take to improve Trust in Societies. This includes steps to improve trust among organizations, the public, governments, national security professionals, media platforms, and other networked actors globally.
Personally, I find both the extreme AI hype (it's only going to be wonderful) and the extreme AI fear (it risks destroying us all) fatiguing - and as someone who has been urging folks to consider how AI will transform how we live, work, and co-exist since 2016, I'm hoping some pragmatism can shine through the hype/fear cycles out there.
A Pluralistic Panel on AI in Peace, Conflict, and Turbulence
After the keynote, there was a great panel discussion that included Prof. JP Singh himself, plus Jacqueline Acker (CIA), Dr. Neil Johnson (GWU), Denise Garcia (Northeastern University), and Branka Panic (AI for Peace), exploring the different elements of AI in countering hate and disinformation, helping communities build peace, operating in battlefield conflict situations, and confronting the challenges of non-state actors seeking to introduce turbulence and disorder. A video of that discussion is below - and once you watch it, I think you'll agree it was a nuanced, reasoned discussion of these complex topics:
I'll close this post with three points I shared on a Sunday call with the People-Centered Internet coalition, as part of a panel with friends and colleagues Vint Cerf, Anthony Scriffignano, Esther Dyson, Divya Chander, Sarah Novotny, Kevin Clark, and several more. In that discussion, I offered one concern, one call to action, and one message of hope.
Three Positive Steps Forward
One Concern - we seem to be repeating several patterns of the Victorian Era. That period saw rapid industrialization and technological progress, along with polarizing disinformation spread by sensationalistic newspapers seeking to sell papers and widening political strife in the United States (akin to our present reality). During that time, folks placed too much emphasis on what people signaled - specifically, virtue signaled - and less on what they did when no one, or very few folks, were looking. I hope we don't fall into the trap of paying too much attention to virtue signals while overlooking what folks are actually doing (or not doing) to address the important issues of our day.
One Call to Action - related to this, I've written before about how a lot of people in our world now feel overwhelmed by a sense of "learned helplessness" - a sense that the challenges of the world, ranging from economic and political strife to climate change and accelerating technological advancements, are simply out of their control. This perceived loss of control leaves folks feeling they have no ability to shape their future - and it risks becoming self-fulfilling: if folks feel they have no control, they will relinquish control and spiral into a cycle of anxiety, isolation, sadness, anger, and frustration. When it comes to AI, there's already a lot of anxiety out there - and a decided lack of realistic, non-dystopian narratives - that we need to remedy alongside helping people overcome any sense of helplessness and realize they do have choice, agency, and data dignity in the digital era.
One Message of Hope - if there's a value to AI, it is its ability to learn from and synthesize large amounts of data - not always perfectly, and not always in ways akin to human mental models. That ability to learn and synthesize might allow humans to experience, in a realistic format that includes words, images, and potentially videos with sound, the experiences of others. Specifically, can we use AI to help us "walk a mile in another person's shoes" so that we can better understand each other's perspectives?
This includes using AI to hold a digital mirror up to our own actions and patterns - to see, perhaps in a safe and private way, when we're amplifying our existing thoughts, emotions, or biases, versus pausing to reflect and ask: What am I missing here? Why might my emotions be triggered? What biases (confirmation bias, sunk-cost bias, or other external or internal biases) might be influencing my thoughts?
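For readers who like to tinker, here is a minimal sketch of what such a "digital mirror" might look like in practice - assuming access to any language model through a simple completion function. The prompt wording and the helper names below are purely illustrative assumptions, not a definitive implementation or any particular product's API:

    # A minimal "digital mirror" sketch. REFLECTION_PROMPT and the
    # `complete` callable are illustrative assumptions, not a real API.
    REFLECTION_PROMPT = """You are a private reflection aid. Read the draft
    below and, without judging the author, briefly note:
    1. What the author might be missing.
    2. Which emotions the draft may be amplifying.
    3. Possible biases at work (e.g., confirmation bias, sunk-cost bias).

    Draft:
    {draft}
    """

    def digital_mirror(draft: str, complete) -> str:
        """Return a private reflection on a draft post or message.

        `complete` is any callable that sends a prompt string to a
        language model and returns its text response.
        """
        return complete(REFLECTION_PROMPT.format(draft=draft))

The design choice worth noting is that the reflection stays private to the author - the point is a pause for self-examination before posting, not an automated judge of what people say.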
To close: in the GMU panel with JP Singh, I offered the thought that perhaps we might use AI to amplify human strengths and mitigate human weaknesses - both individually and collectively - and potentially make us better people. I'll end with this question:
If we were to commit to an AI "Manhattan Project" - wouldn't it be timely and fitting if the focus were (1) how people could better know whether they can trust both AI and social institutions, and (2) how both AI and social institutions could amplify and uplift humans everywhere?
P.S. In the interim, if you're interested in what the National Academy of Public Administration is doing regarding AI and Public Service, here's a link as well: https://www.dhirubhai.net/pulse/call-action-ai-public-service-national-academy-david-bray-phd
Onwards and upwards together.