AI has a trust problem.
Brian Evergreen
Author of Autonomous Transformation (Wiley) | Former Microsoft, Accenture | Senior Advisor, Researcher, and Keynote Speaker
Hello, and welcome to Future Solving, my newsletter for leaders and managers navigating what it means to lead in the era of AI.
Today, I'm going to write about Trust and AI.
It's been well established that roughly 87% of AI projects fail.
It's also been well established that AI promises a new era of economic growth.
So what's the problem?
Is it a technology, business, culture, or industry problem?
According to research released by Salesforce last week, it's a trust problem.
And the news that broke yesterday about OpenAI using Scarlett Johansson's voice after she said no is a great example of this.
Trust is more than a moral problem.
Some people think of trust as an exclusively moral concept.
When you ask them about "trust in AI," they discuss ethics, transparency, representation, and bias.
Others think of trust as a safety concept: security, privacy, and control.
Still others think of trust as a value concept: accuracy and usefulness.
What struck me about the research is that all three camps are right.
Hot take: Trust is about morals, safety, and value, all at once.
If an AI system exhibits bias, it can't be trusted morally, and it puts your brand at risk.
If an AI system doesn't keep your data secure, it can't be trusted legally, and it puts customer retention at risk.
If an AI system is inaccurate, it can't be trusted with business processes.
If an AI system can't be controlled, it can't be trusted with anything.
And there's another angle to this:
How trustworthy is the organization that created the AI system? (e.g. OpenAI using Scarlett Johansson's voice after she said no)
If it's been developed by a new company, what happens to your data if that company goes under or is acquired?
If it's been developed by an established company, do you trust that company to keep your data private?
Read the full research article that inspired this newsletter episode here.
I'm researching and developing a new, practical tool for navigating trust in the era of AI, and I would love your feedback—do you agree with this analysis?
Anything you would add?
Thanks for reading,
Brian
Whenever you're ready, there are 5 ways I can help you:
1. AI Strategy Fundamentals: My flagship course on how to develop an AI strategy for your organization. I share over a decade of AI strategy expertise, proven methods, and actionable strategies. This course sets the stage for a new era of value creation with artificial intelligence. Join leaders from Microsoft, Accenture, Amazon, Disney, Mastercard, IKEA, Oracle, Intel, and more.
Enterprise licensing is also an option if you'd rather schedule a specific cohort for your team or organization. Connect here if you're interested in learning more about enterprise licensing.
2. Future Solving Advising: Join hundreds of FORTUNE 500 C-level executives and startup founders who have leveraged my advisement on AI, the future of technology, and how to develop a vision and strategy for retaining or expanding their market position or creating new growth categories.
3. Future Solving Workshops: Join 25+ of the FORTUNE 500 and NASA, who have gained competitive advantage in the era of AI by leveraging new frameworks from my background and my book, Autonomous Transformation, to set a vision and strategy and spark action.
4. Make a statement at your next conference by picking up a “ChatGPT is not your AI strategy” T-Shirt (make sure you read the article first!).
5. Make sure your event is inspiring and actionable by booking me to speak.
AI Advisor | Author: “AI Leadership Handbook” | Host: “What’s the BUZZ?” | Keynote Speaker
10 months ago
There was some great academic research done in the early 2000s on trust in automation. One of my favorite examples that illustrates trust in a system is whether the system performs according to the user's expectations: Does it actually do what it says it will do? Does it behave the same way under identical circumstances? And if it doesn't, how does that impact trust? So, fast-forward to 2024 and to LLMs: Does your GenAI app deliver the results you expect? Does your copilot work as expected? And ultimately: Do you trust the results?
Chief Digital Officer | Fortune 500 Digital Transformational Leader | Cultural Transformation | Technology Innovation Strategy | AIML | Data Insights/Analytics | Commercial Marketing and eCommerce
10 months ago
Brian Evergreen, thanks for sharing these insights on Trust and AI! Trust is crucial for AI success and impacts revenue and adoption. Ensuring accurate and secure data is key. The recent OpenAI incident highlights the urgency. Building trust isn't just ethical; it's essential for growth and innovation!