Building Trust in Generative AI
There’s an unspoken issue when it comes to generative AI: We are fundamentally afraid of it.
Of course, there’s reason to be wary: It comes with significant challenges and risks.
But equally true? AI can become a strategic asset. As I shared in my recent livestream, there are so many reasons to feel optimistic about it. AI:
- Allows you to differentiate in the marketplace
- Helps build strong customer loyalty
- Ensures organizational resiliency to the AI risks and challenges that may arise
But in order to lean into its potential, we need to build trust in it.
It’s a tall order. Where do you start?
The Pyramid of Trust
Spend some time with any educator, and they’ll tell you that scaffolding is the key to trust.
When you're teaching students something new, you need to help them step by step; you can’t throw a new concept at them all at once. In many ways, we all need some scaffolding in our adoption and understanding of generative AI.
The best framework to support this scaffolding is something my co-author Katia Walsh and I like to call The Pyramid of Trust. We based this framework on Abraham Maslow’s Hierarchy of Needs, which proposes that human beings need to have certain needs met (like food, water, and shelter) before they can grow and advance in their self-esteem and personal fulfillment.
We have a similar idea here: You have to build a strong foundation with safety, security, and privacy before you can move on to fairness. And if you don't have fairness in place, you can't understand quality or accuracy or build towards accountability and transparency.
Each of these layers has a different job in building trust in AI systems:
Level 1: Safety, Security, and Privacy
To adopt generative AI quickly, you need good "brakes"—safety, security, and privacy. These are non-negotiables for responsible use. Good "brakes" include developing secure systems through access controls, data encryption, and data minimization; prioritizing safe tool usage via training; and protecting user privacy. Many current regulations in this area build on existing legislation like GDPR and US privacy laws.
The goal is to build safeguards and security by design into your generative AI systems. This requires rigorous testing and monitoring to guard against threats. HP serves as a good example here—it conducts privacy and security training for all employees every 6-12 months. While this might seem tedious, it's crucial. HP also requires cybersecurity approval for any AI tool, which ensures all safety measures are in place before usage begins.
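To make this concrete, here is a minimal sketch of what "safeguards by design" can look like in practice. It is illustrative only: the tool names and redaction patterns are hypothetical, and real data minimization and access control need far more than this.

```python
import re

# Hypothetical allow-list: tools your security team has already approved.
APPROVED_TOOLS = {"internal-chat-assistant", "doc-summarizer"}

# Deliberately simplistic patterns, for illustration only.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-style numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
]

def redact_pii(text: str) -> str:
    """Replace obvious personal data with a placeholder (data minimization by default)."""
    for pattern in PII_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def submit_prompt(tool_name: str, prompt: str) -> str:
    """Gate every request: unapproved tools are blocked, approved ones get redacted input."""
    if tool_name not in APPROVED_TOOLS:
        raise PermissionError(f"{tool_name} has not passed cybersecurity review")
    return redact_pii(prompt)

# The prompt is cleaned before it would ever be sent to a model.
print(submit_prompt("doc-summarizer",
                    "Summarize the case for jane.doe@example.com, SSN 123-45-6789"))
```

The point isn't the specific patterns; it's that the approval check and the minimization step happen automatically, not as an afterthought someone has to remember.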
Every company's approach differs, so understanding your organization's specific requirements for safety, security, and privacy is essential.
Level 2: Fairness
We know that AI comes with bias. It’s programmed by people, and fairness is subjective. What does it mean to be fair? Our definitions are unique to the circumstances of our organization, our industry, and even our personal values.
The truth is this: Every system has a bias, and there’s no getting around that. The question, then, is two-fold: how do we minimize that bias, and how clearly can we explain the choices we’ve made?
Of course, we want to minimize bias and maximize fairness—both because that’s just and also because the clearer you are in this aim, the more you can explain your choices to your people, which in turn helps them build trust. Diverse data sets best support you in this mission.
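One way to make "minimize bias" measurable is to compare outcome rates across groups in your data. The sketch below uses made-up groups and numbers and a single, simple lens (an approval-rate gap); it is not a full fairness audit, and your own definition of fairness may call for very different metrics.

```python
from collections import defaultdict

# Hypothetical decisions produced by an AI-assisted process: (group, approved?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def approval_rates(records):
    """Approval rate per group: one simple lens on potential bias."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += int(approved)
    return {group: approvals[group] / totals[group] for group in totals}

rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)                      # e.g. group_a ~0.67, group_b ~0.33
print(f"parity gap: {gap:.2f}")   # larger gaps warrant investigation, not automatic blame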
Level 3: Quality and Accuracy
Similar to fairness, quality and accuracy are defined by each organization.
If you're in transportation or public safety, you're going to want to make sure things are highly accurate—inaccuracy could lead to accidents. But if you're writing or doing some strategic planning, then quality becomes a priority—it's no longer just about accuracy.
Once again, though, our definitions of these terms are subjective. All the more reason to clearly define them for your stakeholders. In content creation, for example, quality is important—but in some situations, speed might hold more weight. Which do you value more?
Making sure you define quality and accuracy—and what you expect of your teams—builds trust.
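In practice, making those definitions explicit can be as simple as a shared policy that every team can read. This is a sketch with placeholder use cases and thresholds; the actual numbers and fields are for your organization to decide.

```python
from dataclasses import dataclass

@dataclass
class UseCasePolicy:
    name: str
    min_accuracy: float           # fraction of outputs that must pass factual review
    max_turnaround_hours: float   # how much speed matters for this use case
    human_review_required: bool

# Illustrative policies only: the numbers are placeholders, not recommendations.
POLICIES = [
    UseCasePolicy("route-planning", min_accuracy=0.99, max_turnaround_hours=24, human_review_required=True),
    UseCasePolicy("marketing-draft", min_accuracy=0.90, max_turnaround_hours=2, human_review_required=False),
]

for policy in POLICIES:
    print(f"{policy.name}: accuracy >= {policy.min_accuracy:.0%}, "
          f"turnaround <= {policy.max_turnaround_hours}h, "
          f"human review {'required' if policy.human_review_required else 'optional'}")
```

The value isn't in the code itself; it's that a safety-critical use case and a creative one get visibly different bars, and everyone can see which bar applies to them.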
Level 4: Accountability
Who takes ownership of AI outcomes, regardless of whether they’re intended or unintended? Accountability is about responsibility: It involves defining roles, making sure there's a decision-making process, and getting clear on how your organization will and won’t use generative AI.
An important part of this level is creating a culture of accountability, where everyone feels responsible for the outcomes. A culture of accountability means your organization has a system for flagging and reporting problems—and there are no repercussions for speaking up.
When something goes wrong in a high-accountability culture, teams can identify and diagnose the problem, solve it, and ensure it doesn’t happen again.
Level 5: Transparency
The final level of the pyramid focuses on building trust with stakeholders: employees, customers, vendors and system suppliers, shareholders, and community members.
Transparency involves a commitment to continuous monitoring, ethical practices, and clear communication about AI's capabilities and limitations. It includes disclosing how and when AI is being used, especially for decisions that impact people.
One organization I worked with decided to always disclose internally when AI was being used. This allowed them to be aware of the content's source and take extra time to review for quality, bias, and accuracy. Externally, they were more selective about disclosure.
Deciding what to disclose externally depends on whether it's required, optional, or not needed. Best practice is to disclose in high-stakes domains (like healthcare treatment plans or loan approvals). For routine tasks with human oversight, disclosure isn't always necessary, as the use of AI becomes less relevant in building trust.
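That decision can also be written down so it isn't made ad hoc each time. Below is a small sketch of one way to encode it; the domain list and wording are hypothetical, and your policy would likely be richer.

```python
# Hypothetical list of domains where decisions affect people directly.
HIGH_STAKES_DOMAINS = {"healthcare-treatment", "loan-approval", "hiring"}

def disclosure_level(domain: str, human_reviewed: bool) -> str:
    """Decide how prominently to disclose AI involvement for a piece of output."""
    if domain in HIGH_STAKES_DOMAINS:
        return "required: disclose clearly to the affected person"
    if human_reviewed:
        return "optional: internal tagging is enough"
    return "recommended: disclose until human oversight is in place"

print(disclosure_level("loan-approval", human_reviewed=True))
print(disclosure_level("blog-draft", human_reviewed=True))
```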
Final Thoughts
At the end of the day, trust isn’t built by some person “over there”—it’s built by people who use these tools, who ensure that on a daily basis, they:
- Have safety and security protocols in place
- Ask what fairness looks like
- Understand and prioritize quality and accuracy
- Exercise and practice accountability and transparency
Trust is built every single day, in every single interaction we have.
As we deploy AI into our organizations, we must be systematic and intentional about how we build trust. Otherwise, we’re just crossing our fingers and hoping it will happen.
I don’t know about you, but I wouldn't leave trust to serendipity.
If this information was helpful, there’s plenty more!
- Sign up for updates and early access to my upcoming book, co-authored by Katia Walsh, which is all about creating a winning generative AI strategy.
- Join me at Elevate by Future of HR, a 3-week virtual program for next-gen HR leaders. The program dates are October 8-10, 15-17, 22-24, and registration opens Monday, August 5th. You can sign up here.
- Catch my most recent webinars:
Your Turn
What are some of the top challenges you’re having in building trust with generative AI? How have you been able to mitigate those concerns?