Building Trust in Generative AI
Source: https://www.ambit.ai/resources/all

The news is full of both promise and caution about generative AI products. On one hand, articles celebrate its potential to transform work and life as we know it. On the other, we hear about hallucinations, bias, and unexplained drift (when models answer less accurately over time; see source 1 below). Enterprises will continue to roll out internal products like chatbots, but with all the headlines about how unreliable this tech can be, how can we help employees trust gen AI?

First of all, the technology must deliver what is promised. That means communications from the project team, leadership, and other change champions must set appropriate expectations rather than over-hype it. Realistic expectation-setting lets employees approach the tool with a realistic mindset: if I'm promised total job reinvention, then ask a chatbot all sorts of questions only to get error messages because pilot functionality is limited to support questions, I slide into distrust, and it becomes harder to win me back even as functionality increases in future releases.

Continuing with transparency, share how data is being collected and used. Many companies have spent months telling employees that they cannot use ChatGPT, so why is it now okay to use 'MyCompany's ChatGPT'? Why is your enterprise solution different? Maybe no one ever explained the data privacy concerns; now is your chance. What data is being collected, who can see it, and how is it being used? Expectations of data privacy differ around the globe, and some employees will have more interest (and more legal concerns) in this than others.

The next concern is accuracy. Tech teams will ensure a certain level of accuracy before releasing, but a transparent UI (e.g. links to sources) helps employees review specific results, building trust when a generated result checks out. Make it easy for employees to give feedback on results, and respond quickly to trends of inaccuracy, areas for improvement, and potential new functionality. Of course, for employees to know they should review results, they need a baseline understanding of generative AI: it pulls from existing data and makes 'best guesses' at words and sentences based on patterns. Employees need to understand that accuracy may not be 100% so they don't blindly trust everything that is generated (overtrust), and they need to feel psychologically safe enough to challenge it.
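
To make the 'links to sources plus easy feedback' idea concrete, here is a minimal, hypothetical sketch in Python; the field names and feedback handling are illustrative assumptions, not a description of any particular product:

    from dataclasses import dataclass, field

    @dataclass
    class BotAnswer:
        """A generated answer plus the source links an employee can use to verify it."""
        text: str
        source_urls: list[str]                       # surfaced in the UI so results can be checked
        feedback: list[str] = field(default_factory=list)

        def record_feedback(self, verdict: str, comment: str = "") -> None:
            # Collect 'accurate' / 'inaccurate' verdicts so the team can spot inaccuracy trends.
            self.feedback.append(f"{verdict}: {comment}")

    # Example: an employee checks the answer against the linked policy, then flags it.
    answer = BotAnswer(
        text="Economy fares are allowed for trips under six hours.",
        source_urls=["https://intranet.example.com/policies/travel"],  # hypothetical link
    )
    answer.record_feedback("inaccurate", "The travel policy was updated last quarter.")

Even a structure this simple gives employees something to verify against and gives the team a feedback loop to act on.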

That leads to another concern: if generative AI might not be 100% accurate, why would anyone trust it for anything? Each company will have its own risk tolerance for what content should and should not be produced with gen AI. For chatbots, sensitive HR questions may not be worth the risk of inaccurate responses, and a company may keep traditional Q&A technology for that content. But what about prompts like 'write an email to my team on XXX that emphasizes the importance of YYY', 'what are the key steps for coming up with a new strategy', or 'put this information into a story so I can tell it during a presentation'? Humans create those types of artifacts all the time, and humans aren't 100% accurate either. How much wordsmithing happens to a human-generated piece of content? Is it any different if gen AI comes up with the first draft? This is a mindset shift for employees: have gen AI produce a generic draft, knowing you'll likely have to layer in specifics. Treat gen AI like a junior employee: you know you'll have to check the work, and you know you'll have to edit it. It's all in the expectation-setting and the skills development.

Thinking of a generative AI chatbot as a fellow employee, as something human-like, has its own risks though. Anthropomorphizing has mixed effects on trust and adoption. On one hand, humanizing the bot makes it feel more familiar, which could help adoption. But it can also create unrealistic expectations, because employees may ascribe human-like reasoning and purpose to it, and it can breed distrust, because employees may suspect it has motivations of its own (sources 2 and 3 below). Companies will have to strike a fine balance so that employees place an appropriate level of trust in a bot.

Peer-to-peer inspiration is an important component too. We trust our coworkers and friends to tell us what works, sometimes more than official communications. Consider crowd-sourced prompt libraries, where all employees can submit what has worked well for them, and highlight success stories (not anonymized). Highlighting challenge stories (where someone's prompt didn't work) and opening them up to collaborative problem-solving can help too: 'it doesn't work well if you use it like this, but we can figure out a way that does work!'
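
Purely as an illustration (the fields and values below are assumptions, not a prescribed format), a crowd-sourced prompt library can start out as nothing more than a shared list of small records:

    # Hypothetical shape for shared prompt-library entries, including a 'challenge story'.
    prompt_library = [
        {
            "prompt": "Summarize this meeting transcript into three action items.",
            "submitted_by": "A. Colleague",    # not anonymized, so peers can follow up
            "worked_well": True,
            "notes": "Works best when the transcript is under two pages.",
        },
        {
            "prompt": "Draft a project status email from these bullet points.",
            "submitted_by": "B. Colleague",
            "worked_well": False,              # a challenge story to solve together
            "notes": "Tends to invent dates, so add them explicitly in the prompt.",
        },
    ]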

Sources

Special thanks to Grant Luckey for chatting through behavioral science considerations.

  1. Zumbrun, Josh. "Why ChatGPT Is Getting Dumber at Basic Math." Wall Street Journal, August 4, 2023. https://www.wsj.com/articles/chatgpt-openai-math-artificial-intelligence-8aba83f0
  2. Salles A, Evers K, Farisco M. Anthropomorphism in AI. AJOB Neurosci. 2020 Apr-Jun;11(2):88-95. doi: 10.1080/21507740.2020.1740350. PMID: 32228388.
  3. Yao, Di. "Addressing the Risks and Consequences of Anthropomorphizing AI." LinkedIn, March 22, 2023. https://www.dhirubhai.net/pulse/addressing-risks-consequences-anthropomorphizing-ai-di-yao/



#generativeai #genai #chatgpt #openai #azureopenai #microsoftcopilot #changemanagement #communications #learning #psychologicalsafety #AI #mindfulai #ethicalai #responsibleai #trust

Note: my postings reflect my own views, which are subject to change when provided more information and as technology evolves.
