Building Trust In AI: The Case For Transparency
Bernard Marr
Internationally Best-selling #Author | #KeynoteSpeaker | #Futurist | #Business, #Tech & #Strategy Advisor
Thank you for reading my latest article Building Trust In AI: The Case For Transparency. Here at LinkedIn and at Forbes I regularly write about management and technology trends.
To read my future articles, simply join my network by clicking 'Follow'. Also feel free to connect with me via Twitter, Facebook, Instagram, Podcast or YouTube.
AI is rapidly transforming the world of business as it becomes increasingly woven into the fabric of organizations and the day-to-day lives of customers.
However, the speed of this transformation creates risk, as organizations struggle to deploy AI in ways that are responsible and minimize the potential for harm.
One of the cornerstones of responsible AI is transparency. AI systems – including the algorithms themselves as well as the data they are trained on – should be understandable, so we can comprehend how decisions are made and ensure they are made in a fair, unbiased and ethical way.
Today, many businesses that use AI are taking steps towards ensuring this happens. However, there have been cases where the use of AI has been worryingly opaque.
Here we will look at real-world examples, good and bad, that illustrate the benefits of transparent AI and the dangers of obscure or unexplainable algorithms.
Transparent AI Done Well
When Adobe released its Firefly generative AI toolset, it reassured users by being open and transparent about the data used to train its models, unlike other generative AI tools such as OpenAI’s Dall-E. Adobe published information on all of the images that were used, along with reassurance that it owned the rights to those images or that they were in the public domain. This means users can make an informed choice about whether to trust that the tool hasn’t been trained in a way that infringes copyright.
Salesforce includes transparency as an important element of “accuracy” – one of its five guidelines for developing trustworthy AI. This means it takes steps to make it clear when its AI provides answers that it isn’t sure are completely correct, including citing sources and highlighting areas that users of its tools might want to double-check for mistakes.
Microsoft’s Python SDK for Azure Machine Learning includes a model explainability setting, which in recent versions is enabled by default. This gives developers insights into interpretability, meaning they can understand how a model makes its decisions and ensure they are made fairly and ethically.
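To make this concrete, below is a minimal sketch of what model explainability looks like in practice. It uses the open-source shap library with a scikit-learn model rather than the Azure SDK itself, so treat it as an illustration of the general technique, not Microsoft’s specific implementation:

```python
# Illustrative model-explainability sketch using the open-source `shap`
# library on a scikit-learn model (chosen for illustration -- this is
# not the Azure Machine Learning SDK itself).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a simple model on a public dataset.
data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# TreeExplainer computes SHAP values: how much each feature pushed
# each individual prediction up or down.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:100])

# Summarize which features drive the model's decisions overall -- the kind
# of insight that helps developers spot unfair or biased behavior.
shap.summary_plot(shap_values, data.data[:100], feature_names=data.feature_names)
```

The resulting plot ranks features by how strongly they influence predictions, which is exactly the kind of transparency that lets a team notice a model leaning on an inappropriate variable before it affects customers.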
Cognizant recommends creating centers of excellence to centralize AI oversight, allowing best practices around transparency to be adopted across an organization. Understanding what steps are being taken to ensure AI is accountable and explainable means these practices can be replicated across all AI projects in a responsible way.
Transparent AI Done Badly
OpenAI – creator of ChatGPT and the image generation model Dall-E – has been accused of failing to be transparent about what data is used to train its models. This has led to lawsuits from artists and writers claiming that their material was used without permission. Some believe that OpenAI’s users could also face legal action in the future if copyright holders are able to successfully argue that material created with the help of OpenAI’s tools infringes their IP rights. This example demonstrates how opacity around training data can lead to a breakdown in trust between an AI service provider and its customers.
Other image generators – including Google’s Imagen and Midjourney – have been criticized for overly depicting professionals as white men and for historical inaccuracies, such as showing the US Founding Fathers and German Nazi soldiers as people of color. A lack of transparency in AI decision-making makes it harder for developers to identify and fix these issues.
In banking and insurance, AI is increasingly being used to assess risk and detect fraud. If these systems aren’t transparent, it could lead to customers being refused credit, having transactions blocked, or even facing criminal investigations while having no way of understanding why they have been singled out or put under suspicion.
Even more worrying are the dangers posed by a lack of transparency around systems and data used in healthcare. As AI is increasingly used for routine tasks like spotting signs of cancer in medical imagery, biased data can lead to dangerous mistakes and worse patient outcomes. With no measures in place to ensure transparency, biased data is less likely to be identified and removed from the systems used to train AI tools.
The Benefits Of Transparent AI
Ensuring AI is deployed transparently is essential for building trust with customers. They want to know what decisions are being made with their data, how, and why, and they have an inherent distrust of “black box” machines that refuse to explain what they are doing.
On top of that, transparency allows us to identify and eliminate problems caused by biased data by ensuring that all the data used is thoroughly audited and cleansed.
Last but not least, the amount of regulation around AI is increasing. Legislation such as the upcoming EU AI Act specifically requires that AI systems used in critical use cases be transparent and explainable. This means that businesses using opaque, black-box AI could leave themselves open to big fines.
Building transparency and accountability into AI systems is increasingly being seen as a critical part of developing ethical and responsible AI. Although the highly complex nature of today’s advanced AI models means this isn’t always straightforward, it's a challenge that will have to be overcome if AI is to fulfill its potential for creating positive change and value.
About Bernard Marr
Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling author of over 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations.
He has a combined following of 4 million people across his social media channels and newsletters and was ranked by LinkedIn as one of the top 5 business influencers in the world. Bernard’s latest book is ‘Generative AI in Practice’.