The Edelman Trust Barometer report always makes for good reading. The 2025 edition is no different. And being involved in the AI field myself, I found the combination of these two results very interesting.
For one - we trust business.
But we don't trust the AI solutions ... (produced mostly by businesses).
I will leave it to others to speculate on what this means for the sky-high valuations that are "AI washed" these days. My question, though, is: Why? And what can be done about it?
I think the reasons for the "Why" are reasonably well known. Some key ones:
- Lack of transparency: Many AI systems, particularly deep learning models, function as "black boxes" - it is incredibly difficult to understand how they arrive at their conclusions. That opacity makes it hard to trust the system, debug errors, and ensure fairness and accountability. Without understanding the reasoning behind an AI's decisions, it is difficult to identify and correct biases or to ensure the system is acting ethically. Remember, at least for now, all output is directed at human action and affects real humans... if they don't understand it, they won't trust it.
- Hallucinations: AI models can sometimes "hallucinate" - confidently producing outputs that are incomplete, contradictory, or simply wrong. These hallucinations can have serious consequences, especially in critical fields like life sciences. Haven't we all had the experience of a friend or a salesman making claims that are discernible as falsehoods, either immediately or, more dangerously, after a period of time? How often will you go back to them for trusted advice after that happens once or twice?
- Data privacy and security: AI systems often require vast amounts of data to function effectively. This data can include sensitive personal information such as medical records, financial data, and even location history, which raises serious concerns about privacy and security. There is always the risk of data breaches, where sensitive information could be accessed by unauthorized parties, as well as concerns about how this data is used and who has access to it. If we were dealing with humans, the English language has a rich vocabulary of words to describe such people ;-)
What can be done to regain and improve trust in AI solutions? Three things:
- Humility: Builders of AI solutions should stop pretending that the solution is all-powerful and can solve world hunger, climate change, and global peace - all within a day ;-). We need to be transparent about the limits of our technology and work with customers to drive adoption.
- People in the middle: "Man in the middle" is an old cybersecurity challenge. I think "people in the middle" is the solution to AI's challenges. Build solutions for people to use... enable them, don't look to replace them. Take away the most tedious, boring, and error-prone elements of their daily work, and give them contextual assistance that helps them be more successful in their tasks.
- Focus on value: Trust comes from making and keeping commitments. In a business context, I translate that to understanding and delivering value. However "small" it may be, understand what's valuable to the client, be honest about what difference you can make, and then deliver on that promise. Trust will follow.
What do you think? How can we collectively help improve adoption and build trust?
[Comment, 1 month ago] The idea of "improving trust in AI" resonates deeply with me. Trust is the foundation for meaningful innovation, and as we approach 2025, fostering transparency and accountability in AI will be crucial. Excited to hear more perspectives on this.
[Comment, 1 month ago] Srinivas Padmanabharao, a new technology is never without its challenges. Right now, to me, AI is in the hype cycle. Everyone wants to have skin in the game for fear of being left out, but most either don't have anything concrete or are still grappling with what to make of it all. The issues and challenges with the technology are very real, but I do feel the space is very powerful, and it will evolve and see better solutions coming out soon. I am more worried about: 1. What does it mean for the job market? Many jobs will be lost, because wherever work was headcount-based, things will be done better and faster with AI-assisted solutions, and hence there will be a need for a smaller workforce. More people will become unemployed; how do they earn their livelihood? People say new opportunities will come, but no one really has clarity. 2. Where are we headed? I cannot keep myself from thinking of all the sci-fi movies where machines eventually take over.