In Artificial Intelligence we don’t trust
Zara Nanu MBE
Serial Entrepreneur, Investor and Future of Work Expert | Women's Leadership Board @ Harvard Kennedy School | Computer Weekly most influential women in UK Tech
Artificial Intelligence is increasingly used by companies to solve business problems, but people don’t trust it. Is the trust gap too wide?
In a recent lightning Twitter poll, I asked if readers trust Artificial Intelligence. I know, it was a "sweeping generalisation" kind of question in a world where AI and trust mean different things to different people, so any statistical accuracy is out the window. However, it was enough to show that an overwhelming majority (nearly 70%) don’t.
When we set up Gapsquare just over three years ago, we embarked on a mission to use machine learning and Artificial Intelligence to create more diversity and inclusion in the workplace. Our aim is to use AI to create fairness at work.
While we are developing AI to help create more inclusion and diversity, we increasingly find that people do not necessarily trust it. To the end user, AI is a black box: you define what success looks like, mix it with some data, and hope that it delivers insights that will help you reach that success.
But why do people not trust it, and are they right to do so?
Recent cases involving recruitment platforms and self-driving cars suggest that if you are not a white, Anglo-Saxon man, you shouldn’t!
Last year, Amazon hit the headlines because of its AI-based hiring tool. The platform was developed by a team of data scientists in Edinburgh, and the group used resumes from a 10-year period to train the models to identify top engineering talent. The models were trained to recognise some 50,000 terms that showed up in past CVs and to score candidates against the most successful ones. As a result, the algorithms prioritised CVs that included more male-dominated language, with words such as “captured”, “created” and “lead”, and downgraded candidates whose CVs contained the word “women” (e.g. women’s soccer club).
When the company realised this was happening, it worked on making the terms more gender neutral, but the program was soon scrapped as there was no guarantee it would not continue to discriminate in other ways.
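To make the mechanism concrete, here is a minimal sketch of how this kind of bias creeps in. It uses toy data and hypothetical CV snippets (not Amazon’s actual system or vocabulary), with a simple bag-of-words model standing in for the real thing:

    # A toy CV screener trained on past (biased) hiring decisions.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Hypothetical "historical" CV snippets and whether each person was hired.
    cvs = [
        "captured new markets and lead the engineering team",          # hired
        "created scalable services, lead the platform migration",      # hired
        "captain of the women's soccer club, created data pipelines",  # not hired
        "member of the women's chess society, built reporting tools",  # not hired
    ]
    hired = [1, 1, 0, 0]

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(cvs)
    model = LogisticRegression().fit(X, hired)

    # Inspect the weight learned for each term: "women" ends up with a negative
    # coefficient purely because it co-occurred with past rejections.
    for term, weight in zip(vectorizer.get_feature_names_out(), model.coef_[0]):
        print(f"{term:12s} {weight:+.3f}")

Because the historical hires skew male, the word "women" only ever appears alongside rejections, so the model learns to penalise it even though gender was never an explicit input.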
Around the same time, in a completely different field of AI, a study found that self-driving cars are less likely to identify non-white pedestrians. The space of self-driving cars is already complex, burdened by layers of dilemmas around ethics and priorities when it comes to road safety. Now algorithmic bias adds another layer. When researchers looked into how often the models identified pedestrians in the road, white pedestrians were much more likely to be spotted, even when they were partly obstructed by other objects in the road.
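The finding itself rests on a simple kind of check that any team building such models can run: break the detection rate down by group instead of reporting a single average. A minimal sketch, using made-up detection logs rather than the study’s data:

    # Hypothetical detection log: (group annotated in the benchmark, was the pedestrian detected?)
    detections = [
        ("lighter_skin", True), ("lighter_skin", True), ("lighter_skin", True), ("lighter_skin", False),
        ("darker_skin", True), ("darker_skin", False), ("darker_skin", False), ("darker_skin", True),
    ]

    # Aggregate hits and totals per group rather than one overall accuracy figure.
    rates = {}
    for group, detected in detections:
        hits, total = rates.get(group, (0, 0))
        rates[group] = (hits + int(detected), total + 1)

    for group, (hits, total) in rates.items():
        print(f"{group}: {hits}/{total} pedestrians detected ({hits / total:.0%})")

Reporting results disaggregated like this is one of the cheapest forms of transparency available.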
And Artificial Intelligence is not limited to the high-tech sector. AI already has an impact on most aspects of our lives, from algorithms that score high-performing individuals within a company to AI used to shape our purchasing choices as consumers and our voting preferences in elections (and we all know how well that turned out with Cambridge Analytica).
But it’s not all doom and gloom and lack of trust. Despite its current shortcomings, such as its repercussions for inequality, AI has the potential to deliver a better world. The question is: how do we ensure it works for the inclusive good, and how do we come to trust it?
And the answer is transparency.
And in this case, transparency will have to cut across multiple levels – from collection and use of data to defining success together and looking at outcomes with an understanding that they can be flawed.
Data use. In light of Facebook allowing Cambridge Analytica access to millions of users’ data, it is no wonder we are all concerned about what data about us is used, and by whom. Both Amazon and Google now seem increasingly keen to emphasise that they do not disclose our data to third parties.
This does not mean they don’t disclose it to third parties such as governments and the police. The latest Amazon data transparency report (they seem to have a very light-touch understanding of transparency) shows that in the last six months of 2018 the company received 2,382 subpoenas, search warrants and other court orders to disclose data, and it disclosed data in some form in around 70% of those cases.
But the big transparency question here is not if the tech giants sell or disclose our data to third parties. That ship has long sailed. It’s more about how they make use of it and under what circumstances. Which brings us to the issues around how we define success when we look for answers in data.
Definitions of success. AI relies on data to understand the world, but it also relies on our input about what a beautiful world looks like so that it can learn to create one for us. For example, when it comes to recruitment or career progression, algorithms are being developed to identify top talent. To achieve that, we need to feed the platform data about people as well as a definition of top talent. Are we talking about someone who delivers on outcomes, performs under pressure and learns new skills? Or are we talking about someone who has previously held a senior role and has a degree from Harvard? These choices need to be transparent.
In using AI, we need to be more transparent about the key indicators of success we are optimising for, so that people can trust that the system has made the right decisions. If Amazon had shared its AI-based recruitment algorithms with the wider public, we would have known what to expect and could have pointed out the limitations and gaps.
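As a hedged illustration (hypothetical fields, not any real vendor’s schema), the "definition of success" is often nothing more exotic than the choice of target variable, and two reasonable-sounding choices can rank the same candidates in opposite ways:

    # Two hypothetical candidates described by made-up fields.
    candidates = [
        {"name": "A", "delivered_outcomes": True,  "held_senior_role": False, "elite_degree": False},
        {"name": "B", "delivered_outcomes": False, "held_senior_role": True,  "elite_degree": True},
    ]

    # Definition 1: "top talent" means delivering on outcomes.
    top_talent_v1 = [c["name"] for c in candidates if c["delivered_outcomes"]]

    # Definition 2: "top talent" means already having held a senior role and an elite degree.
    top_talent_v2 = [c["name"] for c in candidates if c["held_senior_role"] and c["elite_degree"]]

    print(top_talent_v1)  # ['A']
    print(top_talent_v2)  # ['B']

Disclosing which definition an algorithm was trained against is exactly the kind of transparency that lets outsiders challenge it.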
Outcomes. At the end of the day, a computer is not a human being, and luckily for us it never will be. But like human beings, it can be flawed, and we need to be transparent about that in order to trust it. Any outcomes produced by AI should not be taken as 100% true. They are suggestions that we can embed into our business or everyday practices, with an understanding that they may be biased and they may fail.
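In practice, that can be as simple as wiring a human checkpoint around the model’s output. A minimal sketch, with a hypothetical score and threshold:

    # Hypothetical confidence threshold below which a suggestion goes to a person.
    REVIEW_THRESHOLD = 0.85

    def handle_suggestion(candidate_id: str, score: float) -> str:
        """Treat the model output as a suggestion, never an automatic verdict."""
        if score >= REVIEW_THRESHOLD:
            return f"{candidate_id}: suggest shortlisting (score {score:.2f}); log for bias audit"
        return f"{candidate_id}: route to human review (score {score:.2f})"

    print(handle_suggestion("cand-001", 0.91))
    print(handle_suggestion("cand-002", 0.42))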
AI is here to stay, and if we do not build trust with its users, it will be developed in silos and applied in AI bubbles. Partial attempts at transparency will not suffice. Trust will come with full disclosure, and tech giants will have to lead or follow on this.
Trust will be built when data holders learn to embed transparency in what data they use, what kind of algorithms they apply, and what outcomes are being generated. Repeat.
Dr. Zara Nanu is CEO of Gapsquare, a company that helps you step into the future of FairPay. Gapsquare software is transparent when it comes to the use of data and algorithms and provides users with downloadable explanations of its analysis models.
Really interesting piece, Zara. We have a client in Med Tech that believes the nomenclature ‘Artificial Intelligence’ is half the problem with public trust and that ‘machine learning’ is often more helpful when it comes to the application of the tech and how it can help clinicians and patients. Particularly like your piece because, working in tech, we all have quite an evolved view of any benefits, but lots of members of the public simply hear the fantastical element.
Agree with these points. I guess the problem we continue to face is around what we do with that transparency, since auditing ML decisions is decidedly tricky. This is a good article on the subject: https://hbr.org/2018/11/why-we-need-to-audit-algorithms