When Bytes Bias: Unraveling the Myth of Neutral AI
Bernard Marr
Internationally Best-selling #Author | #KeynoteSpeaker | #Futurist | #Business, #Tech & #Strategy Advisor
Thank you for reading my latest article, When Bytes Bias: Unraveling the Myth of Neutral AI. Here at LinkedIn and at Forbes I regularly write about management and technology trends.
To read my future articles simply join my network by clicking 'Follow'. Also feel free to connect with me via Twitter, Facebook, Instagram, Podcast or YouTube.
---------------------------------------------------------------------------------------------------------------
With all of the excitement around artificial intelligence (AI) and digital transformation, are we too quick to assume that the answer to all of our problems lies in technology?
And more importantly, are we so enthusiastic about turning towards technology -particularly AI – that we overlook the societal and human problems it can cause?
This is the argument made by Meredith Broussard in her latest book More Than a Glitch – Confronting Race, Gender and Ability Bias in Tech.
The book is the latest of a number of recent investigations into issues around bias and the wider social implications of our rush to embrace AI, joining other important works such as Weapons of Math Destruction by Cathy O’Neil, Safiya Noble’s Algorithms of Oppression, and Broussard’s own Artificial Unintelligence.
Broussard recently joined me on my podcast to discuss some of the ideas she puts forward, as well as her advice for business leaders interested in working with AI or adopting it in their organizations.
At the heart of her argument is the concept of “technochauvinism” – a belief that technological solutions are always superior to social or other methods of driving change.
What is Technochauvinism?
In the book, Broussard refers to the example of the “stair-climbing machine,” often proposed by technologists and engineers as an innovation that could improve the lives of disabled people.
“Designers like to create things … because it’s cool – let’s engineer this novel solution.
“But if you actually ask somebody who uses a wheelchair … they will generally say no – ‘It looks scary.’ ‘It doesn’t look like it’s going to work.’ They will say, ‘I’d rather have a ramp or an elevator.’
"Then you realize, there's this really simple solution that works really well, and we don't need to add in a lot of extreme computational technology; we can just build a ramp.
“So until we’ve made the world really accessible, let’s not overengineer the solutions.”
Broussard says that this concept – and many others like it – is an example of a “disability dongle.” This is described succinctly in this blog post as an idea put forward by a (usually) able-bodied engineer that appeals to our love of a technological “quick fix” over the complex, structural, societal change that is really needed.
The counter to the technochauvinistic mindset, Broussard suggests, is often simply choosing the right tool for the job, without assuming that it will always be the most advanced technology or the most sophisticated data-crunching algorithm.
Broussard tells me, "We kind of have this idea that somehow technology solutions are going to be superior to others. And this is itself a kind of bias … sometimes the right tool is something simple, like a book … it’s not a competition, one is not inherently better than the other.”
Mathematically and Socially Fair
Another fascinating idea Broussard explores is the difference between mathematical and social fairness. When we use computers to assist with challenges around equality and fairness, what we are most often presented with is a mathematical solution.
A simple explanation: “A story that I think illustrates this concept – it’s about a cookie. When I was little, my brother and I would argue about who gets the last cookie.”
Ask a computer to solve this simple but pressing problem, and there is one obvious answer – each kid gets half a cookie.
“But in the real world, when you split a cookie in half, what happens is you get a big half and a little half. And then we’d fight over who has the bigger half.”
The solution, she suggests, lies in socially constructed negotiation and compromise.
“So, if I wanted the big half, I would say, you give me the big half, and I’ll let you choose the TV show we watch after dinner.
“Mathematically fair decisions and socially fair decisions are not the same … this explains why we run into problems when we try to make socially fair decisions with computers.”
The upshot is that we should use computers to solve mathematically oriented problems, and not rely on them too heavily when it comes to societal challenges.
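To make the distinction concrete, here is a small sketch of my own – not from Broussard’s book, and with invented numbers – in which the “mathematically fair” answer splits the cookie exactly in half, while the socially fair answer is an uneven split plus a negotiated side deal that both siblings actually prefer.

```python
# A toy model of the cookie story: invented "happiness points", purely illustrative.

def satisfaction(cookie_grams, picks_tv_show):
    # Happiness points: one per gram of cookie, plus 5 for picking tonight's show.
    return cookie_grams + (5 if picks_tv_show else 0)

# The "mathematically fair" answer: split a 20-gram cookie exactly in half.
# In practice the knife gives a big piece (12 g) and a small piece (8 g),
# and the siblings still argue over who gets which.
computed_split = {"me": satisfaction(12, False), "brother": satisfaction(8, False)}

# The socially fair answer: I take the big piece, my brother picks the TV show.
negotiated_deal = {"me": satisfaction(12, False), "brother": satisfaction(8, True)}

print("Computed split :", computed_split)   # {'me': 12, 'brother': 8}
print("Negotiated deal:", negotiated_deal)  # {'me': 12, 'brother': 13}
# The negotiated outcome is less "equal" on paper, but both siblings accept it:
# the fairness lives in the bargain, not in the arithmetic.
```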
AI and Human Jobs
A similar principle emerges when we think about the question of how computers will be used to replace human workers. As a writer and journalist, Broussard’s own profession is one that’s commonly regarded as being threatened by the emergence of applications like ChatGPT. After all, if they can quickly and easily generate articles, essays, and even entire books from a simple prompt, who needs authors?
However, as anyone who has tried to use ChatGPT to write a book – or even a reasonably sophisticated essay – will quickly tell you, that threat has been somewhat exaggerated.
Although initially impressive, AI-generated content still lacks many essential human qualities – most crucially, any real ability to generate new ideas or truly creative thoughts. This is because all it really does is regurgitate language and ideas found in its training data.
“If you’re the kind of person in a position to replace workers with generative AI, you’re in for a nasty shock,” Broussard tells me.
“AI is mediocre. Mediocre writing is absolutely useful for a lot of situations … and it seems like it’s going to be incredibly useful and flexible … one of the things you quickly realize when you use generative AI for a while is that it’s kind of boring … it just gives you the same thing over and over again … that’s not what you want to be giving your customers.”
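That “same thing over and over again” behavior is easy to see in miniature. Below is a toy sketch of my own – a bigram table with greedy decoding, nowhere near how real large language models are built or decoded – showing that everything such a model “writes” is stitched from its training text, and that the output never varies.

```python
# A deliberately tiny "language model": a bigram table plus greedy decoding.
# It can only recombine its training text, and it returns the identical
# continuation on every run.
from collections import Counter, defaultdict

training_text = ("the report was fine . the report was boring . "
                 "the meeting was boring .").split()

# Count which word most often follows each word in the training data.
follows = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    follows[current_word][next_word] += 1

def generate(prompt_word, length=5):
    words = [prompt_word]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        # Greedy decoding: always take the single most frequent continuation.
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # -> "the report was boring . the", every single time
```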
Her thoughts echo my own belief that AI is not a replacement for creativity – it’s a tool that allows humans to enhance their own creative skills and become better organized in the way they put those skills to work.
The Dangers of AI
One aspect of AI that Broussard finds particularly worrying, however, is computer vision – and specifically, the way it differs in its treatment of people according to their race, gender and other factors.
"Facial recognition is biased based on skin tone," she tells me.
“It’s generally better at recognizing light skin than dark skin, better at men than women … it doesn’t recognize trans and non-binary folk at all.”
This has caused problems when AI-powered computer vision systems have been used for policing and facial recognition in public areas. In several cases, the use of the technology by police has been found to be unlawful and unethical, leading to its ban in some jurisdictions.
Broussard says, “We should not be using facial recognition at all in policing. It’s disproportionately weaponized against people of color and communities that are already disproportionately policed.
“We’re not going to achieve justice if we keep using these powerful technologies that work very poorly and have a disproportionate impact on certain groups.”
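The practical response to claims like these is to measure error rates per group rather than in aggregate. Here is a minimal, hypothetical sketch of my own – the groups and results are invented, not real benchmark data – of what such an audit looks like.

```python
# A minimal audit sketch with invented data: compute a recognition system's
# error rate per demographic group instead of one overall accuracy figure.
from collections import defaultdict

# Hypothetical evaluation records: (group label, prediction was correct?)
results = [
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("lighter-skinned men", True), ("lighter-skinned men", False),
    ("darker-skinned women", True), ("darker-skinned women", False),
    ("darker-skinned women", False), ("darker-skinned women", False),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

overall_error = sum(errors.values()) / len(results)
print(f"Overall error rate: {overall_error:.0%}")              # 50%
for group, total in totals.items():
    print(f"{group}: error rate {errors[group] / total:.0%}")
# lighter-skinned men: 25%, darker-skinned women: 75% -- an "average"
# accuracy figure hides exactly the disparity Broussard is describing.
```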
More Important Than Fire
“AI is nifty, generative AI especially is a lot of fun to play with, but it's not going to transform the entire world. It's going to change a few things; it's not the invention of fire.”
Broussard is alluding to comments made by Google CEO Sundar Pichai a few years back when he described AI as "more profound than fire or electricity or anything we've done in the past."
It’s a refreshingly down-to-earth counterpoint to the views I often hear as someone who works closely with companies in the business of selling AI, as well as with companies whose reputations are built around the changes it can achieve.
My own experience and observations lead me to be somewhat more excited and optimistic about the upside than Broussard is. But that doesn’t mean I am any less cautious or concerned about the downside.
Broussard points to the work of organizations, institutions and campaign groups, including the Algorithmic Justice League, Equal AI, and NYU's Center for Critical Race and Digital Studies, as voices that will play a crucial role in the ongoing development of AI.
Rounding off our conversation, she tells me, "The thing that concerns me is when the conversations about AI do not focus on the real harms being experienced by real people … because if you’re trying to, say, put in biometric locks on people’s apartments or office doors, people with darker skin are … not going to be able to get into their apartments or offices as easily as other people.
"And that seems discriminatory and unnecessary; why not just use a key?"
You can click here to see my conversation with Meredith Broussard, associate professor of data journalism at NYU and author of the books Artificial Unintelligence and More Than a Glitch – Confronting Race, Gender and Ability Bias in Tech.
---------------------------------------------------------------------------------------------------------------
About Bernard Marr
Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling and award-winning author of 22 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations. He has over 2 million social media followers, 1.8 million newsletter subscribers and was ranked by LinkedIn as one of the top 5 business influencers in the world and the No 1 influencer in the UK.
Bernard’s latest books are ‘Business Trends in Practice: The 25+ Trends That Are Redefining Organisations’, ‘Future Skills: The 20 Skills and Competencies Everyone Needs To Succeed In A Digital World’ and ‘The Future Internet: How the Metaverse, Web 3.0, and Blockchain Will Transform Business and Society’.