Will AI/ML Built-in Biases Be Overcome?
https://lnkd.in/eW8aTiV

As Artificial Intelligence (AI) and Machine Learning (ML) permeate every facet of our lives, machines are coming closer and closer to acquiring human-like language capabilities. AI/ML operates by recognizing human-like cognitive patterns, matching information from a stimulus against information recalled from memory. Research has uncovered that, without intention, business and moral decisions are being made on the basis of deeply ingrained biases obscured within AI/ML learning and language patterns.

Bias is naturally embedded in AI/ML systems

Machines are fed mounds of data to extrapolate, interpret, and learn from. Unlike humans, algorithms are ill-equipped to consciously counteract learned biases: although we would like to believe AI/ML mirrors human thinking, it really doesn't. AI/ML has driven what many consider the newest industrial revolution by giving computers the ability to interpret human language, and without intention it has learned human biases as well.

So, where does the data used by AI/ML systems come from? Most of this historical data comes from the same type of people who created the algorithms and the programs that use them, which until recently has meant people who are socioeconomically above average and male. So, "without thinking" or intent, gender and racial biases have dominated the AI/ML learning process. An AI/ML system is not capable of "thinking on its feet" or reversing this bias once it makes a decision. The point is that AI/ML systems are biased because humans are innately biased, and AI/ML systems are not capable of moral decisions the way humans are; at least not yet, anyway.

Research has shown recruiting (HR) software is biased

Much research shows that as machines acquire human-like language capabilities, they also absorb deeply ingrained human biases concealed within language patterns. Within recruiting (HR) selection software, this means a resume may not make the "first cut" because of how its language matches the software's patterns, not because of the candidate's skills. Writing resumes has become both an art and a science; it now demands the skills of a data scientist coupled with those of a professional writer, someone highly language-educated with an analytical mind. How many professional writers are also capable data scientists? Our educational system needs to address this, because I believe everyone will need to be a highly skilled data scientist or have quick, easy access to one.
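
To make the mechanism concrete, here is a deliberately naive sketch of a keyword-overlap screener. It is purely illustrative and not any vendor's actual product, but it shows how the phrasing of a resume, rather than the underlying skill, can decide who makes the first cut.

```python
# Illustrative only: a toy keyword-overlap screener. Real HR/ATS software is far
# more sophisticated, but the failure mode is the same -- wording drives the cut.
def screen(resume: str, job_description: str, threshold: float = 0.3) -> bool:
    resume_words = set(resume.lower().split())
    jd_words = set(job_description.lower().split())
    overlap = len(resume_words & jd_words) / len(jd_words)
    return overlap >= threshold

jd = "data scientist with machine learning and statistics experience"
print(screen("built machine learning models with strong statistics background", jd))  # True
print(screen("taught myself predictive modelling and applied maths", jd))             # False: similar skill, different wording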

Recent research has shown that implicit word-association tests, which measure how strongly words associate with pleasant versus unpleasant terms, can expose human psychological biases in AI/ML systems. Words related to "flowers" were scored as more pleasant than words related to "insects." Gender bias shows up in professional contexts: "female" and "woman" were associated with humanities professions and with the home, while "male" and "man" were associated with math, science, and engineering professions. European American names perceived as more Anglo-Saxon were heavily associated with words like "gift" or "happy," while African American names were associated with unpleasant words.
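
Here is a minimal sketch, loosely in the spirit of those word-embedding association tests: it measures whether a target word sits closer, by cosine similarity, to a pleasant or an unpleasant set of attribute words. The word lists are illustrative, and the `vectors` dictionary of pre-trained embeddings is assumed to be loaded elsewhere.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def pleasantness(word, pleasant, unpleasant, vectors):
    """Mean similarity to pleasant terms minus mean similarity to unpleasant terms.
    A positive score means the word leans toward the pleasant attribute set."""
    sim = lambda attrs: np.mean([cosine(vectors[word], vectors[a]) for a in attrs])
    return sim(pleasant) - sim(unpleasant)

# Illustrative attribute lists; `vectors` would be a dict of pre-trained embeddings.
pleasant   = ["happy", "gift", "love", "peace"]
unpleasant = ["hate", "failure", "pain", "ugly"]
# pleasantness("rose", pleasant, unpleasant, vectors) is expected to exceed
# pleasantness("wasp", ...); the same comparison run on names surfaces the bias above.
```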

Statistically, research shows that even with an identical coefficient of variation (CV) of 50%, a European American is still more likely to be interviewed than an African American.

“The coefficient of variation (CV) represents the ratio of the standard deviation to the mean, and it is a useful statistic for comparing the degree of variation from one data series to another, even if the means are drastically different from one another.”
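
As a quick illustration of that definition (made-up numbers only):

```python
import numpy as np

values = np.array([4.0, 6.0, 2.0, 8.0, 5.0])   # made-up sample data
cv = np.std(values, ddof=1) / np.mean(values)  # standard deviation divided by mean
print(f"coefficient of variation: {cv:.0%}")
```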

Because algorithms can reveal when they are biased, it follows that algorithms explicitly inherit the same social prejudices as the humans who programmed them. It is believed that, although it is a complicated task, AI/ML systems can be programmed to address this mathematical bias. Correction is already taking place at companies like Google and Amazon within their web search engines. Machine translation and web search construct mathematical representations of language in which the meaning of a word is distilled into a series of numbers (a word vector) based on which other words most frequently appear alongside it. This mathematical approach seems to capture deep cultural and societal language context more accurately than any dictionary definition could.
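
As a toy illustration of the word-vector idea: below, each word's "vector" is simply a count of the words that appear near it in a tiny corpus, so similarity emerges from shared context rather than from a dictionary. Real systems learn dense vectors (word2vec, GloVe, and their successors), but the principle is the same.

```python
import numpy as np

def cooccurrence_vectors(tokens, window=2):
    """Build a count-based vector for each word from the words within `window` positions of it."""
    vocab = sorted(set(tokens))
    index = {w: i for i, w in enumerate(vocab)}
    vecs = {w: np.zeros(len(vocab)) for w in vocab}
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                vecs[w][index[tokens[j]]] += 1
    return vecs

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

corpus = "the nurse helped the patient while the engineer fixed the machine".split()
vecs = cooccurrence_vectors(corpus)
print(cosine(vecs["nurse"], vecs["engineer"]))  # similarity driven purely by shared context words
```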

Can Bias in AI/ML be eliminated?

Eliminating inappropriate bias by modifying interpretation is not as easy as you might imagine. Language inference and interpretation is a subtle human trait, typically shaped by influences like socioeconomic background, gender, education, and race, the very ingredients of human bias. Programming algorithms to "understand" language without weakening their interpretive powers is extremely challenging. Selecting the single most appropriate interpretation, adding it to the decision tree, then choosing the next most appropriate interpretation, and so on down the tree, is what causes algorithms to mimic thinking. What if the first interpretation by AI/ML "goes down" what we humans believe is the wrong path, judged by human intellect and cultural and moral laws? Immediate course-correction input would be necessary as data accumulates along the decision tree and very minute behavioral steps are executed. How do we program morally and culturally acceptable laws into AI/ML systems? Who decides what those moral and cultural laws are?
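
A minimal sketch of that one-choice-at-a-time process (all interpretations and scores here are hypothetical): the system commits to the single highest-scoring interpretation at each step, with no built-in way to backtrack if an early choice was biased.

```python
def greedy_interpret(steps):
    """Pick the single highest-scoring interpretation at each step; no backtracking."""
    path = []
    for candidates in steps:  # each step maps interpretation -> score
        best = max(candidates, key=candidates.get)
        path.append(best)
    return path

# Hypothetical scores; the system commits to the top option at each step with no course correction.
steps = [
    {"nurse -> she": 0.9, "nurse -> they": 0.7},
    {"she assists the doctor": 0.8, "she leads the team": 0.3},
]
print(greedy_interpret(steps))
```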

Amazon, Google, IBM, Microsoft, and many others have been evaluating bias within their AI/ML platforms, trying to understand both the problem and the solution. Amazon has even stopped using AI/ML software as a recruiting and employment tool. After many years of research, what has been determined is that because AI/ML replicates the patterns of the predominantly male engineers who build the software and systems, the patterns it simulates are of their making. Most major companies are beginning to look at the biases their AI/ML systems have created and are trying to "find a cure."

One suggestion is to build extreme diversity into the AI/ML development team, with constant diversity oversight from within. Another is to create an AI/ML supervisory and compliance body to police the systems and apply diverse course corrections. Such a human body would be extremely powerful once empowered and would ultimately become our AI/ML moral authority. Are we entering a science fiction scenario similar to Orwell's "1984"? Don't you see evidence of a global race for a one-world economic, and possibly moral, authority through AI/ML domination? Whoever dominates the creation and deployment of AI/ML platforms could affect not only small global decisions but major ones as well.


Robin Austin

CTO/CISO @ Colliers Group

3y

Thank you for reading my article. There are many points of positive implementation. We just need to incorporate all the possibilities, because keeping this top of mind means inclusion.

Mark Williams

Insurance Law Specialist | Public Liability | Professional Indemnity | Life Insurance | Defamation Lawyer

5y

I was just reading about AI and ML the other day on LinkedIn, though they had the opposite opinion! Great to get both sides.

