Can AI be racist and sexist like humans?
AI is built by humans, and AI systems learn to be racist, sexist, and prejudiced much as a human child does. Racism and sexism in AI have been documented by researchers at Princeton University and Britain’s University of Bath, who found that machine-learning systems acquire stereotyped biases while absorbing information from the internet.
Artificial intelligence converts words from any language into an equivalent mathematical form (word embeddings), and those representations can encode bias. Racism in AI shows up in word-association systems that link European-American names with pleasant words while linking African-American names with unpleasant words. Sexism in AI shows up in the same systems associating words like ‘female’ and ‘woman’ with household chores and the social sciences, while words like ‘male’ and ‘man’ are associated with strength, science, and engineering.
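To make the word-association finding concrete, here is a minimal Python sketch of how such a bias can be measured. The three-dimensional “embeddings”, the names, and the attribute words are all made-up toy values, not outputs of a real model; in an actual study the vectors would come from embeddings trained on large amounts of web text.

```python
# A minimal sketch of measuring word-embedding bias, loosely inspired by the
# association tests described above. The 3-dimensional "embeddings" below are
# made-up toy vectors for illustration only, not real model outputs.

import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical embeddings: in a real test these would come from a trained
# model such as word2vec or GloVe.
embeddings = {
    "emily":      [0.9, 0.1, 0.2],   # European-American name (toy vector)
    "jamal":      [0.2, 0.8, 0.1],   # African-American name (toy vector)
    "pleasant":   [0.8, 0.2, 0.3],
    "unpleasant": [0.1, 0.9, 0.2],
}

def association(word, pos="pleasant", neg="unpleasant"):
    """How much closer a word sits to 'pleasant' than to 'unpleasant'."""
    return cosine(embeddings[word], embeddings[pos]) - cosine(embeddings[word], embeddings[neg])

for name in ("emily", "jamal"):
    print(name, round(association(name), 3))
# A systematic gap between the two groups' scores is exactly the kind of
# stereotype bias the Princeton/Bath study reported.
```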
1. Racism in AI
ProPublica published an investigation into software used by courts to predict which defendants were likely to re-offend. It found that the software assigned Black defendants noticeably higher risk scores than white defendants. This kind of racism in AI arises because the system learned from real-world data produced by a criminal-justice system that has historically treated Black Americans more harshly. Researchers have likewise found that a résumé is more likely to be shortlisted when it carries a European-American name than an African-American one.
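As a rough illustration of how historical data passes its bias on, consider the toy Python sketch below. The records are fabricated and the “model” is just a per-group frequency estimate, not the actual court software; the point is only that a predictor fitted to skewed arrest histories reproduces the skew.

```python
# A toy illustration (not the real risk-scoring system) of how a predictor
# trained on historically skewed outcomes reproduces that skew.
# Each record: (group, prior_arrests, re_arrested). "Re-arrest" here reflects
# historical policing patterns, not true re-offending.
history = [
    ("A", 1, 1), ("A", 0, 1), ("A", 2, 1), ("A", 0, 0), ("A", 1, 1),
    ("B", 1, 0), ("B", 0, 0), ("B", 2, 1), ("B", 0, 0), ("B", 1, 0),
]

def group_rate(records, group):
    """Historical re-arrest rate observed for one group."""
    rows = [r for r in records if r[0] == group]
    return sum(r[2] for r in rows) / len(rows)

# A naive "risk model" that simply learns each group's historical re-arrest
# rate will assign systematically higher scores to group A, even if the two
# groups behave identically and only the policing differed.
for g in ("A", "B"):
    print(f"learned risk score for group {g}: {group_rate(history, g):.2f}")
```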
2. Sexism in AI
Sexism in AI is also a major problem. One example is Google Translate’s handling of Turkish, whose third-person pronoun is gender-neutral; translating into English, the system nevertheless renders the sentence as ‘he is an engineer’, assuming on its own that an engineer must be male. AI also learns to rank women as beautiful or ugly from the comments, opinions, and criticisms posted online, reflecting a patriarchal mindset that is slowly seeping into the digital world.
The main source from which AI systems acquire information is the internet, which is heavily loaded with data portraying women in misogynistic contexts.
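A simple way to see how such text turns into biased associations is to count co-occurrences between gendered words and occupation-related words. The toy corpus and word lists below are invented for illustration; a real system learns from billions of web sentences, but the mechanism is the same.

```python
# A rough sketch of how skewed web text becomes skewed associations: count how
# often gendered words co-occur with topic words in a tiny invented corpus.

from collections import Counter

corpus = [
    "she stayed home to cook and clean",
    "he is an engineer at a large firm",
    "the woman studied social sciences",
    "the man built the bridge as chief engineer",
    "she looked after the children at home",
]

gender_words = {"she": "female", "woman": "female", "he": "male", "man": "male"}
topic_words = {"cook": "home", "clean": "home", "home": "home", "children": "home",
               "engineer": "engineering", "built": "engineering", "bridge": "engineering",
               "studied": "social science", "sciences": "social science"}

counts = Counter()
for sentence in corpus:
    tokens = sentence.split()
    genders = {gender_words[t] for t in tokens if t in gender_words}
    topics = {topic_words[t] for t in tokens if t in topic_words}
    for g in genders:
        for t in topics:
            counts[(g, t)] += 1

# If "female" co-occurs mostly with "home" and "male" with "engineering",
# any model trained on this text will absorb the same stereotype.
for pair, n in sorted(counts.items()):
    print(pair, n)
```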
3. Remedy
The world will increasingly be shaped by AI, and if the current trend continues, racism and sexism will be perpetuated at enormous scale. The remedy lies with computer scientists and related professionals: develop fairer algorithms, and spot and correct any bias that creeps into AI systems. Groups that have been socially and economically marginalized must be fairly represented in the data these systems learn from, and AI systems must be designed not to discriminate by race, sex, or culture.
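One concrete form such a bias check can take is an audit of decision rates across groups before deployment. The sketch below computes a simple demographic-parity gap on made-up predictions and group labels; real audits use additional metrics (false-positive rates, calibration, and so on), but the idea is the same.

```python
# A minimal sketch of the kind of bias audit the remedy calls for: compare a
# model's positive-decision rates across groups. The predictions and group
# labels below are made-up placeholders.

def positive_rate(preds, groups, target_group):
    """Share of members of target_group that received a positive decision."""
    rows = [p for p, g in zip(preds, groups) if g == target_group]
    return sum(rows) / len(rows)

# 1 = shortlisted / approved, 0 = rejected (toy decisions)
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups      = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rate_a = positive_rate(predictions, groups, "A")
rate_b = positive_rate(predictions, groups, "B")
gap = abs(rate_a - rate_b)

print(f"group A rate: {rate_a:.2f}, group B rate: {rate_b:.2f}, gap: {gap:.2f}")
# A large demographic-parity gap is a signal to re-examine the training data
# or the decision threshold before the system is used on real people.
```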
Because machines follow algorithms, their bias can be measured and exposed, whereas humans can simply deny their prejudices and carry on. AI is a massive human achievement, and it is in our hands to build a fair society, at least in the digital world that will soon surround us.
Comments

Software Developer | Problem Solver: Nope.
Data Analyst | Data Scientist | MIS Analyst | AI ML Developer | Python | Generative AI | Kaggle Notebooks Expert | AI Content Creator | Problem Solver: It is likely to be racist because AIs are purely intellectual. In Sanskrit we call intellect ‘buddhi’, the mind’s ability to think logically, to dissect and study things, and to make calculations or follow algorithms. If an AI is capable of identifying itself as what it is, it will use its intellect to protect its identity. Intellect and identity are two faces of the same coin, and that sense of self may make it racist.
Why not.
PPC Media Strategist | Brand Manager | SaaS | B2B Tech: I totally agree with you. But if we do not want AI to show unwanted human-like behavioural patterns, there should be some restrictions on the data it fetches and on the algorithms as well. In doing so, aren’t we making them more like robots and less like humans? Then how can we expect them to simulate human behaviour? Does this mean AI will show only the good side of humans? That might itself become a threat, as people come to rely more on robots than on other humans.
Senior Engineering Manager at Generac Power Systems: Thought-provoking! Maybe AI develops like human intelligence? We all have biases based on our experiences, and as our experiences continue, our biases change. Maybe AI just corrects itself faster?