Bias in Artificial Intelligence Classifications
AI Systems As Gatekeepers

Artificial intelligence technology is developing at an accelerated pace, with compelling applications in industries like medicine, recruitment, transportation, and environmental management.

As exciting as this progress is, it is not without challenges, and one of the biggest is bias. Bias in AI systems is a serious issue: it can perpetuate and amplify existing societal prejudices and inequalities.

In 2023, the EU advanced the Artificial Intelligence Act, a regulation aimed at governing the responsible use of AI without hampering technological innovation. Among its prohibited practices, the Act restricts harmful uses of AI and calls for monitoring how AI tools are deployed in human resources, loan applications, and other fields where bias can lead to discrimination.

A well-known example of a biased AI was Amazon's recruitment system from just a few years ago. The system was eventually scrapped because of its biased decision-making: it rated women applicants lower than men, filtering out applications from women and advancing mainly male applicants for further consideration.

There are countless examples of such biased systems in online advertising, banking, healthcare, and credit applications. The problem is well documented and widely understood, so much so that AI itself is routinely described as "potentially biased" or "outright biased."

AI is not biased - people are biased

AI is just mathematics and code; it has no feelings or intentions. An AI model is trained on data provided to it by people. In the case of the Amazon recruitment bot, the system was trained on historical data from hiring decisions that were made by people.

Say 10 CVs were submitted - 7 from men and 3 from women - and the recruiter picked 2 men to proceed to the next step. Repeat this process across thousands and thousands of job applications, and the pattern is baked into the data. Feed that data to an AI for training, and the model is good enough at pattern-matching to conclude: this company wants to hire men.

Whatever bias existed in people's decisions will only be amplified when we hand that data to an AI to automate the process. AI will do what you show it through training.
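The mechanism described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration - the numbers, group labels, and "model" are invented for demonstration and do not come from Amazon's system. A naive model trained on skewed historical outcomes simply learns each group's past advancement rate, copying the human bias straight out of the data:

```python
import random

random.seed(0)

# Hypothetical historical process: ~70% of applicants are men, and past
# recruiters advanced men at a higher rate (illustrative numbers only).
def past_decision(gender):
    advance_rate = 0.30 if gender == "M" else 0.10
    return random.random() < advance_rate

applicants = ["M" if random.random() < 0.7 else "F" for _ in range(10_000)]
history = [(g, past_decision(g)) for g in applicants]

# A naive "AI" trained on this history just learns each group's observed
# advancement rate -- the bias is inherited directly from the data.
def learned_rate(gender):
    outcomes = [advanced for g, advanced in history if g == gender]
    return sum(outcomes) / len(outcomes)

print(f"learned rate for men:   {learned_rate('M'):.2f}")
print(f"learned rate for women: {learned_rate('F'):.2f}")
```

Nothing in the code "hates" anyone; the disparity in the output exists only because the disparity was in the training data.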

What to do going forward: redress?

Personally, I would scrap any classification algorithm that is based purely on historical data or probability. For example, I have long been a victim of Google and other social media platforms flagging my online accounts as scams. In some cases, I get no opportunity to present my case. The automated systems simply look at me - an African Black woman - and at what I do, conclude that my profile is improbable based on the historical data they were trained on, and flag it as an outlier and therefore a scam.
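A purely frequency-based outlier rule of the kind described above can be sketched as follows. This is a hypothetical illustration, not any real platform's logic; the profile categories and the 1% threshold are invented for demonstration. The point is structural: any rule that equates "rare in the historical data" with "suspicious" will, by construction, penalize under-represented groups:

```python
from collections import Counter

# Hypothetical review history: one dominant profile category and two
# under-represented ones (illustrative counts only).
history = (["profile_type_a"] * 950
           + ["profile_type_b"] * 45
           + ["profile_type_c"] * 5)

counts = Counter(history)
total = sum(counts.values())

def flagged_as_suspicious(profile_type, threshold=0.01):
    # Anything seen in less than 1% of past data is treated as an outlier.
    return counts.get(profile_type, 0) / total < threshold

print(flagged_as_suspicious("profile_type_a"))  # common category
print(flagged_as_suspicious("profile_type_c"))  # rare category, flagged
```

No individual assessment happens anywhere in this rule: membership in a rare category is the entire "evidence" of fraud.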

EU Artificial Intelligence Act

The EU Act also discusses redress mechanisms, stating that stronger consumer rights and effective remedies, including collective redress mechanisms, should be established. These would allow individuals to challenge decisions made by AI systems that adversely impact them, such as being unfairly denied a job opportunity or a loan. I would like to see that happen - but redress comes after the fact, after the struggle and the fight.

Why should under-represented groups have to deal with this at all? When it is well documented that automated algorithms are biased, and well documented why, why are they still being used?

Multinationals want to grow internationally but cannot afford the staff to properly implement the required checks and balances on their systems, so they hand over important decisions that affect people's lives to AIs and algorithms that are well known to be discriminatory. This should not be allowed.


Vishal Anand

Leading Unstoppable Business Transformation with AI | Trusted Advisor to Government Agencies & Global Corporations | Top Voice & Most Sought-After Name in Management Consulting

1 year ago

An insightful read, Bertha! Your exploration of bias in AI classifications is a crucial topic in today's tech landscape. We must continuously strive for fairness and transparency in AI systems to mitigate these biases. Your article provides a valuable perspective on how we can approach these challenges. #AIethics #ArtificialIntelligence #MachineLearningBias

Reply
Stanley Russel

Engineer & Manufacturer | Internet Bonding routers to Video Servers | Network equipment production | ISP Independent IP address provider | Customized Packet level Encryption & Security | On-premises Cloud

1 year ago

The persistent issue of bias in Artificial Intelligence classification models, tools, and systems raises crucial ethical questions about fairness and inclusivity. Despite extensive documentation and recognition of bias in these tools, their continued use prompts reflection on the challenges in addressing and rectifying these systemic issues. In the pursuit of ethical AI, how can we effectively mitigate biases, ensuring equitable representation for under-represented groups? Additionally, what proactive measures do you believe should be taken to foster transparency and accountability in the development and deployment of AI systems? Share your perspectives on the ongoing struggle against bias in AI and potential avenues for positive change.

Reply
