Algorithm Bias: AI's Achilles heel
Dr. Joy Buolamwini, Computer Scientist and Digital Activist at MIT Media Lab

What is algorithm bias?

Dr. Joy Buolamwini, the ‘poet of code’, lays algorithm bias out like this: who codes matters, how we code matters, and why we code matters. She came to this conclusion the hard way. As part of an art project at the MIT Media Lab called the Aspire Mirror, she had users wear plain white masks as a kind of digital canvas onto which different faces could be projected using a webcam and a monitor. Depending on your mood, you could sit in front of the ‘mirror’ and become anything you wanted. It’s a fun idea and something we see a lot of now in image filters that add ears, noses, or whatever takes your fancy.

However, Dr. Buolamwini had a problem. The generic facial recognition software she used could not detect her face at all until she put on a white mask. The software knew what a face was, but it couldn’t recognise her features as one.

Dr. Buolamwini used this experience in her subsequent talks as an example of algorithm bias - an absence of essential information that can lead to “exclusionary experiences” and “discriminatory practices”.

Unfortunately, facial recognition software continues to be problematic, especially with the false identification of people of colour. This isn’t necessarily by design: nobody sets out to create software with this critical flaw. The software was most likely trained and tested using predominantly white faces, making accurate identification of other skin tones erratic and untrustworthy.
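
To make this concrete, here is a minimal, hypothetical audit sketch in Python that surfaces exactly this failure mode: measuring accuracy separately per demographic group instead of in aggregate. The predictions, labels, and group names are invented for illustration, not drawn from any real benchmark.

```python
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Compute recognition accuracy separately for each group.

    A large gap between groups is a red flag that the training
    data under-represented some of them.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Invented audit data: near-perfect on one group, unreliable on
# the other -- the pattern Buolamwini's research exposed.
preds  = ["A", "B", "A", "C", "D", "B", "A", "C"]
labels = ["A", "B", "A", "C", "A", "C", "D", "C"]
groups = ["lighter", "lighter", "lighter", "lighter",
          "darker", "darker", "darker", "darker"]

print(accuracy_by_group(preds, labels, groups))
# {'lighter': 1.0, 'darker': 0.25}
```

A single aggregate score (62.5% here) would hide that gap entirely, which is why disaggregated evaluation matters.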

Algorithm Bias is a challenge to everyone working in AI

Let’s look at another example in one of the verticals we work in: human resources. In 2014, Amazon developed an AI tool to make hiring easier by automating its candidate search. Trained on ten years of CVs submitted to the company, the tool searched the web for suitable candidates to fill new jobs, rating them with a star system. The results were shocking. Amazon couldn’t find a way to make the tool ignore gender as a factor, and consequently women in technical roles were downranked, not for lack of experience but because fewer women had held those positions. The system wasn’t solving the problem of finding talent; it was replicating a problematic environment - a clear case of algorithm bias. The tool was abandoned in 2018.
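
Why couldn’t Amazon simply delete the gender column and move on? Because models can rediscover it from correlated features. The sketch below, built entirely on synthetic data, shows a plain logistic regression trained on biased historical outcomes giving a large negative weight to a hypothetical ‘proxy’ feature that merely tracks gender, even though the explicit gender column was never fed to the model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic "historical CVs": the outcome reflects a biased past process.
gender = rng.integers(0, 2, n)            # 0 = male, 1 = female (synthetic)
skill = rng.normal(0.0, 1.0, n)           # genuinely job-relevant signal
proxy = gender + rng.normal(0.0, 0.3, n)  # e.g. gendered wording on the CV

# Past hiring favoured men regardless of skill -- the bias we want gone.
hired = (skill + 1.5 * (1 - gender) + rng.normal(0.0, 1.0, n)) > 1.0

# Drop the explicit gender column; train only on skill and the proxy.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

print(model.coef_)
# The proxy column still gets a large negative weight: the model has
# rediscovered gender from correlated features and replicated the bias.
```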

Bias is a massive challenge for everyone working in AI. If a technology can’t be trusted, it won’t be developed, so it’s essential to balance automation with human supervision - which is where our decision intelligence platform comes in.

What is Decision Intelligence?

Decision Intelligence (DI) combines data analysis, artificial intelligence, and human judgment to optimise decision-making. It leverages predictive models, algorithms, and insights to drive informed choices, improve outcomes, and adapt to dynamic environments.

At Galvia, we include Explainable AI and Uncertainty Quantification as components of our Decision Intelligence platform to safeguard our clients’ decision-making.

Explainable AI helps us understand how and why AI systems make decisions, fostering transparency, accountability, and trust in their outcomes.
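
One widely used family of explainability techniques is feature attribution. As an illustrative sketch (not a description of our platform’s internals), permutation importance asks how much a model’s accuracy drops when each input feature is shuffled; the features the model genuinely relies on cause the biggest drops.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)

# Toy data in which only the first feature actually matters.
X = rng.normal(size=(2000, 3))
y = (X[:, 0] > 0).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=1).fit(X, y)

# Shuffle each feature in turn and measure the accuracy drop: a large
# drop means the model truly depends on that feature.
result = permutation_importance(model, X, y, n_repeats=5, random_state=1)
for name, score in zip(["feature_0", "feature_1", "feature_2"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```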

Uncertainty Quantification measures and evaluates the uncertainty - or lack of confidence - in predictions, incorporating statistical analysis and risk assessment to ensure reliable decision-making despite variability, noise, and incomplete information.
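
One simple, widely used way to quantify uncertainty is ensemble disagreement: train several models on resampled versions of the same data and treat the spread of their predictions as a confidence signal. The toy sketch below is illustrative only.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)

# Toy training data: the class depends on the sum of two features.
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Train several trees on bootstrap resamples of the same data.
ensemble = []
for seed in range(25):
    idx = rng.integers(0, len(X), len(X))
    ensemble.append(DecisionTreeClassifier(random_state=seed).fit(X[idx], y[idx]))

# For a borderline input the trees disagree, and that disagreement is
# the uncertainty signal a decision platform can act on.
x_new = np.array([[0.03, -0.01]])
votes = np.array([tree.predict(x_new)[0] for tree in ensemble])
print("mean vote:", votes.mean())  # near 0.5 = low confidence -> flag for review
```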

Confused? Check out our full glossary of AI-related terms.

Focused data sources

Buolamwini’s experience with algorithm bias revealed how a shallow data set can have devastating consequences. It’s a lesson Galvia addresses at the ideation stage of every project.

At the moment we work in three key verticals: education, human resources, and project management. There are important reasons for this, one being that the narrower the focus of the data we use to train our AI and automated features, the more accurate the insights can be. By setting strict parameters, we ensure the historical data we use is as uniform as possible. Our customisable dashboards give you a ‘single pane of glass’ to show how your information is working. If there are anomalies in your data, you’ll be able to see and address them accordingly - they could be a red flag that something is going wrong, a sign of poor data-gathering practices, or something more important. Our platform also grows with your operation, ensuring consistency regardless of your size.
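
As a toy illustration of the anomaly-spotting idea (not our dashboard’s actual logic), even a simple z-score check against historical values is enough to surface the kind of outlier a human should then investigate:

```python
import statistics

def flag_anomalies(values, z_threshold=2.0):
    """Flag points far from the historical mean -- the kind of anomaly
    a dashboard would surface for human review. Illustrative only."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    return [(i, v) for i, v in enumerate(values)
            if abs(v - mean) / stdev > z_threshold]

weekly_queries = [120, 118, 125, 122, 119, 121, 480, 117]  # invented numbers
print(flag_anomalies(weekly_queries))
# [(6, 480)] -- worth checking: a real spike, or a data-collection bug?
```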

Human supervision monitors the limits of usability

Every innovation has to be sanity-tested. Amazon clearly fell down in its hiring project by not recognising the tool’s failure quickly enough.

Decision intelligence is not about handing significant decisions off to AI; it’s about getting AI to work for you by removing basic tasks and providing insights to inform future action. We use automation to generate alerts and visualisations, but higher-order decisions still require human action.

Our project with the University of Galway saved 500 staff hours, with 84% of queries resolved instantly by our Cara chatbot. This freed up resources to look at higher-order problems such as well-being and mental health. We started the project knowing which areas could be dealt with on the spot and which required escalation. We could differentiate simple questions, such as ‘What time does the library open?’, from more serious issues like ‘I’ve been feeling down for a long time’.
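
A heavily simplified sketch of that routing logic might look like the following. The keyword list, confidence threshold, and routing targets are all invented for illustration; Cara’s real escalation rules are more sophisticated.

```python
# A minimal sketch of confidence-gated escalation, not Cara's actual logic.
SENSITIVE_TERMS = {"down", "depressed", "anxious", "hopeless", "lonely"}

def route(query: str, intent: str, confidence: float) -> str:
    """Answer routine queries automatically; escalate anything sensitive
    or low-confidence to a human."""
    words = set(query.lower().split())
    if words & SENSITIVE_TERMS:
        return "escalate_to_support_team"
    if confidence < 0.8:  # assumed threshold, tuned per deployment
        return "escalate_to_staff"
    return f"auto_answer:{intent}"

print(route("What time does the library open?", "library_hours", 0.95))
print(route("I've been feeling down for a long time", "unknown", 0.40))
```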

Constant refining of the customer experience

AI relies on a constant flow of data to refine the customer experience. Whether it’s a paragraph of written feedback on the accuracy and speed of a response or just a simple thumbs-up or thumbs-down from an agent, it all goes back into the data used to train and refine the AI over time. We’re all about the customer experience because we know it encourages more engagement - with brands, within companies, and for those looking for a simple piece of information. Every interaction has value, and seeing how people interact with our chatbots gives us an opportunity to scan for bias and make sure the information we give evolves accurately with our customers.
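
Mechanically, that feedback loop can be as simple as appending every rated interaction to a dataset that later retraining runs and bias audits draw on. The schema below is hypothetical:

```python
import json
import time

def log_feedback(query, response, rating, comment=None, path="feedback.jsonl"):
    """Append one interaction to the dataset used to refine the model
    and to audit it for bias. Field names are illustrative."""
    record = {
        "ts": time.time(),
        "query": query,
        "response": response,
        "rating": rating,    # "up" / "down" from an agent, or free text
        "comment": comment,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_feedback("What time does the library open?",
             "The library opens at 8am.", "up")
```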

Talk to our team today to create an AI model that works for your organisation.
