Does artificial intelligence have a bias problem?

Artificial intelligence (AI) has experienced a quantum leap in the past year, with a resume that includes automating software development and generating art in mere seconds. And the tech only seems to be gaining momentum. Following the feverish success of ChatGPT, OpenAI launched the latest version of its AI-powered chatbot, GPT-4, back in March.

However, not everyone is eagerly awaiting the next iteration. An open letter signed by industry leaders demanded a slowdown in generative AI, while last month Italy took the drastic step of banning ChatGPT due to data privacy concerns. And as we begin to learn more about the tech, another issue is becoming increasingly apparent – bias.

· Generative AI tools like ChatGPT draw from billions of data points to form an answer – potentially propagating biased views on an unprecedented level.

· One study found that 97% of DALL·E 2's images showing positions of authority depicted white men.

· A World Economic Forum report revealed that only 22% of AI professionals across the globe were women.

How bias can creep into AI

The issue of bias in computer programs is not an entirely new one; algorithms used to aid human decision-making were perpetuating prejudicial behaviours long before the advent of ChatGPT. In 1988, the UK Commission for Racial Equality found a British medical school guilty of algorithmic discrimination. The computer program it had used – intended to reduce the work of selecting candidates for interview – was judged to be biased against both women and applicants with non-European names.

Over three decades later, the technology behind these algorithms has become vastly more complex. With generative AI tools like ChatGPT – and Google's response, Bard – working off billions of data points, large language models are capable of scaling existing biases on an unprecedented level.

Generative AI tools scrape enormous amounts of data from the internet to inform their responses. To improve the accuracy and sophistication of these answers, the technology is pre-trained and fine-tuned. But it's an imperfect process. OpenAI says this is more akin to "training a dog" than traditional programming. Unlike your typical Labrador, though, the technology is permeated by data that represents inherently biased or inaccurate viewpoints. And sometimes it's the lack of data that proves problematic, causing inaccurate – and potentially prejudicial – decision-making.
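
To make that imperfect process concrete, here is a minimal sketch – with a toy corpus invented purely for illustration – of how a skew in training text becomes a skew in what a model learns. Real pipelines train on billions of documents, but the principle is the same: the associations that dominate the data end up dominating the model.

```python
# Toy illustration (corpus and occupations are invented): if the text a
# model learns from pairs "engineer" with "he" far more often than with
# "she", the learned associations inherit that skew.
from collections import Counter

corpus = [
    "the engineer said he would review the design",
    "the engineer said he fixed the bug",
    "the nurse said she would check the chart",
    "the engineer said she approved the release",
    "the nurse said she updated the notes",
]

def pronoun_counts(occupation: str) -> Counter:
    """Count gendered pronouns in sentences mentioning an occupation."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        if occupation in words:
            counts.update(w for w in words if w in ("he", "she"))
    return counts

for job in ("engineer", "nurse"):
    print(job, dict(pronoun_counts(job)))
# engineer {'he': 2, 'she': 1} / nurse {'she': 2} -- the skew in the
# data becomes the model's prior, before any fine-tuning corrects it.
```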

What bias in AI looks like

We know, then, that bias exists in AI – but what does this actually look like in practice? The answer is multi-faceted, as is the case with bias outside the digital sphere. Content generated by AI may rely on data associations that display racial and gender bias, or propagate stereotypes and other narrow-minded viewpoints. This has been identified in OpenAI's image generator, DALL·E 2, where one study found that 97% of images showing positions of authority – such as C-suite job titles – depicted white men.
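
One well-documented way such data associations are surfaced is by probing word embeddings, in the spirit of the word-embedding association test of Caliskan et al. (2017). The sketch below uses invented 3-dimensional vectors purely for illustration; an actual audit would load embeddings trained on real text, such as word2vec or GloVe.

```python
# Hypothetical embedding-association check. The vectors are placeholders
# invented for this sketch, not real trained embeddings.
import numpy as np

embeddings = {
    "he":    np.array([0.9, 0.1, 0.0]),
    "she":   np.array([0.1, 0.9, 0.0]),
    "ceo":   np.array([0.8, 0.2, 0.1]),
    "nurse": np.array([0.2, 0.8, 0.1]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def gender_lean(word: str) -> float:
    """Positive => closer to 'he'; negative => closer to 'she'."""
    v = embeddings[word]
    return cosine(v, embeddings["he"]) - cosine(v, embeddings["she"])

for word in ("ceo", "nurse"):
    print(f"{word}: {gender_lean(word):+.2f}")
# In real embeddings, occupation words often lean towards one gender,
# reflecting the stereotyped associations present in the training text.
```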

AI-governed decision-making processes can also be compromised. In its recently published AI whitepaper, the government highlighted how AI concerns could extend to assessing the worthiness of loan or mortgage applications. And with algorithms pulling the strings across more aspects of our daily lives – including facial recognition – AI-fuelled bias could become increasingly invasive and damaging.
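
How might a biased loan or mortgage model be caught in practice? One simple audit is a demographic parity check: compare the model's approval rates across applicant groups. The sketch below uses records invented for illustration; a real audit would run over the model's actual decisions and the relevant protected attributes.

```python
# Minimal demographic parity check on hypothetical loan decisions.
# The (group, approved) pairs below are invented for illustration.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def approval_rate(group: str) -> float:
    """Share of applicants in a group that the model approved."""
    outcomes = [ok for g, ok in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate("group_a")
rate_b = approval_rate("group_b")
print(f"group_a: {rate_a:.0%}, group_b: {rate_b:.0%}")
print(f"demographic parity gap: {abs(rate_a - rate_b):.0%}")
# A persistently large gap flags a model whose approvals differ
# systematically by group and warrants closer investigation.
```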

So, does AI have a bias problem? Almost certainly, but the answer may be a reflection of existing issues in our society: namely, underrepresentation in technology.

Is it possible to cleanse bias from AI?

Truly removing bias from technology means eliminating it on a human level – something easier said than done. A World Economic Forum report revealed that only 22% of AI professionals across the globe were women, symbolic of the wider gender gap in tech. If AI tools are designed by an unrepresentative body of people, the end product will miss out on the crucial input and questioning that make it more representative of society as a whole.

The significance of diversity in AI is well documented. In its review of bias in algorithmic decision-making, the government recommends improving diversity across a range of roles involved in tech development, while a Deloitte report found that a more diverse workforce is better equipped to identify and remove AI biases. Together with the right training, it's even possible that AI itself can be used as a tool to help eliminate bias and create fairer decision-making processes.

In order to change the narrative of AI bias and break the self-perpetuating cycle, more needs to be done to create an inclusive technology workforce – from supporting STEM education for girls, to showcasing AI role models and trailblazers from underrepresented groups. Generative AI is moving at breakneck speed, but the tech industry's standards of equality must catch up first.

Today marks International Girls in ICT Day 2023, with the theme for this year being "Digital Skills for Life". It's an opportunity to inspire and encourage girls to pursue a future in ICT, empowering them with the necessary skills, confidence and support to achieve their goals.

Anas Anchillath

People vs Technology,

1y

Great article, Mr. Tim. I feel there are overlooked issues – AI would not have emotional intelligence. Additionally, AI will make changes in every job and industry; it is a globally interlinked platform, with no time horizons and no culture shock.

Javier Martín Arroyal

● Consulting - Sales ● Business Development ● Analysis and Strategy ● MBA IE Madrid ● Management

1y

Although I believe in tech, there are a lot of circumstances around AI that can make it biased. We just need a person – with his/her own "circumstances" – in front of a computer to check the result. We are not machines.

Björn Lorenz

Analytical Chemistry, GMP & QC | Analytical thinker and conscientious

1y

I believe that AI is prone to bias. The challenge is how to ensure that the AI is making correct evaluations. For instance, if the AI is comparing the number of sources supporting A versus B, B could still be accurate, but how can the AI determine that? Another factor to consider is the AI's programming. If the AI is not limited by boundaries, it could potentially pose a threat to humanity. However, when the AI is programmed with boundaries, the personnel responsible for setting those boundaries may be biased themselves. I would expect that AI developed in countries with different cultures would also produce divergent results due to the cultural differences and worldviews present in these countries.

Jorge Melo Gomes

Strategy | Product | Technology | Science

1y

Are humans biased?
