Why people will be the greatest challenge to data science

Few trends in technology currently attract more attention than Artificial Intelligence. And while its roots can be traced back to ancient Greece – with myths like that of Pygmalion and its vision of an artificial being brought to life – AI has made substantial progress in the last couple of years.

AI has changed our daily lives in subtle but comprehensive ways – just think of intelligent personal assistants like Siri, Alexa, Cortana, and Google Assistant; cars that help you keep your lane or maneuver into and out of tight parking spots; or chatbots on websites that answer whatever questions you might have.

No doubt: AI is moving fast – and it is everywhere. One development I am following with particular interest is the topic of ethics and biased data in AI.

Consumers today are making carefully considered choices to buy from companies that stand for purposes, values, and beliefs they can identify with. Companies are reacting by making values a focus of their AI development, for example by establishing ethics boards and guidelines and by investing in research on algorithmic bias.

But businesses being held more accountable than ever for what they do and how they behave is only one part of the equation.

How AI can help – or hinder

AI is the broad term for applications in which machines perform human-like tasks. The general idea is to bring machines closer to significant functions of the human brain: learning, reasoning, and problem solving. So basically, AI mirrors human behavior – oftentimes including its biases.

Biased data is becoming an increasingly important issue as AI and machine learning models are used for decisions such as hiring, lending, or disease diagnosis. These models are built on algorithms – shortcuts people use to tell computers what to do.

Then again, algorithms never think for themselves. They rely on humans to do the thinking for them, and AI is only as good as the data it is trained on: no matter how large the dataset, the result will be fundamentally flawed if that data is incomplete, leaves out certain groups, or reinforces stereotypes.

As an example, it was reported earlier this year that a big tech company scrapped its e-recruiting platform because it showed bias against women. Its models had been trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Because most of those resumes came from men, the AI engine learned to penalize female candidates.
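To make that mechanism concrete, here is a minimal sketch in Python. Everything in it – the data, the feature names, the numbers – is invented for illustration; the point is only that a model trained on skewed historical decisions will faithfully reproduce the skew.

    # Hypothetical sketch: train a classifier on synthetic "historical hiring"
    # data in which past decisions under-selected one group, then inspect what
    # the model learned. All values here are invented for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    # Applicants with the same skill distribution in both groups...
    skill = rng.normal(0.0, 1.0, n)
    is_female = rng.integers(0, 2, n)  # 0 = male, 1 = female

    # ...but historical hiring labels that penalized the female group.
    hired = ((skill + rng.normal(0.0, 0.5, n) - 0.8 * is_female) > 0.5).astype(int)

    # A careless pipeline trains on skill AND the group attribute.
    X = np.column_stack([skill, is_female])
    model = LogisticRegression().fit(X, hired)

    # The model now penalizes the group attribute directly.
    print(f"weight on skill:     {model.coef_[0][0]:+.2f}")
    print(f"weight on is_female: {model.coef_[0][1]:+.2f}")  # negative: learned bias

Note that simply dropping the group column is not a full fix either, since other features can act as proxies for it.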

Gartner predicts that by 2022, 85% of AI projects will deliver erroneous outcomes due to bias in data, algorithms, or the teams responsible for managing them. Some impacts of AI are bigger than others, of course: models that determine outcomes such as loans or hiring opportunities deserve far more critical scrutiny than those that help a personal assistant answer questions about a nearby restaurant or the weather.
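For those higher-stakes models, one simple check a team can run is to compare selection rates across groups. The sketch below shows a symmetric variant of the "disparate impact" ratio (related to the four-fifths rule used in US hiring guidance); the arrays and the 0.8 threshold are hypothetical examples.

    # Hypothetical sketch: audit a model's outputs by comparing selection
    # rates between two groups. A ratio well below 1.0 (commonly below 0.8)
    # is a signal that the model deserves closer review, not proof of bias.
    import numpy as np

    def disparate_impact(predictions: np.ndarray, group: np.ndarray) -> float:
        """Ratio of the lower group selection rate to the higher one."""
        rate_0 = predictions[group == 0].mean()
        rate_1 = predictions[group == 1].mean()
        return min(rate_0, rate_1) / max(rate_0, rate_1)

    # Made-up model decisions (1 = selected) and group labels.
    preds = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 0])
    grp   = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
    print(f"disparate impact ratio: {disparate_impact(preds, grp):.2f}")  # 0.67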

Bias doesn’t come from AI algorithms – it comes from people

Speaking of personal assistants, a new UN study shows that they reinforce gender bias. According to the report, “Siri’s ‘female’ obsequiousness – and the servility expressed by so many other digital assistants projected as young women – provides a powerful illustration of gender biases coded into technology products (…).”

The study identifies programming as the main factor, with most of the programmers being white and male. Indeed, women are extremely under-represented in teams developing AI tools: According to the World Economic Forum’s latest Global Gender Gap Report, only 22% of AI professionals globally are female, compared to 78% who are male. 

Representation among other minorities is even lower. In April, The Guardian reported that only 2.5% of Google’s workforce is black, while Facebook and Microsoft are each at 4%, and little data exists on trans workers or other gender minorities in the AI field.

Sadly, the makeup of the AI field merely reflects the larger issue we continue to see across computer science and STEM fields in general. And while many initiatives and programs focus on closing this gender gap and fueling the next generation of female innovators, I believe we should tackle this lack of diversity sooner rather than later.

AI and machine learning are no longer confined to pure tech; they now shape entire businesses and society as a whole, and as that integration deepens, so does the urgency. The issue may be hard to fix, but many tech companies have recognized it and can build on existing progress in reducing bias and discrimination. Because biases can be understood and managed – if we learn to acknowledge them.

Nick T.

Co-Founder at W3D Technologies Inc.

4y

"..Gartner predicts that by 2022, 85% of AI projects will deliver erroneous outcomes due to bias in data, algorithms or the teams responsible for managing them.." All AI projects are biased by design, build and use, in my experience. Low data value is a significant source of bias for teams.. Today teams are a fusion of people, algorithms and platforms, all require bias. Bias is like risk, I think Bernd Leukert No risk, no value. Monitoring value contributions of risk and bias offers a clear view of waste and unique opportunities to value by design as observed by Bruce Mau Cheers!

Harald Mueller

Chief Technology Officer, Digital Supply Chain at SAP

5y

I'd say that today neither machines nor algorithms are biased in themselves. Their bias comes through environment, data, and experience, all shaped by humans. Today we can still influence what and how machines learn, and thus it is up to us to define their bias. The same applies to humankind, though we additionally face the challenge of epigenetic inheritance. But once machines decide for themselves what and how they learn and evolve, we are seriously in trouble. We had better define clear guardrails before that happens.
