Before We Move Forward with AI, We Must Address Diversity and Inherited Bias

Venture capital funding for AI startups reached record levels in recent years, rising 72% over 2017 to $9.33bn. The number of active AI startups in the US grew 113% from 2015 to 2018. As more money and resources are invested into AI, companies have the opportunity to address the crisis as it unfolds, said Tess Posner, chief executive officer of AI4ALL, a not-for-profit that works to increase diversity in the AI field. But despite an industry-wide push toward diversity and inclusion (D&I), female representation in the AI sector remains below average. A report by the AI Now Institute at New York University revealed a glaring diversity gap in the field of artificial intelligence, one that will affect how these systems approach data and make decisions. Even progressive companies such as Facebook and Google are lagging when it comes to AI diversity: women make up only 15% of AI research staff at Facebook, and the number is even lower at Google (10%). There is also significant variance among women of different ethnic backgrounds, with Caucasian women still favored over women from other minority groups. The overwhelming focus on ‘women in tech’ is too narrow and likely to privilege white women over others. We need to acknowledge how the intersections of race, gender, and other identities and attributes shape people’s experiences with AI.

Diversity in AI Development?

As AI transforms how you manage your workforce and complete day-to-day tasks, it is essential to watch for the possibility of bias at every level. AI requires a balance with human intelligence, but that human intelligence needs gender, cultural, and racial diversity to create solutions that can weigh multiple factors when making important decisions. The AI field, which is overwhelmingly white and male, is at risk of replicating or perpetuating historical biases and power imbalances, the report said. Examples cited include image recognition services making offensive classifications of minorities, chatbots adopting hate speech, and Amazon technology failing to recognize users with darker skin tones. The biases of systems built by the AI industry can be largely attributed to the lack of diversity within the field itself, the report said.

An experiment undertaken at the Massachusetts Institute of Technology (MIT), for example, involved testing three commercially available face-recognition systems, developed by Microsoft, IBM, and the Chinese firm Megvii. The systems correctly identified the gender of white men 99% of the time, but the error rate climbed to roughly 35% for darker-skinned women. Amazon's recognition software has shown similar problems, falsely matching 28 members of the US Congress to mugshots of people who had been arrested.
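
The kind of audit behind these findings can be reproduced in principle by breaking a system's predictions down by demographic subgroup instead of reporting a single aggregate accuracy number. Below is a minimal, hypothetical Python sketch of such a disaggregated evaluation; the group names and records are illustrative placeholders, not the MIT data.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted == actual:
            hits[group] += 1
    return {group: hits[group] / totals[group] for group in totals}

# Hypothetical audit records: (subgroup, predicted gender, actual gender).
audit = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "male", "female"),   # a misclassification
    ("darker-skinned female", "female", "female"),
]

for group, accuracy in accuracy_by_group(audit).items():
    print(f"{group}: {accuracy:.0%} accuracy")
```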

Biased AI is Built on Biased Data

Over 70% of all computer programmers are white males, and despite the best attempts at neutrality, we were raised in a society that inherently devalues women and people of color (POC), teaching us both explicitly and implicitly that they are less capable than white men. This colors our worldview and, in turn, the technology we create; we aren’t necessarily actively misogynistic or racist, but our environment allows us to perpetuate the biases ingrained in us by society unchallenged.

Amazon’s controversial Rekognition facial recognition AI struggled with darker-skinned women in particular, although separate analysis has found that other AI systems face similar difficulties with non-white males. Amazon also had to scrap a four-year-old recruitment matching tool because it had taught itself to favor male applicants over female ones. Equally qualified female candidates were ranked lower than their male counterparts, with some graduates of all-female colleges losing whole points because of their alma mater. The system was trained on resumes submitted by applicants over a 10-year period, who were overwhelmingly male (73% of Amazon’s leadership is male). Although the company built the technology to be neutral, it still taught itself to be biased based on the data it was given by the people who built it, data that reflected their reality: a (majority white) male-dominated industry.
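
This failure mode can be demonstrated with a toy model. The sketch below uses synthetic data, not anything from Amazon's actual system: the model is never shown gender directly, yet because the historical hiring labels it learns from penalized a correlated proxy (here, a hypothetical flag for graduating from a women's college), it learns to reproduce that penalty.

```python
# Toy illustration with synthetic data: a model trained on historically biased
# hiring outcomes learns to penalize a proxy feature even though gender itself
# is never an input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
womens_college = rng.binomial(1, 0.15, n)   # proxy attribute correlated with gender
skill = rng.normal(0, 1, n)                 # genuinely job-relevant feature

# Historical labels: past (biased) decisions penalized the proxy, independent of skill.
hired = ((skill + rng.normal(0, 1, n) - 1.0 * womens_college) > 0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, womens_college]), hired)
print("learned weights [skill, womens_college]:", model.coef_[0])
# The second weight comes out negative: the model has absorbed the historical
# penalty against the proxy attribute, because that is what its training data encodes.
```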

What Can We All Do to Eliminate Bias When We Are Building AI?

The report cautioned against addressing diversity in the tech industry by fixing the “pipeline” problem alone, that is, by focusing only on the makeup of who is hired. Men currently make up 71% of the applicant pool for AI jobs in the US, according to the 2018 AI Index, an independent report on the industry released annually.

The AI Now Institute suggested additional measures, including publishing worker compensation levels publicly, sharing harassment and discrimination transparency reports, and changing hiring practices to increase the number of people from underrepresented groups at all levels.

What about government bills? Additional efforts to increase transparency around how algorithms are built and how they work may also be necessary to fix the diversity problems in AI. In April 2019, US senators Cory Booker and Ron Wyden introduced the Algorithmic Accountability Act, a bill that would require algorithms used by companies that make more than $50m per year or hold information on at least 1 million users to be evaluated for bias.

In addition, we all can take the following steps:

o  Enhance awareness in the workplace and beyond to shift internal and public mindsets from a gender-biased industry to a more gender-neutral one. That way, the tech industry can become more appealing to all genders and deconstruct the prejudices that once shaped the sector.

o  Develop innovative AI software that can, for example, generate bias-free salary suggestions and projections based on a number of crucial factors, none of which take employees’ looks, ethnicity, or gender into account. Instead, the software analyzes variables such as education, certifications, experience, and performance to generate salary suggestions, as well as recommendations for bonuses and promotions (see the sketch after this list).

o  Build a discrimination feedback loop into AI systems so that any bias that is detected is fed back and corrected, improving future results.
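
As a concrete illustration of the last two points, here is a minimal, hypothetical Python sketch: a compensation model is fit only on job-relevant features, and a simple audit step compares its suggestions across a protected attribute so that any residual disparity can be fed back and corrected. All feature names and records are illustrative assumptions, not a description of any real product.

```python
# Sketch: fit a salary model on job-relevant features only, then audit it by group.
import numpy as np
from sklearn.linear_model import LinearRegression

ALLOWED_FEATURES = ["years_experience", "education_level",
                    "certifications", "performance_score"]

def fit_salary_model(rows, salaries):
    """rows: list of dicts of employee attributes; only allowed features are used."""
    X = np.array([[row[f] for f in ALLOWED_FEATURES] for row in rows], dtype=float)
    return LinearRegression().fit(X, np.array(salaries, dtype=float))

def audit_by_group(model, rows, group_key):
    """Feedback-loop step: compare mean suggested salary across a protected attribute."""
    suggestions = {}
    for row in rows:
        x = np.array([[row[f] for f in ALLOWED_FEATURES]], dtype=float)
        suggestions.setdefault(row[group_key], []).append(float(model.predict(x)[0]))
    return {group: sum(values) / len(values) for group, values in suggestions.items()}

# Made-up records: 'gender' is kept only for auditing and is never passed to the model.
rows = [
    {"years_experience": 5, "education_level": 3, "certifications": 2,
     "performance_score": 4, "gender": "F"},
    {"years_experience": 5, "education_level": 3, "certifications": 2,
     "performance_score": 4, "gender": "M"},
]
model = fit_salary_model(rows, [95_000, 98_000])
print(audit_by_group(model, rows, "gender"))
```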

Path Forward

The problem of AI performing poorly for certain groups could be fixed if a more diverse set of people were involved in the technology’s development. And while tech companies say they are aware of the problem, they haven’t done much to fix it. Data collection and preparation should be done by a team with diverse experience, backgrounds, ethnicities, races, ages, and viewpoints. The view of someone from a less developed or developing country in Asia is going to differ from the view of someone from a Western country. An illustrative example was a robotic vacuum cleaner in South Korea that sucked up the hair of a woman sleeping on the floor. The non-diverse team involved in collecting the training data did not anticipate or consider the scenario of people sleeping on the floor, although it is very common in some cultures.

Another important type of diversity is intellectual diversity. This includes academic discipline, risk tolerance, political perspective, collaboration style -- any of the individual characteristics that make us unique. This type of diversity is known to enhance creativity and productivity, and it also improves the likelihood of detecting and correcting bias. Intellectual diversity can even exist within a single person who has developed a multidisciplinary background and experience dealing with a broad range of people. The value of such people will increase as AI continues to affect a wider range of ventures.

The objective should not be to simply diversify the privileged class of technical workers engaged in developing AI systems in the hope that this alone will result in greater equity. Nor should it be to develop bespoke technical approaches to systemic problems of bias and error, hoping that further such problems won’t come along. Instead, by broadening our frame of reference and integrating both social and technical approaches, we can begin to chart a better path forward.

We must make conscious decisions to elevate the POC and women around us to roles where they are part of the decision-making process. We have to listen when they tell us about the ways our privilege is clouding our judgment, and advocate for and work with them to fix the issues. We need to make sure our hiring strategies are deliberately diverse because right now they’re passively biased, and that isn’t helping anyone.

Upskill the workforce with the knowledge of how AI works. This will allow employees to spot any instance of bias due to the lack of AI diversity and promptly address it. Without internal capabilities, issues like this could continue to be overlooked, perpetuating the adverse effects of low diversity in the AI sector. 


Eugina Jordan

CMO to Watch 2024 I Speaker | 3x award-winning Author UNLIMITED I 12 patents I AI Trailblazer Award Winner I Gen AI for Business

7 months ago

It was written 4 (!!!) years ago, and we are only now starting to look at how these issues can be addressed.

Brian Walsh

Marketing Leader, IoT/Climate Tech | Achievements in marketing, strategy, content creation, sales enablement | Passionate advocate for IoT’s potential to transform environmental, societal, and economic sustainability

4 years ago

Good article Eugina. IBM also apparently agrees that AI/Facial Rec isn't ready for primetime, especially due to the unintended consequence of its abuse by law enforcement. They just announced that they are exiting facial recognition software biz because police aren't using it responsibly. https://telecoms.com/504852/ibm-exiting-facial-recognition-as-police-cant-use-it-responsibly/

Jacey Godfrey

Marketing Expert | Event Strategist | Field & Brand Marketer | Team Developer

4 years ago

Excellent article and insight, Eugina Jordan. Rudina Seseri I thought you might like this piece by our VP of Marketing, Eugina Jordan.
