Bias in Technology: 7 Ways to Address it in Your Organization
Christie Lindor
Founder, Tessi Consulting. Executive Coach. Author. TEDx and Keynote Speaker.
I recently conducted a micro experiment that showcased cognitive bias in technology — the thing is, the experiment happened by accident. As a woman of color having my first child, I did a number of online searches throughout my pregnancy journey. I made a couple of observations when I typed the keywords “pregnant woman” into Google Images (a term that, according to wordtracker.com, is searched at least 55,800 times globally each month). Most of the pictures that came up carried typical website headers such as “Learn how to recognize labor pains,” “Pregnant women who snore,” or “Job protection for mothers and pregnant women.” Basic pregnancy journey stuff. What was interesting is that 99% of the images displayed with these types of headlines were of Caucasian pregnant women. So I then typed “pregnant black woman,” a term searched globally 2,563 times a month, into Google Images to see if there was any additional information. After all, I didn’t see any images in the original search that looked like me. All of the pictures of Black pregnant women that came up carried headers such as “Childbirth is killing Black women,” “Black women are at high risk of….,” and “Black women who die during pregnancy.”
This accidental discovery is one example of an ongoing trend in IT. Biases in the workplace continue to surface in the daily lives of consumers unbeknownst to the technologist or the company responsible.
Recently, an MIT researcher made headlines for calling out major corporations for commercially producing facial recognition software that wasn’t technically capable of recognizing her face due to her darker skin hue. She had to wear a white mask for the software to recognize her face, which she discovered accidentally while conducting academic research.
In the past, Google had to apologize for its image recognition software mislabeling people with darker skin as gorillas…also accidentally discovered upon usage by consumers.
Or take 2017, when photo-filter company FaceApp came under fire for its “hot” filter, which was essentially lightening the skin of darker-skinned users. In the immediate aftermath of this accidental discovery, FaceApp kept the filter but renamed it from “hot” to “spark.” FaceApp was in the news again last month for Asian, Black, Caucasian, and Indian filters that changed the apparent ethnicity of the app user. The company was surprised by the backlash from its user community.
Cognitive bias, particularly in technology, runs deeper than surface-level interactions with colleagues in the workplace. The end result of bias impacts more than just an individual employee’s day-to-day experience in an organization. It is a systemic mindset that, if left unchecked, can permeate the delivery of everyday products introduced to the broader marketplace. Workplace bias also perpetuates negative stereotypes, biased worldviews, and perspectives that can have a lasting impact on the technological solutions of the future and on broader society.
As Joy Buolamwini points out in her TED Talk on bias in technology, “algorithmic bias, like human bias, results in unfairness. However, algorithms, like viruses, can spread bias on a massive scale at a rapid pace. Algorithmic bias can also lead to exclusionary experiences and discriminatory practices.”
Wired magazine further states, “the oft-touted solution is to ensure that humans train the systems with unbiased data, meaning that humans need to avoid bias themselves. But that would mean tech companies are training their engineers and data scientists on understanding cognitive bias, as well as how to ‘combat’ it.”
What does this all mean for IT leaders? If your technology platforms, systems, and application programs have biases built in, it doesn’t matter how much you and your organization preach embracing diversity and inclusion as a core value. Biases will eventually show up as surprising curveballs and blind spots you never expected or intended, which could impact your company’s reputation — and potentially the bottom line.
In a recent article, Boston Consulting Group highlighted the risk of biased data in AI. They shared, “if not carefully designed, AI applications can perpetuate and exacerbate bias. If an AI application is trained on data that is biased, the algorithms it develops will likely be biased, too.”
Here are 7 ways leaders can tackle bias head-on within their IT organizations:
Ensure your leadership team is aligned on creating inclusive technology solutions. It’s no surprise that leaders set the tone in an organization. Make sure there is alignment among the leadership team on the definition of cognitive biases that could impact technology development. Help leaders obtain a lens into how their individual micro habits can either reinforce or disrupt biased thinking across the organization.
Help technologists understand the impacts and nuances of cognitive biases and how they can show up in their work. Share varying definitions of bias with your technology teams alongside real-world examples. Doing so will not only bring a heightened level of awareness but also increase the team’s ability to innovate and become resourceful during the ideation process.
Seek to diversify and curate a richer swath of datasets, particularly for technology products and solutions designed for human consumption. According to MIT Technology Review, bias can creep in long before the data is collected, as well as at many other stages of the deep-learning process, making it challenging to determine its genesis. Ensuring that the full range of human diversity is represented in datasets will go a long way toward reducing bias.
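One way teams could start on this recommendation is with a simple representation audit of their training data. The sketch below is illustrative only: the `skin_tone` attribute and the threshold values are assumptions for the example, not standards, and real audits would use domain-appropriate attributes and thresholds.

```python
# Minimal sketch of a dataset representation audit. The attribute name
# "skin_tone" and the min_share thresholds are hypothetical examples.
from collections import Counter

def audit_representation(records, attribute, min_share=0.10):
    """Return attribute values whose share of the dataset is below min_share."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {value: n / total for value, n in counts.items()
            if n / total < min_share}

# Toy example: image metadata heavily skewed toward one group.
data = [{"skin_tone": "light"}] * 90 + [{"skin_tone": "dark"}] * 10
flagged = audit_representation(data, "skin_tone", min_share=0.20)
print(flagged)  # {'dark': 0.1}
```

A check like this can run every time the dataset is refreshed, surfacing under-represented groups before a model is ever trained on the data.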
Embed bias health checks during coding and application development sprints. Regardless of the software or application development lifecycle your company chooses to use, incorporate bias health checks. Cross-check use cases for inclusive scenarios. Develop test scripts that represent nuanced usage of a solution. Seek out a diverse group of product beta testers. Embed bias checks as part of the sign-off or stage-gate process.
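As one hedged illustration of what such a health check might look like in practice, a team could gate a sprint on a parity test that compares model accuracy across demographic groups. The group labels and the tolerance below are assumptions for the sketch, not prescribed values.

```python
# Illustrative "bias health check" that could run as a sprint stage gate:
# fail if the accuracy gap between the best- and worst-served groups
# exceeds a tolerance. Groups and max_gap are hypothetical choices.

def accuracy_by_group(predictions, labels, groups):
    """Per-group accuracy over parallel lists of predictions, labels, groups."""
    totals, correct = {}, {}
    for p, y, g in zip(predictions, labels, groups):
        totals[g] = totals.get(g, 0) + 1
        correct[g] = correct.get(g, 0) + (p == y)
    return {g: correct[g] / totals[g] for g in totals}

def bias_health_check(predictions, labels, groups, max_gap=0.05):
    """Pass only if best and worst group accuracies are within max_gap."""
    acc = accuracy_by_group(predictions, labels, groups)
    return max(acc.values()) - min(acc.values()) <= max_gap, acc

# Toy example: the model is far less accurate for group "B".
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
labels = [1, 1, 0, 1, 1, 1, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
passed, acc = bias_health_check(preds, labels, groups)
print(passed, acc)  # False {'A': 1.0, 'B': 0.0}
```

Wired into continuous integration, a check like this turns the sign-off step from a judgment call into a repeatable, auditable gate.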
Partner with IT product managers to ensure inclusive practices are incorporated into the product management lifecycle. It’s equally important that wraparound processes, from product planning and design to product release cycles, also take bias checks into account. Executing an integrated approach to reducing bias before products are introduced to beta users or the marketplace can make a big difference in moving the needle and identifying blind spots that might have been missed in the software development lifecycle.
Incorporate ongoing bias workshops as part of your technologists’ learning journey. Developing or rolling out “mandatory, one and done” diversity or inclusion workshops rarely works. Creating a robust inclusion program linked to IT strategy, with ongoing messaging to drive awareness throughout the employee lifecycle, is critical to creating habits and cultural norms.
Seek to hire, develop, and retain diverse technologists within your organization. Proactively recruit and retain diverse technologists and ensure they have a voice at the table during key junctures in your development lifecycle and code curation process.
Conclusion
Recognizing the biases embedded in technology solutions is a critical enabler in the future of work. Managing bias not only helps create inclusive work cultures; it can fundamentally become a superpower that differentiates your technology solutions in the marketplace.
While it is virtually impossible to root out all human biases in technology solutions, tackling them head-on is a major step forward. IT leaders can proactively address bias in incremental, agile steps. The first step is to help their teams build micro habits that, over time, become cultural norms within technology processes. Doing so can make a big difference in reducing bias, providing more equitable, inclusive products to clients while positively impacting broader society.
This article originally appeared on the Slalom Technology Blog.
About Christie Lindor
Christie is a seasoned management consultant, trainer, podcaster, and author with 19 years of experience advising global and mid-sized clients on transforming their businesses in times of disruptive change. She is touted as a rising authoritative figure in redefining the modern-day workplace. She has worked for top consulting firms such as IBM, Deloitte Consulting, and EY.
She is currently a Solution Principal at Slalom Consulting.
Christie is also a TEDx speaker who has been featured in TIME, Forbes, Fast Company, Inc. Magazine, Bustle, GirlBoss, and dozens more on a wide variety of future-of-work topics. She is the author of the award-winning, Amazon-bestselling book The MECE Muse: 100+ selected practices, unwritten rules, and habits of great consultants.