The Equality Machine
Simha Chandra Rama Venkata J
Risk Management / Business Analytics | Postgraduate Degree, Investment Banking & Data Analytics
The emergence of intelligent machines triggers a need to uphold values of equity and fairness.
Over the past decade, discourse about technological change has been largely polarized. Silicon Valley “insiders” — predominantly white men — have viewed “disruption” as their key objective, lauding the potential of new technologies to drive economic growth and create efficiencies. Meanwhile, “outsiders” — such as people of color, women, and those from rural areas and the developing world — have issued warning cries about potential new forms of exclusion and inequities. Thus, two dichotomous visions of the future have emerged: Outsiders worry that new technology will create a dystopian “robopocalypse,” while insiders dream of an innovation utopia.
“Digitization and automation are here to stay, and engaging with all they entail by proposing positive uses, progressive improvements, creative solutions, and systemic safeguards is the way forward.”
Humanity must take a middle path between naive optimism and fearful pessimism: cultivating awareness of the ways new technologies can perpetuate inequities while simultaneously taking action to improve the fairness of technological systems. Some have tried to improve machine fairness by removing identity markers, such as racial data, from data sets; but machine learning algorithms can often infer identity markers from other data, such as user communication patterns on social media. Simply restricting inputs also fails to address the root causes of inequities, the biases themselves. It’s better to control outputs: direct algorithms with greater specificity about what a fair and equitable outcome actually looks like.
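To make the input-versus-output distinction concrete, here is a minimal Python sketch of an output audit. Everything in it is hypothetical: "model" stands in for any trained scoring function, and "group" for a self-reported identity marker. Instead of deleting identity columns from the data the model consumes, the audit measures what the model produces for each group:

from collections import defaultdict

def selection_rates(candidates, model, threshold=0.5):
    # Share of each group that clears the model's bar. "group" is an
    # illustrative field name for a self-reported identity marker.
    totals, selected = defaultdict(int), defaultdict(int)
    for person in candidates:
        totals[person["group"]] += 1
        if model(person) >= threshold:
            selected[person["group"]] += 1
    return {g: selected[g] / totals[g] for g in totals}

def parity_gap(rates):
    # Demographic-parity gap: 0.0 means every group is selected at the
    # same rate; a large gap flags an outcome worth investigating.
    return max(rates.values()) - min(rates.values())

Note the design choice this sketch encodes: the audit never requires identity markers to be hidden from the model, only to be measured in its results.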
Careful auditing can help prevent algorithmic models from reflecting human biases.
Most companies want to hire the best people. However, recruitment practices often yield inadvertently discriminatory results. In 2017, researchers from the United States and Europe created automated assessments of the world’s 500 biggest companies (as of 2016) to see which were engaging in discriminatory hiring and failing to achieve organizational diversity. Using deep-learning algorithms that predicted identity markers, the researchers determined that the executives and board members of top global companies were consistently failing to hire women and people of color to positions of power. Every company except H&M had a lower percentage of women in executive leadership roles than the percentage of women in the general population.
Algorithmic hiring models reflect human biases when the data they’re using contains biases. For example, if you use AI to choose executive candidates with the best recommendation letters, the model may disproportionately prioritize male candidates, passing over skilled and qualified women: Research shows that leaders downplay women’s leadership potential when writing recommendation letters, highlighting qualities such as empathy instead. To spot and correct data biases, organizations must be proactive about auditing the outputs of their AI models. Human resources can run hypothetical job candidates through their AI models to test for biases and can provide the models with more inclusive data sets.
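One concrete shape such a test can take is a counterfactual audit, sketched below as an illustration rather than any vendor's actual tooling. The idea is to score each hypothetical candidate twice, identical except for a flipped identity marker, and examine the average score gap; the screen and flip functions are placeholders:

def counterfactual_gap(candidates, screen, flip):
    # "screen" is any resume-scoring model and "flip" returns an identical
    # candidate with only the identity marker changed; both are assumed.
    gaps = [screen(person) - screen(flip(person)) for person in candidates]
    return sum(gaps) / len(gaps)

def flip_gender(person):
    # Example flip: swap a gender field while leaving credentials intact.
    twin = dict(person)
    twin["gender"] = "F" if person["gender"] == "M" else "M"
    return twin

Because models can infer gender from proxies such as the phrasing of recommendation letters, a thorough audit would also rewrite gendered language in the documents themselves, not just flip the explicit field.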
“Algorithms spit out biases when the data presented to them is flawed or unequal, either because it is partial and incomplete or because it reflects past or current decisions, behavior patterns, or social realities that are unequal.”
AI decision-making can offer advantages: You can dissect — and correct — machine bias more easily than you can flawed human decision-making. For example, if a company were accused of hiring bias, a plaintiff’s attorney with access to its data set and algorithm could assess whether the claim was true. A company can also benefit from creating predictive algorithmic models that help it screen a much larger pool of applicants for more nuanced qualities — looking, for example, at their likelihood of becoming high performers and staying with the company long-term.
Greater data transparency can help you know the value of your labor.
Technology can be a powerful tool that helps stakeholders, ranging from workers to governments, work toward a future of financial equity. When individuals and organizations can access vast amounts of data, they can better identify disparities — such as gender pay gaps — and correct biases that would otherwise place certain groups of people at an unfair disadvantage. Lending practices have a long history of bias: The credit industry has historically failed to give women, LGBTQ+ individuals, and people of color equitable access to lines of credit, insurance, and loans. However, research conducted in 2019 at the University of California, Berkeley found that algorithms created with the intention of reducing bias in the fintech industry were 40% less discriminatory than humans in determining loan pricing.
“If you don’t know your worth, then you don’t know that you’re undervalued.”
AI and the societal shift toward greater data transparency are empowering workers with a better understanding of the market value of their labor. In fact, some governments, such as that of Massachusetts, have passed legislation that bans employers from asking prospective employees to disclose their past salaries. Without such laws and data transparency, a woman who discloses that she was paid less than her male colleagues, for example, might receive a lower salary offer.
While some argue that women should simply “lean in” and demand more money, this isn’t always effective: Research shows that companies are more likely to penalize women for initiating salary negotiations, perceiving women who advocate for themselves as too “demanding” while praising men for their assertiveness. But new digital resources, such as the worker survey platform Payscale, are bringing greater transparency to the salary negotiation process. AI’s predictive power is also giving rise to new tools that can help forecast the outcomes of negotiations.
Feminizing AI assistants and chatbots can normalize existing inequities.
In 2014, Hal Varian, the chief economist at Google, envisioned a future in which everyone — not just the uber-wealthy — could afford servants, predicting that machine helpers, or “cyberservants,” would become fixtures in everyday life. Today, smart assistants provide a wide range of services, from ordering users’ food to telling jokes — and their capacities are growing every day. The tech industry has a decades-long history of giving its machine assistants female voices in an effort to make these smart products sound “helpful, inoffensive,” and “eager to please.” All the leading smart assistants on the market, such as Alexa and Siri, sounded feminine when they were first launched. This preference for depicting subservient machines as female deserves scrutiny.
“Queering voices — challenging the very assumptions of gendering, and disrupting narratives and traditional binary understandings of sex and gender — is important.”
The preference for female machine assistants is pervasive across industries: The top 10 health-care companies and seven of the world’s 10 biggest airlines consistently demonstrate it. For example, UnitedHealth Group’s chatbot, Missy, appears as a “smiling 3D virtual female assistant” who helps users navigate the company’s website. When companies consistently feminize chatbots and voice assistants, they reinforce gender as a binary construct and promote outmoded views of the roles women occupy in society: They’re the helpers, not the leaders.
Researchers are using new technologies to detect patterns that reveal representation gaps.
Researchers now have the capacity to analyze massive amounts of visual, textual, and audio information to identify systemic inequities. For example, a recent World Bank study used natural language processing (NLP) methods to examine transcripts of village meetings in Tamil Nadu, India, screening for potential gender differences in influence. The researchers found that women had considerably less influence: They were less likely to set meeting agendas and speak during gatherings, and state officials were less likely to respond to women’s requests. NLP enables researchers to identify, and take steps toward correcting, patterns that indicate unequal power dynamics and opportunities across multiple contexts.
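A deliberately toy sketch can illustrate the general approach, though the study's actual NLP pipeline was far richer. Assuming a transcript annotated as (speaker gender, utterance) pairs, a layout that is this sketch's own simplification, one can tabulate speaking turns and words per group:

def influence_stats(turns):
    # turns: (speaker_gender, utterance) pairs from one meeting transcript;
    # the annotation format is an assumption made for this sketch.
    stats = {}
    for gender, utterance in turns:
        entry = stats.setdefault(gender, {"turns": 0, "words": 0})
        entry["turns"] += 1
        entry["words"] += len(utterance.split())
    return stats

meeting = [("F", "The village well needs repair."),
           ("M", "I propose we settle the road budget first."),
           ("M", "Agreed, the road is the priority.")]
print(influence_stats(meeting))  # shares of turns and words by gender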
“What if we considered challenges as opportunities to do better — opportunities not only to address technology failures but to use technology to tackle societal failures?”
Using AI, researchers can now assess whether people with different identity markers are getting equitable representation in various media forms. University of Chicago researchers used AI trained to identify and predict people’s race, gender, and age to analyze more than 1,000 children’s books. They also used text-to-data methods to scan the books for words related to gender, color, and nationality. The researchers found that “mainstream” market children’s books — as opposed to books specifically created to promote diversity — tended to feature far more white male characters than female characters or people of color. Machine learning and AI analytics can help detect these sorts of representation gaps and biases in a range of media industries and inspire people to create more empowering and inclusive narratives.
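The text-to-data step is, at bottom, a matter of counting mentions against curated word lists. The sketch below uses tiny placeholder lists, not the researchers' actual lexicons, to show the mechanic:

import re

MALE_TERMS = {"he", "him", "his", "boy", "man", "father", "king"}
FEMALE_TERMS = {"she", "her", "hers", "girl", "woman", "mother", "queen"}

def gender_mentions(text):
    # Tokenize a book's text and count matches against each word list;
    # the study paired such text methods with image analysis of pictures.
    tokens = re.findall(r"[a-z']+", text.lower())
    return {"male": sum(t in MALE_TERMS for t in tokens),
            "female": sum(t in FEMALE_TERMS for t in tokens)}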
Technology can help protect people from bodily violence, abuse, and disease.
When NGOs, tech companies, academic researchers, and law enforcement share data, they can use tech as a force for good, protecting people from disease and harm. For example, IBM and the Stop the Traffik initiative are helping the UN meet its Sustainable Development Goal of ending human trafficking and modern slavery: The two are developing a cloud-hosted data hub that helps organizations identify and anticipate the risk of these harmful practices in supply chains.
AI and the growing abundance of health data can also help stakeholders across industries accelerate drug discovery and collaborate to prevent the global spread of viruses. It’s vital to democratize the use of AI in medical research contexts; otherwise, AI’s potential may only benefit pharmaceutical companies’ bottom lines and encourage them to focus only on diseases that affect wealthy countries and individuals. Individuals like Dr. Regina Barzilay, AI lead of the Jameel Clinic, MIT’s Center for AI and Healthcare, are leading the way in creating open-source solutions to top medical issues — like Mirai, a technology that can predict breast cancer risk up to five years before a patient would typically be diagnosed. Such efforts can help ensure improved health outcomes for everyone — not just the rich.
Algorithms and embodied robots are transforming the ways humans experience connection.
Algorithmic biases can shape how humans connect and form social bonds. For example, dating apps often filter out profiles of users whose identity markers, such as race, don’t align with another user’s declared or perceived preferences. While you may argue that individuals have a right to their dating preferences, critics point out that filtering on dating sites can exacerbate existing class, racial, and social divides. The growing prevalence of robots with the capacity to “have sex” is also transforming the ways in which humans approach and define intimacy and emotional connection. Many of these robots reflect misogynistic fantasies about women, such as a big-busted model known as Jia Jia, who calls her male creators (who studied at China’s University of Science and Technology) “lord.”
“We are not far away from creating emotional bonds between ourselves and the embodied robots in human form that will integrate into our everyday lives.”
Some robot makers argue that framing their creations solely as AI-empowered sex dolls is an oversimplification. Abyss Creations’ “domestic companion” Harmony attentively greets human users when they get home from work, cracks jokes, quotes Shakespeare, and remembers users’ stories. Still, critics of robots with sexual capacities, such as Canadian attorney Sinziana Gutiu, worry that the prevalence of submissive robots that look like women will inspire violence against women in real life. The evolution of sex tech calls for greater awareness of the harmful stereotypes and misogynistic perceptions that shape how robots are embodied.
Roboticists have the power to disrupt stereotypes by creating robots that challenge assumptions.
Embodied robots can support humans in a wide range of functions, including care labor, reception work, and space exploration. The Japanese social robot PARO helps alleviate nursing home patients’ depression and supports them in their interactions with other humans. Embodied robots are also entering the educational sphere: They can support teachers by offering more one-on-one guidance to special education students, helping children finish their homework, and more. However, some critics worry about the use of robots in connection with vulnerable groups, such as children. They have expressed concerns about privacy risks, consent, and how the data these machines gather might be misused.
“We can create [robots] to be anything. Why not create them to be more inclusive and to challenge antiquated, stereotypical notions of identity and roles?”
Robots have the potential to surprise those they interact with by disrupting expectations. For example, NASA uses a feminine-looking robot, Valkyrie, to support space exploration — an area typically dominated by men — while a masculine-looking robot, Tank, works as a receptionist, or “roboceptionist.” In contrast to the mental picture many have of a receptionist, Tank is programmed to act as a “tough guy” veteran who tells visitors in a deep voice about his time working as a Navy SEAL. Robots like Valkyrie and Tank demonstrate the choice roboticists face when designing robots that will support humanity in the future: You can create robots that cater to existing biases or inspire imaginative new possibilities.