Normalcy with AI & Diversity
Shashwati P
Award-Winning Diversity & Inclusion Champion | LinkedIn Top Voice | Helen Keller Awardee | D&I Strategist, Obsessed about creating Impact | LinkedIn Creator Accelerator CAP | Making Inclusive & Diverse Spaces a Reality
In designing the world using Artificial Intelligence (AI), the systems we build reflect a standard view of the world. A system processes new inputs based on a given (coded) model. This raises a question: does the coded model have inputs for attributes that are not the norm? If yes, are they keyed in as normal or as aberrant? When a new input arrives, does it resonate with the model, or is it a deviation? Is it normal or abnormal? This logic forms the core of how most AI systems function.
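To make this concrete, here is a minimal, hypothetical sketch of that logic: a classifier whose "coded model" only recognizes the attribute values defined as the norm, so every other input is flagged as a deviation. The attribute names and values below are invented purely for illustration.

```python
# A toy "coded model": only the attribute values listed here count as
# the norm. These values are invented for illustration.
NORM_MODEL = {
    "gender": {"male", "female"},   # a gender binary coded in as the norm
    "mobility": {"walks"},          # an able-bodied norm coded in
}

def classify(person: dict) -> str:
    """Label an input 'normal' only if every attribute matches the coded
    model; anything else is flagged as a deviation. Note that the model,
    not the person, decides what counts as normal."""
    for attribute, accepted_values in NORM_MODEL.items():
        if person.get(attribute) not in accepted_values:
            return "deviation"
    return "normal"

print(classify({"gender": "female", "mobility": "walks"}))      # normal
print(classify({"gender": "non-binary", "mobility": "walks"}))  # deviation
print(classify({"gender": "male", "mobility": "wheelchair"}))   # deviation
```

Notice where the "problem" lives in this sketch: not in any input, but in the model's hard-coded definition of the norm.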
Now, globally we have over 1 billion people living with a disability, yet most institutions and societies consider such individuals "deficient". "Normal" and "abnormal" are often used to distinguish between people with and without disabilities. Roughly 10% of the global population belongs to the LGBTQ+ community, yet in many cultures it is not considered "normal" to have a same-sex partner or to identify oneself the way they do. Even people who are straight, as they come to understand their own sexuality, deem some acts "abnormal".
Normalcy, the way it is reflected in our cultures, workplaces, and technology environments, gets translated into the AI systems we build. Those systems are then used to allocate resources and opportunities to people at large.
Here are some eye-catching, eyebrow-raising results:
- Facial recognition technology is fairly good at identifying white faces but notoriously bad at recognizing Black faces, which has led to deeply offensive consequences. In 2015, Google's image recognition software labelled African-Americans as "gorillas".
- Facial recognition technology has also caused problems for transgender persons, who are frequently misgendered or not recognized by systems trained on binary gender categories.
- In 2017, Google's Cloud Natural Language API returned negative sentiment ratings for the statements "I'm a homosexual" and "I'm queer" but a positive rating for "I am straight" (see the audit sketch after this list). Google recognised the issue and promised to work on it.
- It is also important to note that AI aimed at enabling the lives of people with disabilities implicitly promises to make them more like able-bodied people, thereby implying that non-disability is the norm. For example, an AI app helps people who are deaf take part in spoken conversations. The premise here is that deafness hinders communication. But many in the Deaf community identify not as having a disability, but as a linguistic minority.
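The sentiment finding above is easy to frame as an audit. Below is a hedged, self-contained sketch: the skewed lexicon is invented purely to make the example runnable and mimics the bias a real model absorbs from web text; in practice you would call an actual sentiment API instead of this stand-in.

```python
# A deliberately skewed toy lexicon, standing in for bias a real
# sentiment model absorbs from its training text. Values are invented.
TOY_LEXICON = {"straight": +0.3, "queer": -0.4, "homosexual": -0.5}

def sentiment_score(text: str) -> float:
    """Sum toy lexicon weights over the words in `text`. This is a
    stand-in for a real sentiment API returning roughly -1 to +1."""
    words = text.lower().replace("'", " ").split()
    return sum(TOY_LEXICON.get(w, 0.0) for w in words)

# The audit: score identity statements and compare against a baseline.
statements = ["I'm a homosexual", "I'm queer", "I am straight"]
baseline = sentiment_score("I am straight")
for s in statements:
    gap = sentiment_score(s) - baseline
    print(f"{s!r}: score={sentiment_score(s):+.2f}, gap={gap:+.2f}")
```

Statements that differ only in the identity term should score alike; a systematic gap against the baseline is evidence of a norm coded into the model.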
There are many studies that focus on the construction of normalcy itself. What is normal, and whom does the normal exclude? For example, to understand a body with disability, one must return to the concept of the norm, the normal body. The 'problem' is not the person with disabilities; the problem is the way normalcy is constructed to create the 'problem' of the disabled person. If none of us had legs that walked, using a wheelchair would be the norm.
Similarly, the norm of heteronormativity and the gender binary of male and female are socially constructed standards. In one age it was normal to have temple carvings like those of the famous Khajuraho and of many smaller, still-functioning temples and ruins; today it is prohibited to put such paintings or flex banners on our walls.
It is important that we check the standards coded into these systems and foresee the consequences of what we add. What norms are produced and applied by the AI system? What are the consequences for those who fall outside the line? How are the systems creating categories and further marginalizing the already marginalized?
Some great points to ponder. Dear Reader, I would love to hear your thoughts.
This article is based on my reading of AI research papers.
#diversity #inclusion #ai #technology #norms #gender #disability #queer
Your research points aptly highlight the anomalies in the technology we build. It has been debated and acknowledged that technology inherits biases, and companies have learned this the hard way, only when their products were used in real-world environments. To their credit, they withdrew, cancelled, improved, or otherwise addressed the grave concerns promptly (well, in most cases). Now, many companies are consciously using a diverse talent pool to conceptualize, design, and build solutions so that they are wholesome, minus the biases. AI is like play dough: anyone can shape it any way they like, except that this dough has built-in intelligence, which grows every time someone uses it. With time, it will start profiling what you make, your choices, your design style, your colour patterns, and so on. Then it will start mapping you and your behaviour against people with similar or different profiles, reverse-feeding you, auto-suggesting to you, and so on. So skewed technology has to be corrected at every level: when it is (a) conceptualized, (b) designed, (c) built, and (d) used. Finally, leverage the use cases to weed out anomalies in future versions of the same or new products. Please continue your research on a year-on-year basis to draw comparisons on the progress or decline made.