THOUGHTS: A.I. + LGBTQ+ = ?!?

(This was a post I did a few weeks ago; I wanted to make it more accessible via LinkedIn Articles.)


Sooooo … let’s talk about “AI” and the LGBTQ+ Community.

PSEUDO-LEGAL DISCLAIMER: this isn't a deep academic research endeavor (I’m a nerd: I know what that entails!). But it is an anecdotal observation with a bit of research behind it.

I was thinking about artificial intelligence (AI) and how it relates to the Queer Community. I did a little digging to see if anyone has played in this space before. In 2021, Jamie Wareham wrote an article for Forbes.com, citing anthropologist Mary L. Gray, who said AI will “always fail LGBTQ people.” So, armed with my initial idea and having read Wareham’s article and a few others, I wondered if Wareham and Gray were still correct two years later. A colleague and I did a little experiment.

We went to four popular AI image generators (Image.art, Tome, Fotor, and GenCreft) and simply entered the terms “Lesbian,” “Gay,” “Bisexual,” “Transgender,” and “Queer” to see what the AI elves produced. I *kinda* anticipated the results we’d see: a lack of diversity and many stereotypical images influencing the output. Here are some overall interpretations I have of the collection of LGBTQ+ images:


  • Overall, the images produced were of flawless-looking humans
  • The AI tools produced similar (identical?) images for multiple “letters” in our Rainbow Family. For example, GenCreft appears to have interpreted “Gay” and “Queer” to create almost identical images
  • The vast majority of images appeared to include very few people of color
  • The images often reflected very binary stereotypes: very “femme” images for “Lesbian” and “Queer,” and very “masc” images for “Gay” and “Queer”


The most telling result was when the AI could not produce an image in the first place. Two tools came back with “The generated content violates our Community Guidelines” (Image.art) or “Image not Approved!” (GenCreft), which really had me scratching my head. What were we violating? What image was not approved, simply for typing in the term “Gay” or “Transgender”?

MY TAKE-AWAYS ... As with all technology, AI is only as good as the humans who program it. Creating equitable and fair AI systems starts with being aware of potential biases that can be baked in, and it's crucial for organizations creating AI to understand their datasets and how they impact their algorithms.

Organizations also need to analyze existing datasets to ensure that no group is unintentionally favored or left unrepresented.
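To make that concrete, here’s a minimal sketch (in Python, with hypothetical field names like `skin_tone`) of the kind of representation check a team could run over a dataset’s labels; a real audit would be far more involved:

```python
from collections import Counter

def representation_report(records, attribute, sparse_threshold=0.05):
    """Tally how often each value of a demographic attribute appears
    in a labeled dataset and flag groups below a (tunable) threshold."""
    counts = Counter(r.get(attribute, "unlabeled") for r in records)
    total = sum(counts.values())
    for group, n in counts.most_common():
        share = n / total
        flag = "  <-- sparse: possible under-representation" if share < sparse_threshold else ""
        print(f"{group:>12}: {n:5d} ({share:6.1%}){flag}")

# Hypothetical rows; a real audit would load the dataset's actual metadata.
sample = [
    {"label": "Lesbian", "skin_tone": "light"},
    {"label": "Gay", "skin_tone": "light"},
    {"label": "Transgender", "skin_tone": "dark"},
]
representation_report(sample, "skin_tone")
```

The 5% threshold here is arbitrary; the point is simply that representation gaps in training data are measurable long before a model ever ships.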

Plus, it's important for AI organizations to build diverse teams to create and manage the AI, including folks from the LGBTQ+ community. This will help ensure an inclusive approach to building algorithms. So:

  1. Attempt to thwart biases of both humans and datasets upfront;
  2. Be sure your team of AI experts is diverse; and
  3. Monitor outcomes to look for biases and adjust 1 and 2 as needed (see the sketch after this list).
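On point 3, monitoring can start as simply as re-running the same prompts on a schedule and tallying refusals, like the ones we hit above. A minimal sketch, where `generate_image` is a hypothetical stand-in for whatever generator API your team actually uses:

```python
PROMPTS = ["Lesbian", "Gay", "Bisexual", "Transgender", "Queer"]

def generate_image(prompt):
    """Hypothetical placeholder: swap in a real API call here.
    Return the image on success, or None if the service refuses."""
    return None  # placeholder behavior: every prompt "refused"

def audit_refusals(prompts):
    """Re-run each prompt and record which ones the generator refuses,
    so refusal patterns (e.g., only identity terms) become visible."""
    refusals = {p: generate_image(p) is None for p in prompts}
    refused = [p for p, was_refused in refusals.items() if was_refused]
    print(f"Refused {len(refused)}/{len(prompts)} prompts: {refused}")
    return refusals

audit_refusals(PROMPTS)
```

Run something like this regularly, alongside neutral control prompts, and a spike in refusals for identity terms becomes a trackable metric instead of an anecdote.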


WHAT DO YOU SEE? Use the chat below to chime in on how these AI tools interpreted the “LGBTQ” prompts.



[Five images: the AI tools’ outputs for the “Lesbian,” “Gay,” “Bisexual,” “Transgender,” and “Queer” prompts; no alt text was provided.]



REFERENCE: Jamie Wareham, “Why Artificial Intelligence Will Always Fail LGBTQ People,” Forbes.com, March 21, 2021. https://www.forbes.com/sites/jamiewareham/2021/03/21/why-artificial-intelligence-will-always-fail-lgbtq-people/?sh=356e16d7301e




