Building a Better Online World: An AI Approach to Flag Sexist Content on Social Media

Today, on International Women’s Day, the fight for gender equality extends beyond physical spaces and into the digital world. Social media has become an integral part of our lives, fostering connection and amplifying voices. However, this online space can also be a breeding ground for harassment and discrimination, disproportionately impacting women and girls. Sexist narratives and online violence can silence women, pushing them out of online communities.

A recent AI for Good Webinar, “Unveiling Sexist Narratives: AI Approach to Flag Content on Social Media,” explored how Artificial Intelligence (AI) can be leveraged to tackle this challenge. The session, co-organized by ITU, UN Women and UNICC, shed light on a project that developed an AI model to identify sexist text content on social media posts across Spanish-speaking countries in Latin America.

Setting the stage for change

The webinar, moderated by Sylvia Poll, Head of the Digital Society Division at ITU, highlighted the concerning rise of online violence against women. She emphasized the importance of leaving no one behind in the digital age and stressed the need for collaborative efforts from governments, the private sector, academia, and civil society to ensure AI is used ethically and responsibly.

“We cannot try to solve this issue of closing the gender digital divide alone and we need to know what is happening on the ground,” Poll remarked, underscoring the need for a multi-stakeholder approach.

Building an AI solution for a complex problem

Anusha Dandapani, Chief Data & Analytics Officer at the United Nations International Computing Centre (UNICC), delved into the specifics of the AI model. She explained that online sexism often goes unreported, making it difficult to quantify the issue. To address this, the project focused on building a model that could effectively detect sexist narratives in social media content.

“In order for us to understand the specific topic of how gender-based stereotypes or sexism is being sort of relevant in the content that we have to analyze, we need to have a clear and consistent criteria,” Anusha explained.

Natural Language Processing (NLP) and machine learning were central to the model’s development. The team employed pre-trained word embeddings, a technique that captures the semantic relationships between words, to train the model on a dataset of labeled content. This dataset included examples of both sexist and non-sexist language.
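As a rough illustration of this technique (a minimal sketch, not the project's actual code), the snippet below represents each post as the average of its pre-trained word vectors and fits a simple linear classifier on labeled examples. The embedding file, the example posts, and the labels are all placeholders.

```python
# Illustrative sketch only: average pre-trained word embeddings per post,
# then fit a linear classifier on labeled examples. The file path, posts,
# and labels are placeholders, not the project's actual data or code.
import numpy as np
from gensim.models import KeyedVectors
from sklearn.linear_model import LogisticRegression

# Pre-trained word vectors in word2vec text format (placeholder file name).
embeddings = KeyedVectors.load_word2vec_format("pretrained_spanish_vectors.vec")

def post_to_vector(text: str) -> np.ndarray:
    """Represent a post as the mean of its known word vectors."""
    words = [w for w in text.lower().split() if w in embeddings]
    if not words:
        return np.zeros(embeddings.vector_size)
    return np.mean([embeddings[w] for w in words], axis=0)

# Tiny labeled set: 1 = sexist, 0 = non-sexist (placeholder strings only).
posts = [
    "ejemplo de comentario sexista",   # stands in for a sexist post
    "ejemplo de comentario neutral",   # stands in for a non-sexist post
]
labels = [1, 0]

X = np.array([post_to_vector(p) for p in posts])
classifier = LogisticRegression().fit(X, labels)

# Score a new, unseen post.
new_post = "ejemplo de publicación a analizar"
print(classifier.predict([post_to_vector(new_post)]))
```

In practice the labeled dataset would be far larger and the classifier more sophisticated, but the pipeline shape is the same: embed the text, then learn a decision boundary between sexist and non-sexist examples.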

A crucial aspect of the project was ensuring the model’s cultural and linguistic sensitivity. Most AI models are developed for English, yet sexism can manifest differently in other languages. The project addressed this by custom-training the model on Spanish-specific data, enabling it to better recognize the nuances of sexist language in that context.
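One common way to achieve this kind of Spanish-specific grounding is to start from a Spanish pre-trained language model and fine-tune it on labeled posts. The sketch below assumes the Hugging Face transformers library and uses BETO, a Spanish BERT checkpoint, purely as an illustrative choice; the webinar did not specify which models the team used.

```python
# Sketch only: load a Spanish pre-trained model for binary sequence
# classification (sexist vs. non-sexist). The checkpoint name is an
# illustrative choice, not necessarily what the project used.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "dccuchile/bert-base-spanish-wwm-cased"  # BETO, a Spanish BERT
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tokenize a sample Spanish post and get raw scores from the (as yet
# untrained) classification head; fine-tuning on labeled posts comes next.
inputs = tokenizer("ejemplo de publicación en español", return_tensors="pt")
logits = model(**inputs).logits
print(logits)
```

Fine-tuning the classification head, and optionally the encoder, on labeled Spanish posts would then adapt the model to the nuances of sexist language described above.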

Read the full article here.

Watch the full Webinar here.


Want to join the Global Summit on 30-31 May 2024?

Register now for the AI for Good Global Summit! Participation is free for all, both in person and virtually.

For an exclusive, immersive AI for Good experience, purchase your Leaders Pass, now available at a 15% discount until 31 March 2024.


Our Summit Sponsors

Our Year-round Sponsors


Enjoyed this newsletter?

Subscribe here to our newsletter, AI for Good Insider, to receive the latest AI insights.


This edition of AI for Good Insider was curated by AI for Good Junior Communication and Social Media Officer, Celia Pizzuto.
