Our three new cases; our Q2 2023 Transparency Report; and a readout from our LinkedIn Live on AI and content moderation

Hello and welcome to Across the Board, the Oversight Board's monthly newsletter filled with updates about our cases, decisions, and stakeholder engagement activities.

This month: the Board overturned Meta's original decision to remove a video posted by a Cuban news platform on Instagram in which a woman protests against the Cuban government; we selected three new cases about cheap fakes, weapons in Sudan, and the 2023 Greek elections and published our Q2 2023 Transparency Report; our Board Members Suzanne Nossel, Ronaldo Lemos, and Nighat Dad discussed AI, altered media, and the future of online content moderation in a LinkedIn Live hosted by our Head of Public Engagement Rachel Wolbers; and finally, we share some upcoming events and interesting reads.

Thanks for reading,

The Oversight Board


NEW: Our Q2 2023 Transparency Report is out

Yesterday, we released our Q2 2023 Transparency Report. This includes an overview of the Board's activities and data revealing positive changes as a result of our policy advisory opinion on Meta's Cross-Check Program, to the benefit of Facebook and Instagram users.

In Q2 2023:

  • We published decisions on six cases. Three of these were standard decisions: Armenian Prisoners of War, Brazilian General’s Speech, and Cambodian Prime Minister, while a further three (Anti-Colonial Leader Amílcar Cabral, Metaphorical Statement Against the President of Peru, and Dehumanizing Speech Against a Woman) were summary decisions, which examine cases in which Meta reversed its original decision on a piece of content after the Board brought it to the company’s attention.
  • We also published a policy advisory opinion on the Removal of COVID-19 Misinformation, including 18 recommendations. In combination with the recommendations from other case decisions published in Q2 2023, we made 30 recommendations in total during this quarter.
  • We received 259 public comments ahead of deliberations on our three standard cases and one policy advisory opinion (summary decisions do not consider public comments).
  • In addition, the overall number of recommendations fully or partially implemented by Meta increased by 16 (rising from 44 in our Q1 report to 60 in our Q2 report), representing the largest-ever jump between quarterly reports.

Our analysis of Meta's implementation of Q2 2023 recommendations

Our report also notes that Meta has now established clear criteria around “allowlisting” or “whitelisting” practices (called technical corrections by Meta) in response to our cross-check policy advisory opinion. This has led to a 55% reduction in the enforcement exemption list. In our policy advisory opinion on cross-check, we also called on Meta not to operate the program with a backlog. Meta says it has now cleared all outstanding backlogs in its cross-check review queues dedicated to potentially violating content from entities on its lists.

As part of our commitment to transparency, we will continue to publish transparency reports on a quarterly basis. These will include data about how our recommendations are changing Meta’s approach to content moderation.


The Board overturns Meta's original decision in Call for Women’s Protest in Cuba case

On October 3, the Board overturned Meta’s original decision to remove a video posted by a Cuban news platform on Instagram in which a woman protests against the Cuban government.

The case relates to a video of a woman in Cuba urging other women to protest the Cuban government. At a certain point, she describes Cuban men as “rats” and “mares” carrying urinal pots, because they cannot be counted on to defend people being repressed by the government.

The Board finds that, when read as a whole, the post does not intend to dehumanize men based on their sex, trigger violence against them, or exclude them from conversations about the Cuban protests. The woman uses language such as “rats” and “mares” to imply cowardice in the specific context of the Cuban protests and to express her personal frustration at men’s behavior; regional experts and public comments characterize the post as a call to action to Cuban men. Taken out of context and given a literal reading, the comparison of men to animals culturally perceived as inferior could be seen as violating Meta’s rules.

However, the post, taken as a whole, is a qualified behavioral statement, which is allowed under Meta’s rules. The Board is concerned about how contextual information is factored into Meta’s decisions on content that does benefit from additional human review. In this case, even though the content underwent escalated review, Meta still failed to get it right. Meta should ensure that both its automated systems and content reviewers are able to factor contextual information into their decision-making process.


We selected three new cases and opened our call for public comments

Recently, we announced that the Board has selected three new cases for consideration. As we cannot hear every appeal, the Board prioritizes cases that have the potential to affect many users around the world, are of critical importance to public discourse, or raise important questions about Meta's policies. Our new cases:

Greek 2023 Elections Campaign cases

The two pieces of content, both on Facebook, were posted by different users in Greece around the time of the June 2023 General Election. Meta removed these posts for violating its Dangerous Organizations and Individuals policy. It told the Board that Golden Dawn, National Party – Greeks, and Ilias Kasidiaris, a leader of both far-right groups, were designated as Tier 1 Hate Organizations or as a Tier 1 Hate Figure under the policy. Read more here.

The call for public comments on these cases is still open! If you or your organization feel that you can contribute valuable perspectives to help us reach a decision, you can submit your comment via the link below. Deadline: 23:59 your local time on Tuesday, November 7.

Altered Video of President Biden case

The case relates to a moment caught on video during in-person voting for the 2022 midterm elections, in which U.S. President Joe Biden places an “I Voted” sticker on his adult granddaughter and kisses her on the cheek. The Board selected this case to assess whether Meta’s policies adequately cover altered videos that could mislead people into believing politicians have taken actions, outside of speech, that they have not. Read more here.

Weapons Post Linked to Sudan’s Conflict case

The case, which is linked to Sudan’s conflict, involves a Facebook post of an illustration of a bullet, with notes in Arabic identifying its different components. The Board selected this case to assess Meta’s policies on weapons-related content and the company’s enforcement practices in the context of conflicts. Read more here.

The public comment period for the Altered Video of President Biden and Weapons Post Linked to Sudan's Conflict cases has now ended. We appreciate everyone who provided us with their insights and valuable input. Over the next few weeks, Board Members will be deliberating these cases. Once they have reached their final decisions, we will post them on the Oversight Board website.


AI and content moderation: How should social media moderate altered media?

We recently hosted a LinkedIn Live session with our Board Members Suzanne Nossel, Ronaldo Lemos, and Nighat Dad to discuss the challenges of moderating manipulated media and AI-generated content on social media.

The rise of deepfakes and cheap fakes has made this task increasingly difficult, as manipulated media can have detrimental effects such as swaying public opinion, fueling hatred, and undermining democratic processes. With technological advancements, it has become easier for anyone to create and share manipulated media, which in turn negatively impacts trust in authentic sources and democratic institutions.

During the discussion, our Board Members highlighted the significant risks social media companies face in moderating manipulated media. Detecting and moderating deepfakes is a particular challenge that requires advanced techniques and human expertise. To address these challenges, social media platforms should establish clear guidelines, be transparent, collaborate with external organizations, and engage with users.

We also discussed the role of social media companies and government regulations in addressing altered media. Social media companies have a responsibility to ensure accurate and reliable content on their platforms, which includes detecting and flagging manipulated media. They can collaborate with reputable fact-checking organizations and utilize a combination of human moderation and AI technologies to combat the spread of manipulated content. Government regulations should provide a legal framework for addressing altered media, including guidelines, standards, and collaboration with international bodies. The specific approach may vary based on cultural norms, laws, and political systems, but the overall objective is to strike a balance between user freedom, data privacy, and human rights.

The session concluded with the announcement of a new case involving an altered video of US President Joe Biden on Facebook. The Board selected this case to examine issues related to manipulated media and Meta's policies on misinformation. You can read more about the case here.

We would like to express our gratitude to those who joined the session, and we encourage you to stay tuned for upcoming sessions on our account.


Upcoming events

Want to learn more about what we do and how we are pushing Meta to be more transparent? Join us for one of our upcoming events:

  • On November 2, our Board Member Paolo Carozza will discuss The Meta Oversight Board and the Self-Regulation of Tech Companies at the Oxford Institute for Ethics in AI. You can register to attend the event here.
  • On November 10 and 11, you will find us in Paris for the Paris Peace Forum, an annual global event that gathers world leaders, international organizations and civil society to foster dialogue and cooperation in finding innovative solutions to global challenges. Find out more here.


Interesting reads

Some interesting articles and blog posts on technology we have read this month:


Sign up here to receive updates from the Oversight Board when we announce new events, cases, and decisions.

