This week in social media – 27 May 2019
Dionne Lew
Strategic advisor, professional speaker, author, media commentator - strategic communications, social media
Your weekly summary of official social media news from Facebook, Twitter, LinkedIn, Instagram and Google.
If you’re interested in how the platforms are tackling the difficult issue of removing offensive content, then this is the update for you. The major platforms have all recently reported on initiatives or policies they’re using to create a safe environment online.
It’s particularly important for communications and social media managers to know where to go if such content finds its way onto their assets, so I’ve included links in the blog to where and how to report abuse on each platform.
You can listen to or read this post.
Facebook has released its third Community Standards Enforcement Report, adding two new areas of enquiry: first, how much appealed content is subsequently restored, and second, efforts to remove attempted illicit sales of regulated goods such as drugs and firearms.
The reports now include metrics across nine policies: adult nudity and sexual activity, bullying and harassment, child nudity and sexual exploitation of children, fake accounts, hate speech, regulated goods, spam, global terrorist propaganda, and violence and graphic content.
Significant efforts are going into proactively identifying content through AI before it is reported.
Facebook says that in six of these policy areas it proactively detected more than 95% of the content it removed before anyone reported it; for hate speech the figure was 65%, with 4 million hate speech posts taken down this quarter. Facebook is continuing to invest in technology to expand its ability to detect this kind of content across different languages and regions.
To improve transparency, Facebook has started sharing the minutes of the bi-weekly meeting where it decides on policy updates, and is publishing a change log on the Community Standards website so everyone can see exactly where updates are made.
Additionally, as part of its efforts to enable academic research, Facebook has awarded grants for 19 research proposals across the world to study content policies and how online content influences offline events.
In case you missed it, in the US the White House recently called on people to report unfair censorship by social media platforms directly. You can hear CEO Mark Zuckerberg respond to a question from the BBC’s Dave Lee on the latest press call at the 34-minute mark.
- Full story and downloads
- Press Call Transcript
- Press Call Audio
- Data Snapshot
- Hate Speech Proactive Detection
- Appeals Chart
- Regulated Goods Proactive Detection
Twitter issued a similar progress report in April, which showed:
· 38% of the abusive content Twitter takes action on is surfaced proactively for its teams to review, rather than relying on user reports.
· 100,000 accounts were suspended for creating new accounts after a suspension during January-March 2019, a 45% increase from the same time last year.
· Twitter says it is responding to appeal requests 60% faster with its new in-app appeal process.
· 3 times more abusive accounts were suspended within 24 hours after a report compared to the same time last year.
· 2.5 times more private information was removed thanks to a new, easier reporting process.
Earlier this month LinkedIn also reported on progress against content and profiles that violate its Terms of Service and Professional Community Policies.
LinkedIn has made it easy to report anything promoting terrorism or violence by clicking on the three dots at the top right of any post, comment or message.
LinkedIn is also using new technology to help detect and remove fake profiles.
Creators will be interested to learn that IGTV now supports landscape (horizontal) videos.
To help EU citizens find trusted information about the European Parliamentary elections, Google has introduced a set of new Search features in the European Union.
When users search for instructions on how to vote, they get the details right on the results page. This data is sourced directly from the European Parliament to ensure it can be trusted.