Facebook Update on the Content in the Feeds and Enforcement: What it is and What it Means

Roughly two weeks ago, Mark Zuckerberg published a Facebook update detailing the current state of how content in Facebook feeds is enforced and what’s changing in the near future. As you can imagine, this is super exciting for marketers who are eager to understand what happens to their content in the Facebook feed. The update was quite dense, with lots of information, so here’s a quick summary of the most important takeaways and what they might mean for marketers.

If you’re interested, the full Facebook Update on Content in the Feeds & Enforcement can be found here.

The overarching gist of the update is that while Facebook exists to connect people and empower them to have a voice, they recognise that they have a social responsibility to keep people safe and protect them from “polarization and extremism”. In a nutshell, Facebook are sorting out the bad stuff (bad stuff being “misuse, misinformation, and violence”).

The main points that come out of the update and what they mean for marketers are as follows:

1. Facebook create and manage the community standards

Basically, this means that Facebook have created their own set of guidelines as to what content gets to stay on their platform and what content doesn’t, and are trying to evolve these guidelines to better serve people. How are they doing this? By employing a team of 30,000 people across different fields of expertise (psychology, sociology, you name it) to monitor content and decide what counts as harmful in different cultural contexts, countries and languages.

For marketers, this simply means they need to be aware of these standards and adhere to them. You can find the community standards here and further published guidelines here.

2. Facebook proactively monitor content to enforce the community standards

To ensure their community standards and guidelines are enforced accurately and consistently, Facebook are further developing their artificial intelligence (AI) technology to identify harmful content before it is even published. This allows for stronger enforcement of the standards: content is screened and can be removed before people ever lay eyes on it.

As marketers and consumers on social media, this means we need to be more aware of the content we are posting, because if it’s bad or ‘borderline’, chances are it will never make it in front of people at all.

3. They’re giving power to the user

After guidelines are rounded out and enforcement processes are set, Facebook are giving power to the people to decide what they do and don’t want to see. They do this already, but this will relate more to the new guidelines, enforcement and the issue of borderline content. Borderline content is content that doesn’t violate the standards, but comes very close to doing so; hence the term borderline. Facebook are going to let you choose how much of this near-the-line content you want to see, and will adjust your feed accordingly.

4. They’re fixing that pesky algorithm

We all want a magic solution to beating social media algorithms, or hacks that get our content in front of people. Sadly, these attempts are usually fairly unsuccessful. The good news is, Facebook are addressing the algorithm biases that exist. What does that mean? Basically, Facebook are using their AI technology to reduce the amount of misinformation, clickbait and ‘fake news’ type content.

This should help ensure that you see great, quality content more often. As a marketer, it simply means you need to make your content less sensational, and Facebook will hopefully be kinder to you.

5. Facebook are educating you on the why and correcting their mistakes

A huge gap in the removal process is that people don’t get to learn why their content was removed from Facebook. The vague message received when something is removed simply isn’t enough to stop someone from posting the same, or similar, content again and again. With new transparency in the appeals process, people will get to learn why their content was removed, and errors in content removal can be rectified.

This matters a lot for marketers, because you’ll be able to better understand what content you should be producing, and it might also help ensure your content isn’t removed on incorrect grounds.

That’s it! While the Facebook Update on Content in the Feeds and Enforcement contains a lot more in-depth information than covered here, these are the five main points to take away. The most important insight? If your content is borderline, it’ll have a harder time under these new updates. So switch up your game, stat.

