Content Moderation in the Modern Age

In this week's newsletter, I dive into this topic as a professional in the marketing world, as a social media site owner, and as a parent who does advocacy in the autism community.

These past couple of weeks have been a hive of discussion about content moderation, community notes and fact-checkers, and a fair amount of politics that may just be muddying the process.

Bottom line: many folks are concerned about Facebook's switch from content moderation by third-party fact-checkers to community notes.

Now, I've been blogging and engaging with online communities for over 20 years. For the social anthropologists out there: much of what we see happen regularly in online communities has been happening in person for thousands of years.

So, grab your cup of coffee and let's take a short journey...

Years ago there was a website called Xanga.com. Xanga was the parent company under which Autisable, LLC first started.

Xanga was a social blogging community site where users could share, express themselves, and interact with each other. Many of the top users I connected with are still around, as many of us have stayed connected through other platforms. But the site itself will forever be on life support, as all future development stopped when the founders' parents died.

Now, one of the things I learned while being part of that site and working with the folks at Xanga was how quickly a community rallied behind finding the truth. The single most important lesson: over time, the community policed itself, as the users of the site eventually became self-regulating - and it was an incredible sight to see firsthand. There's something to be said for how groups of people develop and act together over time.

Now, was their content moderation set up like we have seen in recent years? Obviously not.

However, spam accounts were quickly deleted, along with spam comments and the comments and posts that clearly violated community guidelines. Not all of the spam accounts and comments, but the ones that severely impacted an active user's experience. (It was a system that didn't include bots and AI, after all.)

People who shared or commented on content that violated laws or the site's community guidelines were reported by users, and the team handled it. Nowadays, AI and bots scan everyone's posts and flag those that may violate community guidelines.

It was a bit old school and a bit of the Wild West back then, but it worked. At least it worked to the point where those who used the platform regularly knew which accounts were whose and what those accounts were about. Many of us could spot those accounts at a moment's notice and either report them or simply ignore them. Similar to the accounts today that repurpose other people's content just to build something that can be monetized... sad that people do that - it's an ongoing craft that tends to be used for nefarious reasons - but I digress.

Many viewed Xanga as an online journal for penning one's thoughts, often relegated to the emo teens and millennials of the time. But the community grew into people of all ages sharing stories, ideas, and concepts. Photographers, poets, story writers, and the like were all discussing a variety of thoughts and ideas.

In time they openly discussed issues and pointed out discrepancies in what folks were sharing. Sometimes this brought about arguments online; other times it helped folks discover more about themselves, their writing, or the subject at hand.

In short, it was a site that promoted ongoing long-form discussion.

Of course, this was well over a decade ago... well before Facebook became popular.

More recently, we've seen this same level of self-policing among content creators on TikTok, which has made them a highly engaged community.

No technology or platform is perfect, and Xanga didn't have all the bots and AI moderation that many platforms now have... but the community shouldered the brunt of it, and it was okay. Over time the community on that dying platform did their best to save it... but such is the history of social media platforms: when a new one comes along, the old ones must adapt or fade away.

But lessons should be learned in the process, and the lessons about moderating an online community may have been lost along the way.

Third-Party Fact Checkers or Community Notes

Recent news of Musk and Zuckerberg rolling out Community Notes in place of fact-checkers has been deemed controversial. But to me, it's a natural progression and one we should have expected. And honestly, I welcome it.

The challenge with any company moderating conversation is the issue of censorship. Social media has proven that people strive for one inherent truth: to be heard.

A platform offered for free under the promise that you can say what's on your mind, only for your voice to be silenced when you try to discuss a topic, doesn't make for a good experience, does it?

In our world of technology and communication, truth is often held in the hands that control it. And let's be honest here: we are all biased to begin with. The only way to find what may work for an individual or group is to keep the dialogue going.

Let's throw this into a real-world situation as it relates to topics often discussed within the Autism Community...

Ever had, or heard of, a post someone shared being immediately flagged by a platform and/or removed? Or know someone who got put into "Facebook jail"?

How about trying to share content from one site to another about someone's experience with a specific remedy, only to find that the source site is blacklisted from being shared on that platform?

Much of this is due to Facebook deploying bots (programs) to ensure that posts aren't going against the terms of service (like illegal activities) or community guidelines (like bullying and the like). Once these systems flag a piece of content, it gets processed: the user may get a warning, but often the content gets removed.

At the scale of content shared on any platform, it has become necessary to deploy programs to assist in monitoring a group, a page, or an account. Many people welcome this technology, as the aim is to help create a safer online community. Bots work well, for the most part, but like a lot of programs, they need regular upkeep. And that requires constant human involvement, cost, and development.
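
To make that a bit more concrete, here is a minimal sketch of what a simple automated "flag and queue for review" pass might look like. This is my own illustration, not Facebook's or any platform's actual system: the phrases, domains, and the Post structure below are made up for the example, and real platforms rely on trained classifiers and many more signals. The general flow, though - scan, flag, queue for a human - is the idea being described above.

```python
# Toy moderation pass: flag suspicious posts and queue them for human review.
# All rules and data here are hypothetical, invented purely for illustration.
from dataclasses import dataclass, field

BLOCKED_PHRASES = ["buy followers now", "click here to win"]   # hypothetical rule list
BLACKLISTED_DOMAINS = ["spam-example.com"]                     # hypothetical blocklist

@dataclass
class Post:
    author: str
    text: str
    links: list[str] = field(default_factory=list)

def flag_post(post: Post) -> list[str]:
    """Return the reasons a post was flagged; an empty list means it passed."""
    reasons = []
    lowered = post.text.lower()
    for phrase in BLOCKED_PHRASES:
        if phrase in lowered:
            reasons.append(f"matched blocked phrase: {phrase!r}")
    for link in post.links:
        if any(domain in link for domain in BLACKLISTED_DOMAINS):
            reasons.append(f"link points to blacklisted domain: {link}")
    return reasons

def review_queue(posts: list[Post]) -> list[tuple[Post, list[str]]]:
    """Collect flagged posts for a human moderator instead of auto-deleting them."""
    return [(post, reasons) for post in posts if (reasons := flag_post(post))]

if __name__ == "__main__":
    posts = [
        Post("parent_blogger", "Sharing what worked (and didn't) for our son this year."),
        Post("spam_account", "Buy followers now!!!", links=["http://spam-example.com/deal"]),
    ]
    for post, reasons in review_queue(posts):
        print(f"Queued {post.author}: {reasons}")
```

The one design choice worth noting in this sketch is that flagged content goes to a review queue rather than being deleted outright - which is roughly the balance the warning-and-appeal process described above is trying to strike.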

There are pros and cons to this, especially if you're a parent within the autism community trying to share and discuss issues like bullying, behavioral challenges, and which medications do or don't work. Often, posts and comments are quickly flagged and removed because the AI bots mark them as against community standards... when they're just points of discussion. Then the parent has to appeal that decision. The context of the discussion isn't always taken into consideration, and it's within that context that we can more fully understand why a piece of content was shared.

The time and cost associated with AI programs and the employees who manage content moderation are also immense. AI can handle the bulk, but when the AI gets it wrong, people need to update it, test it, deploy the update, and then verify that the updates keep working. Then you have content that could spread like wildfire, true or not... or content tied to potential legal issues depending on which country the user is based in... all of that gets taken into account in moderation.

Regardless of any potential bias in moderating content, the task itself is an incredible hurdle to overcome to begin with.

Should moderation be automated? Or provided by third-party fact-checkers? Maybe a bit of both? Let's dive in a bit further...

Why Community Notes May Be the Right Course of Action

We should remember that, as humans, we are incredibly biased to begin with...

Historically speaking, when people aren't being heard, or lack understanding because not all aspects of what's going on are being shared, they sense something 'isn't right' or that the facts 'don't add up'. From there, conspiracy theories run rampant and arguments often ensue.

To me, community notes put the responsibility back on the users, requiring claims to be backed up by sources - and letting the end-user determine what is fact or fiction based on the sources provided. This harkens back to those days of Xanga, where people earned respect by being honest over time. At least, that was my experience on that platform. If you're an avid user of Reddit, you'll see these thought processes as well. Sometimes for the best, sometimes not... but who came to which conclusions, and how they got there, is better understood.

I suspect that the use of community notes, mixed with AI and bots to assist in the process, will be the norm on many platforms as more folks get acclimated to them.
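
For the curious, the core idea behind how community notes get shown - as I understand it - is that a note only appears when raters who don't normally agree with each other both find it helpful. Here is a toy sketch of that cross-group agreement idea; the group labels, thresholds, and sample ratings are all invented for illustration, and this is not X's actual ranking algorithm.

```python
# Toy version of "cross-group agreement": show a note only when helpful ratings
# come from raters in at least two different groups. Groups, thresholds, and
# sample data are hypothetical, made up for this example.
from collections import defaultdict

def should_show_note(ratings: list[tuple[str, bool]],
                     min_helpful_per_group: int = 1,
                     min_groups: int = 2) -> bool:
    """ratings is a list of (rater_group, found_helpful) pairs."""
    helpful_by_group = defaultdict(int)
    for group, helpful in ratings:
        if helpful:
            helpful_by_group[group] += 1
    groups_in_agreement = [g for g, n in helpful_by_group.items() if n >= min_helpful_per_group]
    return len(groups_in_agreement) >= min_groups

if __name__ == "__main__":
    # A note rated helpful across two groups, and one rated helpful by only one group.
    broad_note = [("group_a", True), ("group_a", True), ("group_b", True)]
    one_sided_note = [("group_a", True), ("group_a", True), ("group_a", True)]
    print("broad note shown:", should_show_note(broad_note))          # True
    print("one-sided note shown:", should_show_note(one_sided_note))  # False
```

The point of requiring agreement across groups is the same one made above about bias: no single crowd gets to declare what's true on its own.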

Social media requires us to be social, and that requires us to interact with each other - learning to respect differences of opinion and to find common ground. After all, we're having these chats online from all over the world, with a variety of backgrounds and opinions on a multitude of topics. Dialogue is imperative to gaining understanding, and as a parent of a non-verbal autistic son, it's a skill that even I find challenging at times.

As we address content moderation, I'm reminded of a quote by Rowan Atkinson: "The best way to increase society's resistance to insulting or offensive speech is to allow more of it."

Now, that quote focuses on insulting or offensive speech, but the concept is the same: in order to understand what is going on anywhere in the world, we need to promote people's ability to communicate and discuss... to allow more of it.

Does this effort take work on our part? Absolutely.

The first sign of wisdom is when one comes to know that one does not know.

In our modern society it's become far too easy to just look at something provided by any source of media (social or traditional) and accept it at face value. I'm sure by now we've come to a point where we are longing for more detailed information in order to make an informed decision. We can take this last presidential election as a prime example. One candidate executed the standard model of running for election, while the other just sat and talked with a podcast host for over an hour. The candidate whom people could hear just talking, with no sound bites, won out over the one who used a more standard and controlled script. Regardless of what side of the political aisle you reside on, that alone suggests people want to hear an actual discussion, not sound bites.

To me, X (formerly Twitter) implementing community notes, and Facebook doing the same, places the responsibility back on the end-user, while keeping some basic moderation in place to keep truly illegal activities at bay. Musk and Zuckerberg may also be addressing two major issues, and not just political ones. There's been a lot of discussion about how social media has influenced elections, and there's a fear that the information being spread isn't factual. The US government, among others, has asked how social media platforms are ensuring that true statements are shared and false ones aren't. Now, that's fine to ask if you're a public figure with power and authority, but what about those who do the voting? Where do the fact-checkers get their information? What did they get right and wrong? The public doesn't want people telling them what's true or not; they want information that can also be raw and unedited - providing more context on what is really going on.

Agree or don't, but we can already see TikTok users rise up en masse to state their concerns - primarily economic, along with the ability to express themselves on a platform that really has its community of users at heart. We also see community notes on X (formerly Twitter) coming into their own, citing sources on how statements being made aren't correct, and giving the end user the opportunity to review all of the sources cited as to why a statement may not be correct.

Opportunity for News Outlets and the Return of Investigative Journalists

Moderation of a community should happen, providing oversight to ensure that a user's experience is a good one, and that users who violate local laws are taken to task. However, no system is perfect, and grace should be applied as reasonably as possible, IMO.

Social media has its limitations, even with all the discussion going on around any given topic. Journalists and official news outlets may have the opportunity to earn the public's respect again by citing sources and doing more investigative journalism - rather than just putting a spin on whatever Reuters or AP puts forth on the daily wire.

What say you? Do you think Community Notes are the way to go? Or do you think a paid third-party group should be the voice of truth? I look forward to your comments and feedback.

If you haven't already, follow me on Patreon, where I dive further into the podcast and the development of Autisable, as well as other topics like this one.
