Surviving Social Media
Over 50% of LinkedIn posts are written using AI.1 This is not one of them.
I don't come on LinkedIn often. I don't actually spend much time on social media – except Bluesky, which reminds me of Twitter when I found that now-lost corner of the internet in 2009 – but, as with almost anyone my age, one of the first social media accounts I created was on Facebook. I still have it, but I've not posted there in about half a decade.
Back in the early days though, nobody was really sure exactly how to use these new sites, including the people who ran them. People mostly built up lists of their friends and were presented with a chronological feed of their musings. Great for finding out what Aunty Doris was doing, but not so great for paying for all those servers needed to store photos of dogs and babies. After all, Facebook isn't a charity.
To keep the lights on, social media companies needed to attract advertisers. Companies pay for ads in places people spend time, so the next step for any social network was to find a way to keep their users coming back more regularly, and for longer. Doing that meant turning to psychologists2 who could help create algorithms, designs, and features that keep us addicted. These are the same techniques that make gambling addictive, playing into our most primal need for gratification. They scratch an itch.
As the number of users grew, so did the need for a steady flow of this self-generated content to keep them engaged. People posting content – creators – needed to feel the rush of pleasure to keep them posting. That meant showing them just how much people loved their content: presenting analytics on views and likes and interactions. By reviewing these stats, creators could hone their craft, maximise their reach, and ultimately keep their audiences coming back for more. In return, social network owners could build better algorithms that target creator-supplied content at like-minded people, improving the quality of their feeds, and making them more likely to return (and therefore more likely to engage with advertising).
The problem with an explosion of content, though, is that the companies who run social media sites now had to keep tabs on many millions of times more data than ever before. This data is a double-edged sword: it helps social networks target advertising and therefore increase profits, but it also makes it harder to moderate exactly what their platforms are being used to promote. The days of Aunty Doris' dog photos are long behind us, and now social media sites have to contend with verbal abuse, lies, and advanced disinformation campaigns run by state-sponsored agents.
Over the years, different platforms have tried different techniques to minimise the spread of dangerous information through their sites. An optimist would say this desire to moderate was entirely altruistic, but in reality a social network used to spread hate isn't the sort of place that companies want to place ads, and governments have something to say about becoming a forum for hate speech. So some platforms created large teams of moderators, while others focused on more user-led moderation in the form of what Twitter called Community Notes. The former allowed stricter oversight at the expense of costing more – both in money and in the toll it takes on moderators' mental health3 – while the latter meant problems could be flagged faster by users than a dedicated team could ever manage.
Facebook's approach has, for many years, been heavily focused on human moderation. Their oversight team grew to many thousands of individuals, all attempting to keep conversations on their platform palatable to their users, and also to their advertisers. Recently, however, they announced a plan to move to Twitter-style community notes. For Meta – Facebook's owner, which also oversees Instagram and WhatsApp – this means reducing the cost of their moderation team. For their users, it promises more nuanced and more rapid responses when false information is flagged. A win-win, surely?
Unfortunately, a community-driven moderation process has some flaws. Because users have a less detailed understanding of policies and legislation, they are more likely than a trained moderator to flag content that doesn't actually meet the criteria for removal. To ensure this doesn't result in the mass removal of content that's 'edgy but allowed', community notes are designed to comment on the original post, not remove it. As such, more extreme content that a professional Facebook moderator would have removed may now stick around for much longer.
The thing is, though, harmful content is also controversial. With controversy comes argument, and with argument comes engagement. For the social network owner, this means more time spent on their platform, and for their advertisers, that means more opportunities to get their message in front of users. As long as the content the ads sit next to is at least palatable, having it stick around longer than it would have done under a moderator-based scheme is actually better for Facebook than removing it would be.
The longer harmful content is visible, even if it has a note attached, the more damage it can do and the more money it can make. Because social networks are built to reward us for creating and sharing content that makes the network money, they actively encourage more of this controversial content.4 As more controversy is encouraged, networks make more money from more engagement, and more people are served up this nastier content.
The flow of this kind of content can, sometimes, get too much. In some cases, it might mean advertisers withdrawing their ads from a platform that is seen to promote views that don't align with their brand. Market forces may then help pressure networks into improving their moderation or, if they're bankrolled by enough other funding, it might have little impact at all. Twitter survived an exodus of advertisers after its sale because Elon Musk has ridiculously deep pockets and could shield it from advertisers annoyed by his right-wing sympathies. Meta, through years of honing its targeted ad business, is relied on for cheap promotion by thousands of small companies who have nowhere else to turn. They'll survive a controversy or two without their bottom line even wavering.
So, if the platform can carry on as its content becomes more extreme, what happens? Eventually, some users will start being served content that they disagree with. This disagreement initially helps feed the arguments that promote the spread, but if the deluge is too large, or the content too extreme, those who disagree will eventually disengage. Disengagement could mean spending less time on the platform, or even leaving it entirely. This disengagement quietens the voice of those who disagree with a controversial opinion, creating echo chambers of algorithmically-perfected posts.
When there are multiple competitors in the social media space offering similar services, moving to a place where the conversation is more agreeable isn't that difficult. The easier it is to make the move, the faster it happens, and the quicker people leave an argument to find a space where people are nicer – or at least agree with them more. This human tendency to find places where peers are more agreeable means different social media sites more rapidly become echo chambers, filled more and more with people who share a viewpoint. The more this happens, the more pressure the remaining dissenters feel to conform, or to leave.5
Some dissenters can't leave, or don't feel strongly enough one way or another to put in the effort. People who don't find it as easy to switch to other platforms – elderly relatives, or those who rely on Facebook, Instagram, or WhatsApp for vital personal or business communications – may find themselves locked into an ecosystem that presents them with opinions they don't fully agree with. Over months of being fed controversial content by an algorithm designed to keep them engaged, the likelihood of them starting to entertain the validity of these assertions increases, as does the social pressure to conform in what might be the only community they have access to. Someone who started out on the fence, or even disagreeing entirely, ends up stuck in an echo chamber they can't escape.
An echo chamber is bad. It makes it harder for people to listen to reason, and it makes it very easy to vilify anyone who doesn't have a voice at the table. When you're spending more time seeing, hearing, and discussing opinions you agree with, you lose sight of the fact that your opinion might need challenging. You become falsely confident, failing to critically question the basis for why you believe what you do. If someone comes along to tell you that your opponent is crazy, you'll more easily believe them because everyone you come across agrees with you, so you must be right.
This, then, is how divisions are formed. This is how people go from healthy debate to blind loyalty. This is how, over time and without realising it, you stop listening to reason and start listening to the tribe around you for fear of being cast out after one too many questions. Instead, you hit the like button on every post you see, share it with your network, and add a comment praising the creator or laughing at someone who disagrees. Social media companies have monetised echo chambers: they've created applications that promote blind loyalty because it drives engagement. And for them, engagement means money.
Social media platforms argue that this is merely a defence of your right to free speech. The thing about free speech, though, is that it's not an all-powerful defence. Just like with any freedom, there are limits to what you're allowed to say. Those limits exist because your right to free speech shouldn't endanger the safety of others, or the general morality of society. There are many situations where the limits are stricter than this core tenet, and sometimes they are stricter than the situation really requires. However, it is a fundamental truth that your ability to say what you want should not damage other people.
When Meta announced Facebook would no longer be moderated by a central team, they announced some other changes too. One of these was the removal of restrictions on referring to sexual orientation or gender identity as a mental illness.6 To explain this change, they framed it as a protection of free speech and debate:
We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like 'weird'.
To be clear, debate is important. Disagreement in a forum where both sides respect each other and can have an open discussion about why they hold particular opinions is vital to finding truths and developing our understanding as a society. A post next to an advert for dropshipped products or a new cryptocurrency drop is not the place for healthy debate.
Facebook, like almost all other social networks, is designed to breed controversy. It's designed to make sure that the moment you call someone mentally ill for being gay, as many other people as possible see it, like it, and reshare it. It is designed to force people who disagree with that sentiment to see it more and more often until they either capitulate or leave. Eventually, with networks the size of Facebook, the opinions shared on them become so pervasive they risk becoming consensus not because they are factually accurate, but because they are never questioned.
We live in a world where the truth is no longer important, and we elect leaders who make easily-debunked claims that play into our fears. We develop these fears by reading overhyped articles shared on platforms designed to keep us scrolling not because they educate us, but because they entertain us.7 They ask us to ignore facts for fear of ridicule, and they present loyalty to the cause as the only way to save us from threats they themselves invented.
The existence of LGBT+ people does not threaten society, and it is not unnatural. In fact, the presence of bisexual and homosexual behaviour in a group seems to have social benefits, and the development of queer (sexual and gender) identities occurs repeatedly in many isolated communities across both the human and animal kingdoms.8 Promoting the discussion of queer identity as a mental illness is scientifically inaccurate, detrimental to society, and downright dangerous.
In the time it's taken you to read this article so far, around 13 young queer people have seriously considered committing suicide in the US alone.9 The suicide rate among young LGBT+ people is over 4 times higher than among their peers,10 with the rate highest for transgender individuals. We also know from research that young queer individuals are more likely to spend time online11 – finding other queer people is easier with access to a larger population, especially in areas where it's frowned upon – so they are more likely to see newly-acceptable content claiming their sexuality or gender identity is the result of mental illness. Free speech has a cost, and in this case the cost is the lives of young queer people searching for acceptance.
The fact that many individual lives are lost because of discrimination-disguised-as-debate is horrific, and it's just the beginning. Making queer identities controversial by normalising the debate of their existence empowers more malicious behaviour too. It makes it easy for policymakers to justify the removal of access to vital protections like PrEP and cheap/subsidised testing for STIs more common in queer communities. This, in turn, could result in an increase in cases of HIV and other life-limiting conditions. "After all," they'll say, "why should we be funding controversial treatments like these?" What starts as a for-profit company adjusting its rules to increase its profit – or to continue operating unquestioned under a new political regime – becomes something far more sinister: it becomes the basis for a movement to delegitimise the existence of a persecuted minority.
The overwhelming majority of people know at least one LGBT+ person, and almost all those who think they don't are mistaken only because their peers don't feel safe openly admitting their orientation. Legitimising the discussion of queer identity as a mental illness drives that wedge deeper into society, creating rifts that stop vital discussion, limit the advance of healthcare and the treatment of disease, and ultimately increase the likelihood of what should be entirely preventable deaths.
Only two parties benefit from Meta allowing this kind of content on their platform: Meta themselves – they can weaponise it to increase engagement, as well as minimise the risk of a political attack against their monopoly – and anyone who wants to legitimise discrimination against queer people. The weight of scientific evidence proves that LGBT+ identities are not the result of mental illness; attempting to legitimise the claim that they are does nothing other than endanger the lives of very real people. For cisgender, heterosexual people, this small change to Meta's moderation policy might seem inconsequential. For the rest of us, it means surviving social media – and the impact it's going to have on the social and political landscape – just got a whole lot harder.