Can We Rise Above Our Social Media Algorithms And Their Base Tendencies?
David Wojnarowicz - Untitled (Buffaloes)

This is the third part of my series about how algorithms shape our media consumption and have changed the culture. You can read the first post, about Meta, here, and the second post, about TikTok, here.

Going back to the rise of “citizen journalism,” the Arab Spring positioned social media as a boon for society. It was a fresh and idealistic approach to reporting the world outside our windows, and it gave a voice to people who had previously been marginalized or outright silenced. The hope was that by drinking straight from the firehose, we'd somehow get a cleaner view of the world, its people, and their experiences. By offering first-person testimony and a commingling of cultures, Facebook and Twitter immediately became essential and authoritative. Web 2.0 brought other platforms (WordPress and Tumblr) and technologies (RSS and AJAX) that led even Google's algorithms to recognize and showcase the voice of the individual. Voices that were once unaccredited became wildly emboldened as they grew their followings and, consequently, found their megaphone. The algorithms began to create a false equivalence between being heard and being truthful, something that continues to plague our media 15 years later.

We can agree that Occupy Democrats, CNN, Fox News, and Infowars plot points along a spectrum of partisan media. But those are just the accredited names, and each has grown increasingly partisan over the past few years. They're partly being pushed further out by a vocal minority, one greatly emboldened by this age of social media, but we also now know with certainty that the algorithms have deliberately gamified our participation. And that's a problem.

Drawing once again from the Facebook Papers, internal researchers found that for the platform's most politically oriented users, 90% of the content they saw was strictly about politics and social issues. So while we're becoming more tribal, in the most primitive sense, Meta accelerated that trend by guiding people into an echo chamber of their most narrowly defined interests (and, in doing so, provided many of them with an all-defining sense of self). Now, this may not be a problem in a clinical sense; it would be presumptuous for a piece of programmable code not to let each consumer pick their stripes. But in Meta's case, the users associated with the most right-leaning content also found 2.5% of their feed bogged down with blatant misinformation (significant relative to the volume of content they consumed). And this brings us back to questions of responsibility. Should these algorithms be fully regulated, or at the very least overseen?

Maybe what algorithms don't do, and what they clearly should, is consider the spectrum of reliability across the sources they amplify. The people who code these algorithms make a set of value judgments about what constitutes viable and desirable content. They're maximizing for certain desired outcomes, which will naturally be self-serving to the ecosystem. But to build an efficient and largely automated system, this notion of “desirable content” has to be based on data feedback from user engagement... and there's no definitive measurement for whether a piece of content is “good” in social media. E-commerce can at least use sales conversion to underpin its product preference engines (consider a retailer's on-site product river); it knows where the journey ends. Social media, on the other hand, is blindly chasing incremental engagement and revenue wherever they can be found, while sustaining a brand and a community, so there's no single, defining KPI to drive the algorithm. Consequently, this notion of validation, which we'd otherwise take for granted, is open to interpretation.
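
To make that concrete, here's a minimal sketch, in Python, of what ranking by an engagement proxy looks like. Everything in it is invented for illustration (the event types, the weights, the probabilities); the point is that the weights themselves are the value judgment, and nothing in the objective ever asks whether the content is true.

```python
# A minimal sketch of engagement-proxy ranking. Hypothetical weights
# and event types -- not any platform's actual formula.

from dataclasses import dataclass

@dataclass
class Post:
    id: str
    p_like: float     # predicted probability the user likes it
    p_comment: float  # predicted probability the user comments
    p_share: float    # predicted probability the user shares it
    p_report: float   # predicted probability it gets reported

# The weights ARE the value judgment: someone decided a comment is
# "worth" fifteen likes, and how much a report should cost.
WEIGHTS = {"like": 1.0, "comment": 15.0, "share": 30.0, "report": -50.0}

def engagement_score(post: Post) -> float:
    return (WEIGHTS["like"] * post.p_like
            + WEIGHTS["comment"] * post.p_comment
            + WEIGHTS["share"] * post.p_share
            + WEIGHTS["report"] * post.p_report)

def rank_feed(posts: list[Post]) -> list[Post]:
    # No ground truth for "good" content: the feed simply sorts by
    # whatever the weighted engagement proxy says.
    return sorted(posts, key=engagement_score, reverse=True)
```

Swap in different weights and the same feed reorders itself. That's the editorial power hiding inside a supposedly neutral sort.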

Consider that today Meta collects over 10,000 different signals to predict how a user will engage with each post, and its inference and weighting for those signals remain in flux. It now pushes less graphic violence and “disturbing” content (their words) than before, and it no longer uses negative responses to push posts up the content river. Additionally, Meta has retrained its algorithm to predict which posts would be good or bad for the world, and it attempts to optimize for the former. So maybe the Tin Man got a heart and grew a conscience, or maybe this is just part of the transformational brand overhaul that turned Facebook into Meta. Either way, do we want our media platforms to favor content through a lens of virtue? Should we want Meta, Google, and TikTok to be the arbiters of good taste?
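
One way to picture that retraining (my assumption about its shape; Meta hasn't published a formula) is as a demotion term layered on top of the engagement score: a second model predicts the probability a post is “bad for the world,” and that probability scales the score down rather than removing the post outright.

```python
# Hypothetical sketch of a "bad for the world" demotion. The function
# name, the blend, and the 0.9 strength are all invented for
# illustration -- not Meta's actual mechanism.

def adjusted_score(engagement: float, p_bad_for_world: float,
                   demotion_strength: float = 0.9) -> float:
    """Scale the engagement score down as the predicted
    'bad for the world' probability rises."""
    return engagement * (1.0 - demotion_strength * p_bad_for_world)

# A viral but harmful post loses most of its ranking power:
print(adjusted_score(engagement=100.0, p_bad_for_world=0.8))  # ~28.0
```

Notice that `demotion_strength` is exactly the virtue lens the question above is about: a single tunable parameter decides how much “good for the world” outweighs engagement.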

That argument inevitably becomes a debate about the First Amendment, which, to be realistic, gets us nowhere.

The First Amendment is an absolute, with many Supreme Court precedents one can cite, and they all tend to protect the most flagrant and irresponsible voices (down to even hate speech). It's difficult to parse responsibility when these algorithms are said to be inherently objective, but they do impose a set of values, however implicit or inadvertent. This is the basis for Gonzalez v. Google, a case the Supreme Court recently agreed to hear.

Section 230 of the Communications Decency Act has long protected big tech companies from the consequences of the disinformation, discrimination, and violent content they knowingly or unknowingly distribute. But after 26 years of protection, that basic premise is being called into question because of the nature and impact of... their algorithms. One could say that a dispassionately objective algorithm, unconcerned with any view of merit, is the only thing that lets these brands be platforms and not publishers; that distinction has been everything for establishing their protections. But if we divorce the challenge from the players and their precedent, we can change the argument.

We all know what types of content will always get the strongest response, but it's not just the salacious and divisive we need to worry about. The perils of algorithmic publishing also include the familiar, and that's where confirmation bias comes in. As big as the Internet has made the world, our spheres of reference have gotten somewhat smaller. Netflix has actually pulled in the other direction: having realized that 86% of its content finds more engagement outside its native region, it tags content in ways that inform the algorithm to push global titles, which inherently drives tolerance, understanding, and empathy for different cultures.
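
As a hedged sketch of how that tagging could feed a ranker (the region fields and the boost factor are my invention, not Netflix's published system), the mechanism reduces to a simple score adjustment:

```python
# Hypothetical cross-region boost: content tagged as coming from
# outside the viewer's region gets a score multiplier. Field names
# and the 1.2 factor are invented for this sketch.

def boosted_score(base_score: float, content_region: str,
                  viewer_region: str, boost: float = 1.2) -> float:
    """Nudge out-of-region titles up the ranking."""
    if content_region != viewer_region:
        return base_score * boost
    return base_score

print(boosted_score(80.0, "KR", "US"))  # 96.0: a Korean title, boosted for a US viewer
print(boosted_score(80.0, "US", "US"))  # 80.0: a domestic title, unchanged
```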

That Netflix produces its own content obviously gives it a motive to orchestrate exposure across borders, simply because it wants to best merchandise the $17B a year it invests in content. Social media platforms, on the other hand, don't produce their content, and they don't even have the self-interest to diversify our exposure in ways that could benefit society. Through their algorithms, everything begins to feel either safe and familiar, or the funhouse mirror becomes dangerously existential. We're consequently stuck in a cloying echo chamber that gives us what it thinks we want, based on the presumptions of data science. But algorithms change fast. Their views of our interests and values will inevitably drift, quickly pushing us out toward a horizon that amplifies the fringe of those interests and values.
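
Here's a toy simulation of that drift (an assumption about the dynamic, not any platform's documented behavior): the system recommends content slightly past its current estimate of your interests, you engage with some of it, and the estimate updates toward what you engaged with rather than toward who you actually are.

```python
# Toy feedback-loop simulation: the interest estimate drifts toward
# the fringe because each round's recommendation sits a notch past
# the last estimate. Every number here is invented for illustration.

import random

random.seed(42)  # reproducible toy run

true_interest = 0.10      # where the user actually sits on some 0..1 axis
estimate = true_interest  # the algorithm's working model of that interest

for step in range(1, 11):
    # Recommend content a notch past the current estimate: edgier
    # variants of a known interest tend to score higher on engagement.
    recommended = min(1.0, estimate + 0.1)
    # The user engages with much (not all) of what's shown, so the
    # observed signal sits near the recommendation...
    engaged = recommended * random.uniform(0.85, 1.0)
    # ...and the estimate updates toward the engagement, not toward
    # the user's true interest. Each round pushes it further out.
    estimate = 0.5 * estimate + 0.5 * engaged
    print(f"round {step:2d}: estimate = {estimate:.2f}")
```

Run it and the estimate climbs steadily away from 0.10, even though the user never changed. That's the horizon effect in miniature.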

Algorithms are rapidly redefining our societal norms and our collective sense of propriety, so we certainly don’t want to give them even more editorial power. There’s an aspirational hope that these algorithms will come to expand our perspectives and bring us some sense of enlightenment, but the more immediate take is urgent and essential: If our platforms can’t at least control the spread of misinformation, and if they can’t resist being gamed for monetary, political or cynical gain, we’re destined to remain a herd of buffalo running off a cliff.
