Blind and Not Blind: The Social Media Dilemma


In the real world, you cannot be blind and not blind at the same time. We have that straight, right? Because it seems like social media platforms do not.

Under the auspices of a malformed 1996 law, Section 230 of the Communications Decency Act, self-styled “social media platforms” like Meta, X, and Reddit can decide what they want to allow on their services, yet cannot be held responsible for those content decisions, even when the decisions play a role in harms, large or small, to individuals or the general public.

What they claim is that they are both blind and not blind at the same time, and here is how that works:

On a moment-to-moment basis, and entirely at its own discretion, a platform can declare itself either “blind” (a common carrier, like a telephone company) or “not blind” (like every traditional media company that decides what to publish), depending on whether it perceives a liability risk. This logical impossibility flows from the provisions of the abovementioned Section 230. Section 230 provides a legal safety net for social media platforms such that they can, in effect, publish lies, threats, and videos of atrocities meant to encourage more atrocities, without owning any responsibility for the damage that publication causes to individuals or the general public.

It puts them in a small minority of entities that, under present law, can claim near-perfect immunity even as they monetize discord, hate, and even murder in the real world. The byproduct of their vertically integrated, entirely risk-free publication engine is, arguably, the end of civil discourse in the US. How durable that shield is became clear in 2023, when the Supreme Court disposed of a case called Gonzalez v. Google. The plaintiff argued that YouTube’s recommendation engine had helped radicalize one of the ISIS-affiliated terrorists who murdered 130 people in the November 2015 Paris attacks, among them the plaintiff’s daughter. The lower courts had held that Section 230 immunized Google, and the Supreme Court left that shield undisturbed, sending the case away on other grounds.

There is little doubt that the YouTube recommendation algorithm played a role in radicalizing this terrorist. But the argument against Google (owner of YouTube) was cut down because the courts treated whatever Google’s recommendation engine did as shielded by Section 230.


A Bad Bargain

Today we suffer with an almost unimaginable amount of coarseness in public life. From descriptions of courtroom flatulence to admonitions to “strap on a Glock” for election season, we have long left behind the notion that public discourse assumes civility. It was Donald Trump who stated that he could murder someone in public (“shoot someone on Fifth Avenue”) and not lose any votes. From that day forward, we have seen the very notion of civility in public discourse trashed, largely by those associated with the Trump candidacy. They have replaced civility with threats of violence, exhortations to hate, and an air of general mayhem.

Some may say that it is simply a matter of free speech, and that the First Amendment guarantees zero consequences for speech. That is a misunderstanding of the First Amendment. By law, not all speech is protected, even if lies, on their own, generally are. What is not protected is speech designed to incite violence or mayhem, which means that, technically, anyone who posts to incite violence could be found liable for any trouble they cause. The problem, of course, is that the person who posted it is almost always untraceable, or a bot, or beyond local jurisdiction. What can be traced are the enormous technological and business assets social media deploys to attract, induce, publicize, and monetize such damaging content.

We already have laws in place to protect the public from violent rhetoric, obvious lies, and sinister manipulation by mysterious actors. Some of these rules are overseen by the Federal Communications Commission, which governs what broadcasters may air without fear of sanction; others, such as defamation and incitement law, bind every traditional publisher. Naturally enough in an open society, these companies remain free to publish nearly anything they want.

But under those rules, rules we have all lived under for decades with little complaint, traditional media companies cannot publish pornography, personal threats, or known and obvious falsehoods that damage others. Nor can they publish harassment porn or doxxing campaigns, or facilitate swatting and the targeting of juveniles. Traditional media long ago accepted that with great power comes great responsibility.

Social media turns that on its head, and we are all the worse for it.

Social media says they can have great power—greater power than any media that came before—but at the same time own zero responsibility.

If that seems like a lopsided bargain, perhaps that’s because it is a lopsided bargain.

Jonathan A. Greenblatt, CEO of the ADL (Anti-Defamation League), issued a statement in 2021:

Tech companies must be held accountable for their roles in facilitating genocide, extremist violence and egregious civil rights abuses. We applaud Senators Warner, Hirono, and Klobuchar for their leadership in introducing a robust bill that focuses on supporting targets of civil and human rights abuses on social media while also addressing cyber-harassment and other crimes stemming from the spread of hate and disinformation. The sweeping legal protections enjoyed by tech platforms cannot continue.

We can expose how wrong this is with a simple exercise I like to call “take away the computer.” It lets us see exactly what the true dynamic is, without the confusion of bits and bytes and the excuses of technologists who believe they occupy a different world than the rest of us. Take away the computer here, and we end up with a situation that very much resembles my original proposition, as follows:

You cannot be blind and not blind at the same time.

You cannot be alive and dead at the same time, Schrödinger’s cat notwithstanding.

You cannot be free to make decisions and unable to make decisions at the same time.

The illogic of the social media position is obvious, smug, and deeply damaging to the social contract that keeps us out of a natural state of war.

What they are saying, in essence, is that they are above the law, and you are not.

Like all things based on agreements between individual sovereigns—we call them laws—this can be changed.
