A “Duty of Care” for Facebook
Shelly Palmer
Professor of Advanced Media in Residence at S.I. Newhouse School of Public Communications at Syracuse University
French regulators have recommended requiring a “duty of care” for big social networks, meaning the networks would have a legal obligation to moderate hate speech published on their platforms. The regulators have no idea how this would or should be accomplished, but such a requirement makes sense to them. To his credit, Facebook CEO Mark Zuckerberg said, “We can make progress on enforcing the rules, but at some level the question of what speech should be acceptable and what is harmful needs to be defined by regulation, by thoughtful governments that have a robust democratic process.”
Governments and a Robust Democratic Process
Facebook is a true democracy. So are Google, Twitter, and every other successful US-based search engine or social network. Everyone gets one vote, and every vote counts. Your behaviors determine what information you see. The more you like something, the more of it you get. The less you like something, the less of it you get.
The United States is not, strictly speaking, a democracy. Scholars argue this point, but the United States is variously described as a constitutional republic, a federal republic, or a democratic republic; in any case, it is a constitutional representative democracy. In its simplest terms, this means that we elect our leaders and let them decide for us how we are governed. If we don’t like their decisions, our constitution describes a process (elections) to ensure the peaceful transition of power.
A Shift in Constituencies
Global population is estimated to be 7.7 billion people, and approximately 4.2 billion of them have internet access. This raises a question: can a constitutional republic (or any number of other elected central governments) control a true democracy that is the direct voice of about 55 percent of the global population?
One Person, One Vote – Except …
People are no longer the only gatherers of information or possessors of knowledge. AI systems all over the world collect and process data in ways that humans cannot.
Do we need a traffic light at the corner of Oak and Main? Waze knows. Individual people may know. Remotely located representatives of the people may get word. But Waze knows.
Should Waze sit on the local-level, state-level, or federal-level committee charged with allocating money for traffic lights? How about all three committees? Waze’s vote would be the most informed vote about the traffic patterns at the corner of Oak and Main. Should a different AI system (for example, one that scores productivity of eligible government contractors) be on the committee to choose the vendor who installs this traffic light?
Waze does not need any human being’s opinion to validate the most efficient way to move traffic through an intersection. (At some point we will probably let Waze, or a system just like it, control traffic lights to speed up our commutes.)
You can ask various AI systems the same question in different ways. You can score the vehicular traffic in terms of commercial versus pleasure use, safety, or economic impact on Main Street’s businesses. Today, a person would have to gather all of that information and analyze it. This is not a good use of a human being’s time. In the very near future, machines will pick up the required data and score the need for a traffic light based on whatever ROI calculations humans choose. So, humans on the traffic light committee should vote on the ROI goal, not on which intersection is most in need of which solution. Humans might be more sympathetic to people they know, or to others whom they believe have greater needs, but sympathy and politics are separated by a very thin line, and neither is a reason to ignore the data.
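The division of labor described above (humans vote on the ROI goal, machines score the intersections) can be sketched in a few lines of Python. Everything here is a hypothetical illustration: the metric names, the weights, and the data are invented for the example and have nothing to do with Waze’s actual systems.

```python
# Hypothetical sketch: rank intersections for a traffic light by a
# human-chosen ROI weighting. All metrics and numbers are invented.

def roi_score(metrics, weights):
    """Weighted sum of normalized metrics (each assumed to be 0..1)."""
    return sum(weights[k] * metrics[k] for k in weights)

# The committee votes on the ROI goal: what matters, and how much.
weights = {"commercial_traffic": 0.2, "safety_incidents": 0.5, "congestion": 0.3}

# The machines gather the per-intersection data (invented here).
intersections = {
    "Oak & Main": {"commercial_traffic": 0.7, "safety_incidents": 0.9, "congestion": 0.6},
    "Elm & 2nd":  {"commercial_traffic": 0.4, "safety_incidents": 0.2, "congestion": 0.8},
}

# Rank intersections by need under the chosen weighting.
ranked = sorted(intersections,
                key=lambda name: roi_score(intersections[name], weights),
                reverse=True)
print(ranked[0])  # the intersection most in need under this weighting
```

With these invented numbers, Oak & Main scores 0.77 against Elm & 2nd’s 0.42, so it ranks first; change the weights (the ROI goal) and the ranking can flip, which is exactly the decision the humans keep.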
Some call AI “intelligence separated from consciousness.” In essence, that is what US-based social networks and search engines are.
The Danger of the Great Unwashed
Wikiality, “the best narrative wins,” has all but replaced reality. Fiction often replaces facts. Lies are harder and harder to separate from truths. A pure democracy is truly dangerous to powerful people. Narratives are hard to control.
That said, the algorithmically based systems that govern our online world provide the most accurate reflection of the user population. Not the prettiest, not the most agreeable, but simply the best we have to date. And, most importantly, always to the benefit of the AI’s shareholders.
Not a New Issue, Just a New Scale
Benjamin Franklin was the Mark Zuckerberg of his time. He had a huge publishing enterprise, the largest media empire in the American colonies, and was also postmaster – first locally in Philadelphia, but ultimately of the United States. He had complete editorial control of what was published and how widely it was distributed.
The control of information was just as important to the American rebels as it was to the monarchy of the British Empire. “Duty of care” was not regulated; it was required. History may help us understand the quest for internet-based global narrative control by showing us where we came from. And we get to view it through the lens provided by wisdom earned over the past 243 years.
Maybe Tomorrow I’ll Have More Answers Than Questions
We are building a world our children and grandchildren will inherit. Can the course of our “the best narrative wins” (and “facts be damned”) future be altered by breaking up or regulating search engines and social networks? Is a new form of pure democracy trying to evolve? Are algorithms that automatically assemble celebratory Facebook videos featuring extremist content the price we have to pay for this new democracy? Is there a quick regulatory fix for echo chambers and comfort zones? Why can’t we just pass a law that says, “If you make something available online, you are responsible for it”? Would that solve the problem? And if this issue is so important, why are politicians just concentrating on AI systems that moderate and curate humanly understandable information (videos, websites, images, etc.)? Why aren’t they just as interested in the AI systems that will ultimately control vast budgets, project management, the flow of physical goods, and street traffic – to name just a few?
Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it. Facebook is a featured partner of the Shelly Palmer Innovation Series presented by The Palmer Group.
About Shelly Palmer
Named one of LinkedIn’s Top Voices in Technology, Shelly Palmer is CEO of The Palmer Group, a strategy, design and engineering firm focused at the nexus of technology, media and marketing. He is Fox 5 New York's on-air tech and digital media expert, writes a weekly column for AdAge, and is a regular commentator on CNBC and CNN. Follow @shellypalmer, visit shellypalmer.com, or subscribe to the daily email at https://ow.ly/WsHcb
Comments

CMA Planners, 5 years ago: “Sad and so true!”

Anthropological consultant, owner Culture Contact, 5 years ago: “I'm not sure where to start with how much it worries me that a lot of legislation is being passed with little knowledge of the subject at hand (not just in tech, to be fair). It reflects the ‘facts be damned’ attitude you mention, and it is definitely becoming a pattern of social behaviour. It's also odd how we shift blame: not so much onto the people who do bad things, or the willing audience that will always find a way to tune in, but onto the providers of the services. By that logic, the owner of a store where a hostage situation goes down is as guilty as the armed robbers and the people who revel in their violence. That's dangerous thinking, as it frames the violence itself as neutral, even acceptable...”

Senior Technology Manager at Winbond Electronics, 5 years ago: “The most popular narrative wins? Or the most well-funded?”