Start With A Fact-Based World
Kevin McDermott
Framing Growth Strategy, Telling Growth Stories | Thought Leadership | Scenario Planning | Strategic Communications
My colleague Andrew Green and I were talking recently about times when we imagined a solution to a client problem that depended on a technology that did not yet exist. We let our imaginations run, and before long we had some ideas for restoring a fact-based world.
Because if there is anything the majority agrees on at the moment, it is this: our inability to argue from a shared set of facts is tearing us apart.
This is said to be a function of our tribalism: the comfort we find among people who share our take on the world. That can make us impervious to ideas that compete with what we already believe. As the historian Timothy Snyder recently observed of the social-media bubbles around which tribes organize, they erode “the distinction between what feels true and what actually is true.”
The hopeful thing is that while some of us may be prejudiced in the abstract about people outside our tribe, we behave like human beings toward one another in the particular. It is hard to think evil thoughts about someone else if you feel you know them.
So what if we could imagine tools that made it harder to reduce other people to caricatures?
Our imagined inventions are subject to two broad rules. One, they may not infringe on any person’s freedom of speech or of thought. Two, anyone may opt out. But if they worked, they could increase the common space in our national Venn diagram, a claim not all technologies can make.
Social-media corrector
The saying “a lie gets halfway around the world before truth puts on its boots” has been attributed to lots of people. What is known for sure is that it has never been more apt than in the age of social media. It is in the pernicious nature of the Internet that it does not give truth time to fill its sails.
But what if lies could be stopped at the source? What if there were an AI solution trained to distinguish between opinion and assertions of fact?
It is not hard to envision a technical fix that could recognize assertions of fact in audio or text and then search the Internet for supporting evidence. The text of any claim would change color depending on its degree of truth: green for hard evidence, yellow for soft, red for never happened.
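To make the idea concrete, here is a minimal sketch of what that grading step might look like, assuming the two hard parts already exist: recognizing a factual assertion and searching trusted sources for corroboration. Both are stubbed with toy data below (the `evidence_score` lookup is purely hypothetical); only the green/yellow/red mapping is shown.

```python
# A minimal sketch of the claim-grading idea, not a working fact-checker.
# Claim detection and evidence retrieval are stubbed with toy data; a real
# system would need an NLP model and a trusted search backend (both assumed).

from dataclasses import dataclass


@dataclass
class GradedClaim:
    text: str
    support: float  # 0.0 = no supporting evidence, 1.0 = strong evidence
    color: str      # "green", "yellow", or "red"


def evidence_score(claim: str) -> float:
    """Placeholder for the hard part: searching trusted sources and scoring
    how well they corroborate the claim. Hypothetical, not implemented."""
    toy_corpus = {
        "water boils at 100 degrees celsius at sea level": 0.95,
        "the moon is made of cheese": 0.05,
    }
    return toy_corpus.get(claim.lower(), 0.5)  # unknown claims default to "soft"


def grade_claim(claim: str) -> GradedClaim:
    """Map an evidence score onto the green/yellow/red scale described above."""
    score = evidence_score(claim)
    if score >= 0.8:
        color = "green"   # hard evidence
    elif score >= 0.4:
        color = "yellow"  # soft or mixed evidence
    else:
        color = "red"     # no support found
    return GradedClaim(claim, score, color)


if __name__ == "__main__":
    for c in ("Water boils at 100 degrees Celsius at sea level",
              "The moon is made of cheese"):
        g = grade_claim(c)
        print(f"{g.color.upper():6} ({g.support:.2f})  {g.text}")
```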
Humans already do this pretty rapidly, even during presidential debates. NewsGuard Technologies recently announced a service that employs trained journalists to rate news and information sites. We envision something close to an AI-enabled public utility, not unlike Wikipedia or the Associated Press, which enjoy nearly universal credibility. Its goal would be to leave the post-truth world behind and return to a shared set of facts from which we could argue.
This would not be suppressing speech. Merely grading it.
The CBO on steroids
We are familiar with using Monte Carlo simulations in financial planning. What if we could deploy the same technique to model the impact of competing public policies?
What we envision is something like the Congressional Budget Office on steroids. If we have an infrastructure bill and a defense-appropriations bill, a climate bill and a health-care bill, our simulations would play out the consequences of making, or not making, them policy. Choose x and you have less to spend on y, and here is what you’ll get. Tweak that and see what the consequences are. You are, in a way, red-teaming for unintended consequences.
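As a purely illustrative sketch, here is what that tweak-and-see loop might look like in miniature. Every number is invented for the example: the budget, the policy names, and the payoff and uncertainty figures are assumptions, standing in for the real cost and impact estimates a serious model would need.

```python
# A toy Monte Carlo sketch of the "CBO on steroids" idea: allocate a fixed
# budget across competing bills, draw uncertain outcomes for each, and look
# at the spread of results. All figures here are invented for illustration.

import random

BUDGET = 100.0  # arbitrary units

# Hypothetical policies: (expected payoff per unit spent, uncertainty)
POLICIES = {
    "infrastructure": (1.4, 0.3),
    "defense":        (1.1, 0.2),
    "climate":        (1.6, 0.6),
    "health care":    (1.3, 0.4),
}


def simulate(allocation: dict, trials: int = 10_000) -> list:
    """Return the total simulated payoff for each trial of one allocation."""
    results = []
    for _ in range(trials):
        total = 0.0
        for name, spend in allocation.items():
            mean, sigma = POLICIES[name]
            total += spend * random.gauss(mean, sigma)
        results.append(total)
    return results


def summarize(label: str, results: list) -> None:
    """Print low, median, and high outcomes so trade-offs sit side by side."""
    results = sorted(results)
    p10, p50, p90 = (results[int(len(results) * q)] for q in (0.10, 0.50, 0.90))
    print(f"{label:25} p10={p10:6.1f}  median={p50:6.1f}  p90={p90:6.1f}")


if __name__ == "__main__":
    random.seed(42)
    even_split = {name: BUDGET / len(POLICIES) for name in POLICIES}
    tilt_climate = {"infrastructure": 20, "defense": 20,
                    "climate": 40, "health care": 20}
    summarize("even split", simulate(even_split))
    summarize("tilt toward climate", simulate(tilt_climate))
```

Run it and the two allocations produce visibly different spreads of outcomes, which is the point: the choice, and the uncertainty around it, is laid out in the open rather than argued around.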
The tool, unlike ourselves, would not be value-driven. The framing of questions put to it would be neutral. AI does not take sides. It simply answers the questions we pose to it.
Our simulator would make our policy choices—and their consequences—harder to evade with equivocation and grandstanding. We may ignore what the simulator tells us. But we would have to acknowledge that that’s what we were doing.
Experiencing the inner life of others
When we argue about hard things like the Second Amendment or immigration, we sometimes look at our opponents and ask, What could they be thinking? What if it were possible to actually know?
Our brains are big zip files with lots of storage and data switching. Already we are learning to harness the thoughts of paralyzed individuals to animate implants that, say, move a prosthetic hand. It is not hard to imagine reverse-engineering the insights this work is generating to download someone’s neural “file” and then temporarily upload it for us to experience personally. Then we could have an answer to the question, Why do you feel the way you do?
OK, this one is tricky. Ownership of our cerebral data would need to be absolute. Otherwise advertisers would be taking up full-time residence in our skulls. Most of our emotional life would have to be out of bounds, though the boundaries might blur fast given the way attachment to our personal privacy steadily weakens.
See: We admit possible flaws in our ideas. Was that so hard?
There will always be refuseniks, people who will not consider any fact or experience that challenges their settled opinion. Knowing that an honest search for truth was out there being employed by their neighbors might bother them a little. That could mean better conversations.
For now, maybe that’s all we can hope for.
This article was written in collaboration with Andrew Green of Broad Reach Growth.
Fascinating to watch the warring claims to truth between Russia and the West in the past few months, or rather how that war is waged. Maybe the answer is a distributed network of critical thinkers: https://bit.ly/3sbL85P
From Poynter: “Fact-checkers use automation to maximize their impact.” The future comes at us fast: https://bit.ly/35KUf1A
Elizabeth Seger and her colleagues at Cambridge University recently coined the term “epistemic security” to describe the condition of truth well guarded. “Even if it was clear how to save the world,” they write, “a degraded and untrustworthy information ecosystem could prevent it from happening.” https://bbc.in/2Z4DFq0