End filter bubbles or nobody can win - A new way to save compromise?
Frank Bilstein
Founder of Donanto Charitable Foundation | Partner emeritus at Kearney
Filter bubbles, supercharged by social network sites and digital news platforms, are widely seen as a problem. Yet somehow, everyone believes that only “the others” are blinded by them. In this article, I will argue that filter bubbles may very well be humankind's #1 challenge. The time has come to end this unintended but destructive consequence of artificial intelligence. I will show why simple regulation will not fix filter bubbles and suggest a concrete solution.
Take a moment and think back on the last ten years of your life: What has been your biggest personal learning? For me, it has been the value of compromise. I have seen too many of my tried-and-true convictions refuted, or at least moderated, in real life. Take the minimum wage, for example: neo-classical theory told us it would only cut low-wage employment. It turns out reality is a lot more complex on the effects of minimum wage. It turns out reality is a lot more complex on a lot of things! Sorry for being a slow learner, but I finally started to understand why resilient societies are built on facilitating, and sometimes enforcing, compromise.
So how do we improve our ability to reach consensus? Recently, I came across a video of an experiment that really rocked my world: people in the street were asked for their opinions on a number of contested issues (such as “violence used by Israel against Hamas is - or is not - morally defensible”). When showing the respondent her answers at the end of the short interview, the interviewer used a simple trick to present the opposite answers instead, asking the respondent to elaborate. What do you think happened?
My guess would have been that the interviewers were beaten up in the street, but no! “A full 53% of the participants argued unequivocally for the opposite of their original attitude” (Hall, Johansson, and Strandberg, 2012).
Obviously, facts did not trigger this sudden change of heart, because facts are still subject to our very subjective interpretation: in their famous study “They Saw a Game: A Case Study,” the psychologists Albert Hastorf and Hadley Cantril found that when the exact same film of a college football game was shown to a sample of undergraduates at each of the two opposing schools, each side perceived a different game, and their version of the game was just as “real” to them as other versions were to other people.
What these experiments show is that our attitudes towards alternative viewpoints matter if we want to compromise. The good news: those attitudes can be shaped (see also Leeper, 2014). Public broadcasting (for all its deficiencies) has tried this for decades, at least to some degree, e.g., in the UK, Germany, and Japan.
Yet compromise is joining the list of endangered species these days. Polarization has been on the rise for decades, but it seems to have become a challenge of global proportions.
What is wrong with polarization, you may wonder? There is evidence that a polarized environment “decreases the impact of substantive information”. In other words, facts no longer matter; party lines do. (Interestingly, scientific literacy does not inoculate against extreme viewpoints, while scientific curiosity - aka an open mind - seems to help.)
Still not concerned? Some say that the laissez-faire COVID-19 response in some countries, the Brexit referendum, and of course the U.S. presidential elections since 2016 have been shaped by the polarization that is fueled by “filter bubbles” on social network sites. (I will not withhold the potential counterargument that polarization has increased particularly in age groups that are less likely to use the internet.)
Runaway polarization risks political deadlock, resulting in more global warming, more poverty, and more violent fights over the distribution of wealth, water, and healthcare. It can also lead to more autocratic societies. It can lead to more hunger, violence, hatred, distrust, depression, and death. That’s why it is worth taking a closer look:
“Filter bubbles” (aka “echo chambers”) describe the increasing probability of only encountering content and opinions that confirm the views one already holds.
A fascinating, data-driven analysis by Mark Ledwich shows the traffic flows between various political YouTube channels as suggested by the site itself. Apparently, social network sites have inadvertently fueled the growth of filter bubbles not only by providing an efficient means of content distribution for basically everyone (it is not without irony that you are most likely reading this article on a social network site) but primarily by using machine learning algorithms that dramatically exacerbate the problem. This is at the heart of the matter, so we need to dig deeper.
If you have not yet seen Netflix’s “The Social Dilemma” documentary, let’s take a brief look under the hood of your news feed. It’s worth spending a minute on the fundamentals of this phenomenon. Please bear with me and make an effort to understand this - it is important, really important. (Why? Because politicians around the world have so far seemingly not taken the time to understand it and have consequently failed to act effectively!)
Naturally, digital media sites want us to keep reading; they want us to stay engaged. This is a perfectly legitimate objective for any commercial website, because user engagement = time on the site = ultimately ad revenue. Since each of us responds differently to different pieces of content, they tailor each and every news feed. To do this for millions of different users, social network sites use deep learning algorithms. These algorithms are constantly trained to predict the potential user engagement of every piece of content in your news feed. Training works like this: the content, language, and visual information of a post serve as input, and actual user engagement (comments, shares, likes, etc.) is measured as the desired outcome. Based on this closed feedback loop, the algorithms continuously learn from real-life data what drives user engagement. This works just like Google being able to predict whether or not a picture shows a cat or a traffic light using examples that have been categorized by a human (“supervised learning”).
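To make this closed feedback loop more tangible, here is a minimal, purely illustrative Python sketch (using scikit-learn; the posts and engagement numbers are invented, and real platforms use vastly larger deep learning models rather than this toy regression):

```python
# Minimal sketch of supervised engagement prediction (illustrative only).
# Post text is the input; observed engagement (likes + shares + comments)
# is the label the model learns to predict.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Toy training data: past posts and the engagement they actually received.
past_posts = [
    "Outrageous scandal: you won't believe what they did!",
    "City council publishes quarterly budget report",
    "Shocking truth THEY don't want you to know",
    "Local library extends weekend opening hours",
]
observed_engagement = [950, 12, 870, 25]  # e.g., likes + shares + comments

# Train the predictor on the closed feedback loop of past behavior.
model = make_pipeline(TfidfVectorizer(), Ridge())
model.fit(past_posts, observed_engagement)

# Rank new candidate posts for a user's feed by predicted engagement.
candidates = [
    "New study finds nuanced effects of the minimum wage",
    "Unbelievable outrage: the shocking scandal nobody reports",
]
scores = model.predict(candidates)
for post, score in sorted(zip(candidates, scores), key=lambda x: -x[1]):
    print(f"{score:7.1f}  {post}")
```

The crucial point is that nothing in this loop cares what the content says - only how strongly people react to it.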
And this is the key reason why it is wishful thinking to assume that social network sites will fix this themselves: these algorithms are one of the cornerstones, perhaps THE key ingredient of their ongoing success!
One of the key triggers of user engagement is fake news, because it travels “farther, faster, deeper, and more broadly than the truth” (as shown in the landmark study by Vosoughi, Roy, and Aral, 2018). That’s why it is prioritized by the algorithms. But fake news is just one element of the problem. More importantly, extreme political views that reinforce users’ own opinions presumably follow the same path. That’s how they contribute to dangerous filter bubbles. Make no mistake: social network sites are actively fighting fake news on various fronts, for example by restricting the activity of bots and adding friction to sharing certain news. But they would never abandon the core reinforcement logic that drives their news feed algorithms.
It seems like a classic “prisoner’s dilemma”: each social network site has an overwhelming incentive to use these algorithms because everybody else does it, too.
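To see the dilemma in numbers, here is a deliberately simplified sketch with invented payoffs: whatever the competitor does, each platform individually earns more by sticking with the engagement-maximizing algorithm, even though both would be better off if both moderated.

```python
# Hypothetical payoffs (ad revenue, arbitrary units) for two platforms, A and B.
# Each independently chooses an "engagement-max" or a "moderate" feed algorithm.
payoffs = {                       # (A's payoff, B's payoff)
    ("engagement-max", "engagement-max"): (5, 5),
    ("engagement-max", "moderate"):       (9, 2),
    ("moderate",       "engagement-max"): (2, 9),
    ("moderate",       "moderate"):       (7, 7),
}

# Whatever B does, A earns more by choosing engagement-max (9 > 7 and 5 > 2),
# and the same holds for B -- so both end up at (5, 5) instead of (7, 7).
for b_choice in ("engagement-max", "moderate"):
    best_a = max(("engagement-max", "moderate"),
                 key=lambda a: payoffs[(a, b_choice)][0])
    print(f"If B plays {b_choice:15s} -> A's best response: {best_a}")
```

Under these invented numbers, each platform’s individually rational choice traps both in the worse joint outcome.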
The only way out? You guessed it. Someone has to force all of them to change. Enter government regulation.
However, in the past few years, much of the public debate and regulatory action has focused on the “fake news” aspect. For example, this year France decided to establish a new agency to fight fake news coming from foreign sources (if you wonder what the 60 people initially assigned to this job can achieve, I am asking myself the same question, especially when you look at Facebook’s 10,000+ staff fighting illegal content…). Ahead of the federal elections in Germany, Facebook is running an ad campaign on how it is fighting fake news.
Why the focus on fake news? Here is my little piece of conspiracy theory: social network sites steer the discussion toward fake news because that decoy is something they can actually address. Few people seem to get that fake news is just a symptom of the underlying machine learning algorithms. Fixing fake news will not fix filter bubbles.
This April, the European Commission issued draft legislation on artificial intelligence and suggested “a regulatory framework for high-risk AI systems only”. However, the artificial intelligence that governs our news feeds on social network sites did not make it onto the list of “prohibited” or “high-risk” AI systems (as outlined in Annex III), at least not yet. That needs to be fixed a.s.a.p. Also, the regulatory actions suggested (“requirements for high-risk AI systems”) are very generic and focus on risk management procedures, leaving plenty of room for interpretation. If social network sites’ algorithms were to be added to the high-risk list, I would not be surprised to see this hashed out in courts for decades before anything happens.
We don’t have that much time anymore. We need to be much more specific when we, the citizens, address this key threat to consensus, and we need to do this now.
A Counter-Algorithm for Content Display
Imagine a world…
We want to explore a solution that uses the power of machine learning instead of trying to fight or destroy it.
For decades, science has been developing procedures that effectively achieve consensus and change minds (Janis and King, 1954; more recently and very relevantly, Navajas et al., 2019, and the corresponding TED Talk). Why should it not be possible to automate this and integrate it into the digital world? One challenge is that these concepts mostly rely on interpersonal contact. However, experts hypothesize that limited tweaks to algorithms may be sufficient to “limit the filter bubble effect without significantly affecting user engagement”.
Let’s summarize the scientific evidence on what we need to gain consensus:
Our starting idea is simple: to gain consensus, we need to learn to embrace counter-arguments. But - and this is a fairly new and big “but” - research suggests that simply being exposed to counter-arguments in your news feed actually increases polarization instead of decreasing it (I routinely force myself to read articles in the Fox News app, and I am living proof of that effect). This probably happens because such content is mainly addressed to in-group peers. Consequently, it tends to be extreme and insulting to dissenting opinions, because this drives engagement and group-think. However, this naturally also decreases the likelihood of convincing others. As we learned from Navajas et al. (2019), moderate opinions are much more likely to win over other people's opinions.
Instead of simplistic rules and generic regulations like the one suggested by the European Commission, we suggest harnessing the predictive, self-optimizing intelligence of machine learning. This is what we think will work:
Interested in the details? Here is how AI veteran and expert Frank Buckler describes it:
What we now suggest is to include more information:
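As a purely hypothetical sketch of the general direction - blending the usual engagement prediction with a second predicted score for how moderate or consensus-building a post is - consider the following Python snippet. The names, scores, and weights are illustrative assumptions only, not the actual specification of the proposal:

```python
# Hypothetical sketch of a "counter-algorithm": blend the usual engagement
# score with a second predicted score (e.g., how moderate or consensus-
# building a post is). All names and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # from the existing engagement model
    predicted_moderation: float  # from a second model trained on, e.g.,
                                 # human-labeled tone or cross-group approval

def feed_score(post: Post, alpha: float = 0.7) -> float:
    """Blend both objectives; alpha is the tunable policy parameter."""
    return alpha * post.predicted_engagement + (1 - alpha) * post.predicted_moderation

candidates = [
    Post("Outraged rant about the other side", 0.95, 0.05),
    Post("Balanced explainer on minimum-wage evidence", 0.60, 0.90),
    Post("Cat video", 0.80, 0.50),
]

# Lowering alpha shifts the feed toward moderate, consensus-building content
# without discarding engagement altogether.
for post in sorted(candidates, key=lambda p: feed_score(p, alpha=0.5), reverse=True):
    print(f"{feed_score(post, 0.5):.2f}  {post.text}")
```

In such a setup, the second model could be trained with the same supervised-learning machinery described above, just with a different label.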
Is there a better solution?
Let's summarize other potential solutions under discussion:
This Article is Useless
…unless you comment and share it.
My intention in writing this article is to explore how we can change the world for the better. I want to directly influence policy-making on digital media. However, no article alone can achieve this. Only if readers comment and share this article, only if it goes viral, will it ever have a chance to matter.
This is why I am asking you to comment and share your view.
This is why I am asking you to share this article as broadly as possible.
If you think this article is bogus, PLEASE COMMENT.
If you think more people should read this article, PLEASE SHARE.
If you agree with my conclusion that we need a smart solution like a counter-algorithm to save our world, PLEASE SHARE.
In any case, make up your own mind, but always remain curious.
Leading Data Strategy & Governance - Pernod Ricard Western Europe
This makes a lot of sense.
Managing Director at CREST Olympiads | Ex-Kearney
Well-written article, Frank. You have taken one aspect of AI that impacts our lives. Being an AI professional myself, I strongly believe that we don't really need the next wave of AI - it will have more negative impact than positive. However, I am not sure if there's a way out :-(
Managing Director at BTS GmbH - Business Transformation Services
Our world is currently plagued by climate change, autocratic regimes, and growing social inequality (just to name a few fundamental issues). Given these trends, the claim that filter bubbles “may very well be humankind’s #1 challenge” seems a bit far-fetched. There is little doubt that filter bubbles do exist in social media and that they have a detrimental impact on rational social discourse. But this impact should not be overestimated either.
Let’s have a look at some facts regarding online media consumption (https://reutersinstitute.politics.ox.ac.uk/risj-review/truth-behind-filter-bubbles-bursting-some-myths): In the UK, for instance, roughly a quarter of the population uses social media as their preferred access channel to news, while more than half of the people go directly to their preferred news page or use search to find news (see this research). Search also uses algorithms, but these are not personalized (apart from geolocation), therefore this channel does not contribute to filter bubbles (try it out: the Google news boxes mainly show mainstream media). The more or less conscious selection of news brands (directly or via search) is still the dominant approach to receiving news. Of course, the fact that already a quarter of the population is exposed to biased content seems like a legitimate concern - especially with regard to younger cohorts, where social media consumption has an even higher share. But wait, the younger cohort does not seem to be the main driver of polarization on the net (as Frank & Frank also concede in their article). And another impact-related key question is: How many of the hard-core social media users are really only exposed to the filter bubble without any exposure to alternative views, be it offline or online? How big is this group really?
So, it is not at all clear that the filter bubble effect really explains the polarization of the public discourse, e.g. in the US but also in Germany (especially with regard to the AfD and the Querdenker movement). There are good arguments that deeper underlying economic and sociocultural forces have driven this process (see e.g. Axel Bruns' book "Are Filter Bubbles Real?"). Social media echo chambers might exacerbate this development, but they are not its root cause.
Furthermore, I would argue that filter bubbles are not an invention of social media but have been present since the beginning of the news industry. True, in the 19th century mainstream media brands emerged (mainly for commercial reasons, by the way). But the political party press continued to have self-reinforcing filtering mechanisms. Just take the Weimar Republic as an example, where especially fascist media (but not only) created their own bubble, drawing more and more people into their distorted world view. Digital social networks do not cause this kind of process, but make it more efficient.
Another argument (https://www.vice.com/de/article/pam5nz/deshalb-ist-filterblase-die-blodeste-metapher-des-internets) that should be considered in this context is that the filter mechanism resides in the user’s head rather than in the network. As studies show, even hard-core social media users are confronted with mainstream media and thus with alternative versions of the truth. But they choose to ignore this information. In my own experience, AfD proponents are often aware of the counterarguments and opposing facts, but they just do not believe them. They simply refuse to accept the facts.
Why these people developed this mental disposition is a deep question that goes far beyond the filter-bubble mechanism.
Nevertheless, I find it legitimate to consider regulatory measures. However, as with any regulation, it should not cause more harm than good. As far as I understand the proposal of the two Franks, the idea is to create an arithmetic neutralization mechanism: e.g., a Querdenker post should be countered with, say, a sports article (if the reader prefers sport), so that the user is more inclined to read the politically correct sports article instead of reading the extremist content piece. The basic question here is: Where does censorship begin? Some people might feel really uneasy if they discover that the state “nudges” their personal information preferences.
What seems more reasonable to me at first sight is a regulatory diversity requirement. According to this approach, the content delivery algorithm should be set up in such a way that the user is exposed to different content sources. Leaving the problem of classifying content sources aside, this would of course imply that mainstream users would also receive extremist content. This could even extend the reach of extremist content. So, be careful what you wish for. This could be just another example of good intentions and unintended consequences.
Partner @ Ntsal | Driving Growth & Commercial Excellence
It reminds me of a short video that I saw recently where a guy googles studies on why coffee could make you blind, and then studies on why coffee improves your eyesight. It is also challenging when I think about how to raise the next generation and explain the difficult playing field that we are leaving them with.
Creating synergies, developing potential
I am not exactly sure how the use of these counter-algorithms would be enforced. It sounds like a good idea on paper, but don't we encounter the same enforcement/skirting issues as with other policy interventions? Who is able to check whether the counter-algorithm really does a good (enough) job? Or whether enough resources are devoted to its development?