End filter bubbles or nobody can win - A new way to save compromise?
by Frank Bilstein and Frank Buckler, PhD


Filter bubbles, supercharged by social network sites and digital news platforms, are widely seen as a problem. But somehow, everyone believes that only “the others” are blinded by them. In this article, I will illustrate that filter bubbles may very well be humankind's #1 challenge. The time has come to end this unintended but destructive consequence of artificial intelligence. I will show why simple regulation will not fix filter bubbles and suggest a concrete solution.

Take a moment and think back on the last ten years of your life: What has been your biggest personal learning? For me, it has been the value of compromise. I have seen too many of my tried-and-true convictions refuted or at least moderated in real life. Take, for example, the minimum wage: Neo-classical theory told us it would only cut low-wage employment. Turns out reality is a lot more complex on the effects of minimum wage. Turns out reality is a lot more complex on a lot of things! Sorry for being a slow learner, but I finally started to understand why resilient societies are built on facilitating and sometimes enforcing compromise.

So how do we improve our ability to gain consensus? Recently, I came across a video of an experiment that really rocked my world: People in the street were asked about their opinions on a number of contested issues (like “violence used by Israel against Hamas is - or is not - morally defensible”). When showing the respondent her answers at the end of the short interview, the interviewer used a simple trick to present her with the opposite answers, asking the respondent to elaborate. What do you think happened?

My guess would have been that the interviewers were beaten up in the street, but no! “A full 53% of the participants argued unequivocally for the opposite of their original attitude” (Hall, Johansson, and Strandberg, 2012).

Obviously, facts did not trigger this sudden change of heart, because facts are still subject to our very subjective interpretation: In their famous study "They Saw a Game: A Case Study," the psychologists Albert Hastorf and Hadley Cantril found that when the exact same motion picture of a college game was shown to a sample of undergraduates at each opposing school, each side perceived a different game, and their versions of the game were just as "real" as other versions were to other people.

What these experiments show is that our attitudes towards alternative viewpoints matter if we want to compromise. Good news: Those attitudes can be shaped (as also shown by Leeper, 2014). Public broadcasting (for all its deficiencies) has tried this for decades, at least to some degree, e.g., in the UK, Germany, and Japan.

Yet compromise is joining the list of endangered species these days. Polarization has been on the rise for decades, but it seems to have become a challenge of global proportions.

What is wrong with polarization, you may wonder? There is evidence that a polarized environment “decreases the impact of substantive information”. In other words, facts no longer matter, party lines do. (Interestingly, scientific literacy does not inoculate against extreme viewpoints, while scientific curiosity - aka an open mind - seems to help.)

Still not concerned? Some say that the laissez-faire COVID-19 response in some countries, the Brexit referendum, and of course U.S. presidential elections since 2016 have been shaped by the polarization that is fueled by “filter bubbles” on social network sites. (I am not going to withhold the potential counterargument that polarization has particularly increased in age groups that are less likely to use the internet.)

Runaway polarization risks political deadlock, resulting in more global warming, more poverty, more violent fights for the proper distribution of wealth, water, and healthcare. It can also lead to more autocratic societies. It can lead to more hunger, violence, hatred, distrust, depression, and death. That’s why it is worth taking a closer look:

“Filter bubbles” (aka “echo chambers”) describe the increasing probability of

  1. you being exposed only to news that fits your current worldview, and
  2. your personal news feeds becoming more and more extreme.

A fascinating, data-driven analysis by Mark Ledwich shows the traffic flows between various YouTube political channels suggested by the site itself. Apparently, social network sites have inadvertently fueled the growth of filter bubbles not only by providing an efficient means of content distribution for basically everyone (it is not without irony that you most likely read this article on a social network site) but primarily by using machine learning algorithms that dramatically exacerbate the problem. This is the heart of the matter, so we need to dig deeper.

If you have not yet seen Netflix’s “The Social Dilemma” documentary, let’s take a brief look under the hood of your news feed. It’s worth spending a minute on the fundamentals of this phenomenon. Please bear with me and make an effort to understand this - it is important, really important. (Why? Because politicians around the world have so far seemingly not taken the time to understand it and have consequently failed to act effectively!)

Naturally, digital media sites want us to keep reading; they want us to stay engaged. This is a perfectly legitimate objective for any commercial website because user engagement = time on the site = ultimately ad revenue. Since each of us responds differently to different pieces of content, they tailor each and every news feed. To do this for millions of different users, social network sites use deep learning artificial intelligence algorithms. These algorithms are constantly trained to predict the potential user engagement of every piece of content in your news feed. Training works like this: They take the content, language, and visual information of a post as input, and they measure actual user engagement (comments, shares, likes, etc.) as the desired outcome. Based on this closed feedback loop, the algorithms continuously learn what drives user engagement from real-life data. This works just like Google being able to predict whether or not a picture shows a cat or a traffic light using examples that have been categorized by a human (“supervised learning”).
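To make this feedback loop tangible, here is a deliberately tiny, hedged sketch of supervised engagement prediction. It is an illustration only, not any platform's actual system: the example posts, the labels, and the choice of a bag-of-words logistic regression (standing in for a deep network over text, images, and user history) are all hypothetical.

```python
# Minimal, illustrative sketch of supervised engagement prediction.
# Real platforms train deep networks on text, images, and user history;
# here a logistic regression over bag-of-words features stands in for that.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: post text plus whether this user engaged (1) or not (0).
posts = [
    "Breaking: shocking claim about the opposing party",
    "Local library extends opening hours",
    "Outrageous take on the minimum wage debate",
    "City council publishes annual budget report",
]
engaged = [1, 0, 1, 0]  # observed engagement is the label in supervised learning

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(posts)            # content features
model = LogisticRegression().fit(X, engaged)   # learn what this user engages with

# Predict the engagement likelihood of a new candidate post for the feed.
candidate = ["Another scandal rocks the other side"]
likelihood = model.predict_proba(vectorizer.transform(candidate))[0, 1]
print(f"Predicted engagement likelihood: {likelihood:.2f}")
```

A real system does the same thing at vastly larger scale: observed engagement is the label, content is the input, and the model continuously re-estimates what keeps each individual user hooked.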

And this is the key reason why it is wishful thinking to assume that social network sites will fix this themselves: these algorithms are one of the cornerstones, perhaps THE key ingredient to their ongoing success!

One of the key triggers of user engagement is fake news, because it travels “farther, faster, deeper, and more broadly than the truth” (as shown in the landmark study by Vosoughi, Roy, and Aral, 2018). That’s why it is prioritized by algorithms. But fake news is just one element of the problem. More importantly, extreme political views that reinforce users’ own opinions presumably follow the same path. That’s how they contribute to dangerous filter bubbles. Make no mistake: Social network sites are actively fighting fake news on various fronts, like restricting the activity of bots and adding friction to sharing certain news. But they would never abandon the core reinforcement logic that drives their news feed algorithms.

It seems like a classic “prisoner’s dilemma”: Each social network site has an overwhelming incentive to use these algorithms because everybody else does it, too.

The only way out? You guessed it. Someone has to force all of them to change. In comes government regulation.

However, in the past few years, much of the public debate and regulatory action has focused on the “fake news” aspect. For example, this year, France decided to establish a new agency to fight fake news coming from foreign sources (if you wonder what the 60 people initially assigned to this job can achieve, I am asking myself the same question, especially when you look at Facebook’s 10,000+ staff fighting illegal content…). Ahead of the federal elections in Germany, Facebook is running an ad campaign on how they are fighting fake news:

[Image: Facebook’s ad campaign on fighting fake news]

Why the focus on fake news? Here is my little piece of conspiracy theory: Social network sites focus the discussion on fake news because that decoy is something that they can actually address. Few people seem to get that fake news is just a symptom of the underlying machine learning algorithms. Fixing fake news will not fix filter bubbles.

This April, the European Commission issued draft legislation on artificial intelligence and suggested “a regulatory framework for high-risk AI systems only”. However, the artificial intelligence that governs our news feeds on social network sites did not make it onto the list of “prohibited” or “high-risk AI systems” (as outlined in Annex III), at least not yet. That needs to be fixed a.s.a.p. Also, the regulatory actions suggested (“requirements for high-risk AI systems”) are very generic and focus on risk management procedures, leaving plenty of room for interpretation. If social network sites’ algorithms were to be added to the high-risk list, I would not be surprised to see this hashed out in courts for decades to come before anything happens.

We don’t have that much time anymore. We need to be much more specific when we, the citizens, address this key threat to consensus and we need to do this now.

A Counter-Algorithm for Content Display

Imagine a world…

  • where digital media still give you the exciting content that you (don’t know you) want to see - but at the same time, they expose you to insights that challenge your existing beliefs in a constructive, effective manner,
  • where social media fosters the effective exchange of ideas and debate by incentivizing respectful language,
  • where citizens still have diverging interests, perceptions, and opinions, but are enabled to explore solutions that serve most of us.

We want to explore a solution that uses the power of machine learning instead of trying to fight or destroy it.

For decades, science has been developing procedures that effectively achieve consensus and change minds (Janis and King, 1954; recent and very relevant: Navajas et al., 2019, and the corresponding TED Talk). Why should it not be possible to automate this and integrate it into the digital world? One challenge is that these concepts mostly rely on interpersonal contact. However, experts hypothesize that limited tweaks to algorithms may be sufficient to “limit the filter bubble effect without significantly affecting user engagement”.

Let’s summarize the scientific evidence on what we need to gain consensus:

[Image: summary of the scientific evidence on gaining consensus]

Our starting idea is simple: To gain consensus, we need to learn to embrace the counter-arguments. But - and this is a fairly new and big “but” - research suggests that simply being exposed to counter-arguments in your news feed actually increases polarization instead of decreasing it (I routinely force myself to read articles in the Fox News app and I am living proof of that effect). This probably happens because such content is mainly addressed to in-group peers. Consequently, this content tends to be extreme and insulting to dissenting opinions, because this drives engagement and group-think. However, this naturally also decreases the likelihood of convincing others. As we learned from Navajas et al. (2019), moderate opinions are much more likely to win over other people's opinions.

Instead of simplistic rules and generic regulation like the one suggested by the European Commission, we suggest harnessing the predictive, self-optimizing intelligence of machine learning. This is what we think will work:

  1. The existing algorithms that govern the news feed stay untouched. This is necessary for any platform to remain engaging. Without these algorithms, any platform eventually becomes worthless because most content will be irrelevant for us. They fill our echo chamber with “filter bubble content”.
  2. Now we need to add “counter-content” that effectively challenges our current beliefs (which are already reinforced by “filter bubble content”). How does this work? As described above, deep learning algorithms are trained to predict the engagement of any piece of content. The same algorithms can also predict whether or not a piece of “counter-content” decreases the likelihood of engaging with “filter bubble content”.
  3. The power of artificial intelligence will find persuasive tactics we may not even be aware of today. Think of it as two algorithms constantly hashing it out. Those algorithms can become much more effective than any televised U.S. presidential debate. Why? Because this algorithm will be trained not just to mobilize its own followers but also to convince the followers of the other side. (A sketch of how such a feed could be assembled follows right after this list.)
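To make the three steps above concrete, here is a hedged sketch of how a feed could be assembled under this proposal. Everything in it (the function names, the pairing rule, the helper signatures) is a hypothetical illustration, not an existing platform API or the authors' exact specification.

```python
# Hypothetical sketch of assembling a feed under the proposal above:
# step 1 leaves the existing engagement ranker untouched, step 2 pairs each
# "filter bubble" item with algorithmically chosen counter-content.
def assemble_feed(user, candidate_posts, engagement_ranker, counter_selector, feed_length=10):
    # Step 1: the existing algorithm ranks content by predicted engagement.
    ranked = sorted(candidate_posts, key=lambda c: engagement_ranker(c, user), reverse=True)
    feed = []
    for c in ranked[:feed_length]:
        feed.append(c)
        # Step 2: attach counter-content chosen to challenge, not amplify, c.
        cc = counter_selector(c, user)
        if cc is not None:
            feed.append(cc)
    return feed
```

Step 3, the contest between the two algorithms, happens implicitly: the engagement ranker keeps optimizing for attention, while the counter-selector keeps optimizing for effective, non-alienating challenge.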

Interested in the details? Here is how AI veteran and expert Frank Buckler describes it:

  • P denotes a person so that the algorithm can adapt to her interests.
  • Let C be a set of information that describes a piece of content by using its text and visual information (“filter bubble content”).
  • L(C, P) is the likelihood that person P will engage with content C and has to be maximized. The mathematical function that calculates L from C and P is today shaped by social media’s deep learning algorithms. It is not necessary to understand how they work. It is important to accept that they can estimate any unknown function that predicts L from C and P, provided P has interacted often enough with different kinds of content C in the past. The more the person interacts, the better the prediction becomes.

What we now suggest is to include more information:

  • Let CC be a set of information that describes a second piece of content that is exposed to the person simultaneously or in close succession (“counter-content”).
  • L(C | CC, P) is now the likelihood that P engages with C given that CC is shown; this likelihood has to be minimized.
  • Symmetrically, the engagement with CC given C is also minimized [min L(CC | C, P)]. This makes sure that the counter-content CC contradicts and does not further exaggerate content C. (A toy sketch of this selection step follows below.)
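As a toy illustration of this selection step, the sketch below assumes we already have two (hypothetical) trained models that estimate L(C | CC, P) and L(CC | C, P), and it simply picks the counter-content with the lowest combined score. The function names, the additive scoring rule, and the stub estimators are assumptions made for illustration, not part of the formulation above.

```python
# Toy sketch of counter-content selection: choose CC so that engagement with C
# given CC is low, and C does not boost engagement with CC either.
from typing import Callable, Sequence

def pick_counter_content(
    c: str,                                      # "filter bubble content" already chosen by the feed
    candidates: Sequence[str],                   # pool of possible counter-content CC
    l_c_given_cc: Callable[[str, str], float],   # hypothetical estimator of L(C | CC, P)
    l_cc_given_c: Callable[[str, str], float],   # hypothetical estimator of L(CC | C, P)
) -> str:
    """Return the CC that best dampens engagement with C without amplifying it."""
    def score(cc: str) -> float:
        # Lower is better: CC should reduce engagement with C, and CC should
        # contradict rather than exaggerate C (low engagement with CC given C).
        return l_c_given_cc(c, cc) + l_cc_given_c(c, cc)
    return min(candidates, key=score)

# Hypothetical usage with stub estimators standing in for the deep models:
stub_a = lambda c, cc: 0.4 if "however" in cc.lower() else 0.8
stub_b = lambda c, cc: 0.3
best_cc = pick_counter_content(
    "Extreme take reinforcing your current view",
    ["More of the same, only louder", "However, here is a moderate counter-argument"],
    stub_a,
    stub_b,
)
print(best_cc)  # prints the moderate counter-argument under these stubs
```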

Is there a better solution?

Let's summarize other potential solutions under discussion:

  • Outlaw filter algorithms: As described above, this would impair the usefulness of content platforms so severely that this functionality would likely reappear illegally and/or indirectly. The same would happen if we outlawed filter algorithms just for politics or tried to ban political posts altogether.
  • Introduce a mandatory “Driver’s License” (to use social network sites): While this may improve respectful language and help people to recognize fake news somewhat, it does not address the underlying problem: systematically misleading information and flawed learning through the selective presentation of information.
  • Increase support of public broadcasting: Unless public broadcasters use similar algorithms, they will never stand a chance against digital media platforms that supercharge their user engagement with deep learning algorithms.
  • Mandate generic risk management for deep learning algorithms (like the proposed EU directive): Since these algorithms are mission-critical for the platforms’ success, generic legislation that leaves plenty of room for interpretation will inevitably result in decades-long court battles. Introducing a mandatory code of conduct for platforms’ use of deep-learning algorithms is likely to have the exact same effect.

This Article is Useless

…unless you comment and share it.

My intention in writing this article is to explore how we can change the world for the better. I want to directly influence policy-making on digital media. However, no article alone can achieve this. Only if readers comment and share this article, only if it goes viral, will it ever have a chance to matter.

This is why I am asking you to comment and share your view.

This is why I am asking you to share this article as broadly as possible.

If you think this article is bogus, PLEASE COMMENT.

If you think more people should read this article, PLEASE SHARE.

If you agree with my conclusion that we need a smart solution like a counter-algorithm to save our world, PLEASE SHARE.

In any case, make up your own mind, but always remain curious.

Jens Bonerz

Leading Data Strategy & Governance - Pernod Ricard Western Europe

3y

This makes a lot of sense.

Nitin Godawat

Managing Director at CREST Olympiads | Ex-Kearney

3y

Well written article, Frank. You have taken one aspect of AI that impacts our lives. I myself, being an AI professional, strongly believe that we don't really need the next wave of AI - it will have more negative impact than positive. However, I'm not sure if there's a way out :-(

Abdi Scheybani

Managing Director at BTS GmbH - Business Transformation Services

3y

Our world is currently plagued by climate change, autocratic regimes and growing social inequality (just to name a few fundamental issues). Given these trends, the claim that filter bubbles “may very well be humankind’s #1 challenge” seems a bit farfetched. There is little doubt that filter bubbles do exist in social media and that they have a detrimental impact on rational social discourse. But this impact should not be overestimated either.

Let’s have a look at some facts regarding online media consumption (https://reutersinstitute.politics.ox.ac.uk/risj-review/truth-behind-filter-bubbles-bursting-some-myths): In the UK, for instance, roughly a quarter of the population uses social media as their preferred access channel to news, while more than half of the people go directly to their preferred news page or use search to find news (see this research). Search also uses algorithms, but these are not personalized (apart from geolocation), therefore this channel does not contribute to filter bubbles (try it out: the Google news boxes mainly show mainstream media). The more or less conscious selection of news brands (directly or via search) is still the dominant approach to receiving news. Of course, the fact that already a quarter of the population is exposed to biased content seems like a legitimate concern – especially with regard to younger cohorts, where social media consumption has an even higher share. But wait, the younger cohort does not seem to be the main driver of polarization on the net (as Frank & Frank also concede in their article). And another impact-related key question is: How many of the hardcore social media users are really only exposed to the filter bubble without any exposure to alternative views, be it offline or online? How big is this group really?

So, it is not at all clear that the filter bubble effect really explains the polarization of the public discourse, e.g. in the US but also in Germany (especially with regard to the AfD and the Querdenker movement). There are good arguments that deeper underlying economic and sociocultural forces have driven this process (see e.g. Axel Bruns' book "Are Filter Bubbles Real?"). Social media echo chambers might exacerbate this development, but they are not its root cause.

Furthermore, I would argue that filter bubbles are not an invention of social media but have been present since the beginning of the news industry. True, in the 19th century mainstream media brands emerged (mainly for commercial reasons, by the way). But the political party press continued to have self-reinforcing filtering mechanisms. Just take the Weimar Republic as an example, where especially fascist media (but not only) created their own bubble, drawing more and more people into their distorted world view. Digital social networks do not cause this kind of process, but they make it more efficient.

Another argument (https://www.vice.com/de/article/pam5nz/deshalb-ist-filterblase-die-blodeste-metapher-des-internets) that should be considered in this context is that the filter mechanism resides in the user’s head rather than in the network. As studies show, even hardcore social media users are confronted with mainstream media and thus alternative versions of the truth. But they choose to ignore this information. In my own experience, AfD proponents are often aware of the counterarguments and opposing facts, but they just do not believe them. They simply refuse to accept the facts. Why these people developed this mental disposition is a deep question that goes far beyond the filter-bubble mechanism.

Nevertheless, I find it legitimate to consider regulatory measures. However, as with any regulation, they should not cause more harm than good. As far as I understand the proposal of the two Franks, the idea is to create an algorithmic neutralization mechanism, e.g. a Querdenker post should be countered with, say, a sports article (if the reader prefers sport), so that the user is more inclined to read the politically correct sports article instead of reading the extremist content piece. The basic question here is: Where does censorship begin? Some people might feel really uneasy if they discover that the state “nudges” their personal information preferences.

What seems more reasonable to me at first sight is a regulatory diversity requirement. According to this approach, the content delivery algorithm should be set up in such a way that the user is exposed to different content sources. Leaving the problem of classifying content sources aside, this would of course imply that mainstream users would also receive extremist content. This could even extend the reach of extremist content. So, be careful what you wish for. This could be just another example of good intentions and unintended consequences.

Philipp Plettenberg

Partner @ Ntsal | Driving Growth & Commercial Excellence

3y

It reminds me of a short video that I saw recently where a guy googles studies on why coffee could make you blind, and studies on why coffee improves your eyesight. It is also challenging when I think about how to raise the next generation and explain this difficult playing field that we are leaving them with.

Jennifer Schenke

Creating synergies, developing potential

3y

I am not exactly sure how the use of these counter-algorithms would be enforced. It sounds like a good idea on paper, but don't we encounter the same enforcement/skirting issues as with other policy interventions? Who is able to check if the counter-algorithm really does a good (enough) job? If enough resources are devoted to its development?
