The real danger of AI

Many pundits are worried about robots taking over the earth, enslaving humans (believing them to be an inferior race), causing wars, or at least becoming so indispensable in daily life that they effectively control our future. We're seeing more and more talk about "responsible AI", "AI for good", and "ethical AI", and even Isaac Asimov's three laws of robotics are being discussed academically.

These are perhaps cogent topics for discussion, but they overlook the true and immediate danger of AI as it is used today, namely its polarizing effect on our societies. In many places we are seeing the populace split into two non-interacting camps: the Trump lovers vs. the Trump loathers, the left vs. the right in Europe, and the Bibi-ists vs. the anti-Bibi-ists in Israel. Often these two camps are of approximately equal size, producing divisive elections ultimately decided by small random fluctuations.

My contention here is that this situation is a direct result of AI. No, not a sinister plot of super-intelligent AI masterminding a takeover of the world, merely the collateral damage of AI being used for ostensibly social purposes.

Let me explain. Social networks (e.g., Facebook and Twitter) have become an essential element of society. They have replaced traditional journalism as a major factor in shaping the opinions of a large part of the population. And social networks use AI for two things: to advertise products and, more importantly, to keep people "on platform", that is, to keep them "hooked".

These two uses of AI are similar: they employ a user's previous behavior to predict his or her interests, and then serve up content tailored to that specific user. That is why social networks are such strong advertising tools: they can predict what the user desires before the user even knows of the desire. And that is why social networks are so addictive (explaining my use of the word "user").

A necessary byproduct of this is that AI decides not only which advertisements the user sees, but which posts, blogs, and news items are presented. And this means that a person with rightist views will never see posts from even slightly left of center, only views more or less to the right (you can replace rightist with leftist, Trumpist, Bibi-ist, or whatever you like).

Over time, this exposure to views belonging exclusively to one camp leads a person, even one initially only slightly off-center, to believe not only that these views make sense, but that they are the only possible views. More than being separated geographically, the two opposing camps are separated socially. And the distance between them widens over time.

To study this effect I ran a series of simulations. I started with a population whose views were Gaussian-distributed around the center of the spectrum. I then let random people "tweet", exposing only people on the same half of the spectrum to each tweet. Each tweet slightly influenced the views of those exposed to it, moving them in the direction of the tweeter's stance.
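The dynamics described above can be sketched as a toy model. This is my own reconstruction, not the author's actual code; the population size, number of tweets, step size, and opinion scale are all assumed parameters.

```python
# Toy polarization model: Gaussian initial opinions, each "tweet" seen only
# by people on the tweeter's half of the spectrum, each viewing nudging the
# viewer slightly toward the tweeter's stance.
import random

def simulate(n_people=200, n_tweets=10000, step=0.01, seed=0):
    rng = random.Random(seed)
    # Initial views: Gaussian around the center of the spectrum.
    views = [rng.gauss(0.0, 0.25) for _ in range(n_people)]
    for _ in range(n_tweets):
        tweeter = rng.randrange(n_people)        # a random person tweets
        stance = views[tweeter]
        for i in range(n_people):
            # Only people on the tweeter's half of the spectrum see the tweet,
            # and each viewer moves a small step toward the tweeter's stance.
            if i != tweeter and (views[i] >= 0) == (stance >= 0):
                views[i] += step * (stance - views[i])
    return views

views = simulate()
# Under these assumptions the center empties out: each half of the
# population converges toward its own camp, away from zero.
near_center = sum(1 for v in views if abs(v) < 0.05)
```

Note the design choice: moving a viewer toward a same-sign stance can never flip the viewer's sign, so camp membership is fixed from the start, matching the observation below that the relative camp sizes reflect the initial distribution.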

The results are startling. Almost immediately the center of the political spectrum disappears, and the gap between the camps widens significantly over time. No matter how little each tweet influences those viewing it, the accumulated effect is inexorable. After enough time has elapsed the camps are worlds apart, with no possibility of communication between them, let alone of someone switching camps.

[Figures: the opinion distribution initially, after a short time, and after a long time]


In these simulations we see that the extremes also shrink, but slightly different assumptions (e.g., that people with extreme views tweet more frequently or are more persuasive, or that people are exposed only to tweets more radical than their own views) change that aspect.

Interestingly, the loss of the center is so fast that the relative sizes of the two groups remain at their initial values. So, starting with an unbiased Gaussian, the two camps end up equal in size; had there been an initial offset, the camp sizes would reflect that offset.

Occasional exposure to tweets from the opposing camp may lead to occasional jumps from camp to camp, but it does not avert polarization. It would seem that the only solution is to turn off the AI. But the people running the social networks have become so addicted to AI that there is little chance of that happening any time soon.


Baruh Hason

Senior System Architect on cellular and wireless systems

4y

Thanks for modeling and simulating the obvious (but not seen so clearly). A few intuitive additions: the reduction of the extreme tails has another impact, in that creative thinking is practically suppressed, for good or for bad. And if the choice of side is not random (as in your simulation) but biased, I guess the resulting bipartite distribution will also be biased (social engineering).

Alex Byrley

Senior Radar Engineer | Ph.D. in Electrical Engineering

4y

I think you are very right about this issue. Tristan Harris (https://www.tristanharris.com/) has also said much about this topic. I think you'd like his stuff.


Very interesting. How much stronger or faster is the AI influence compared to old-fashioned preferences, e.g., people reading a left- or right-leaning newspaper, or watching a left-wing or right-wing news channel? Socializing with the same people might also cause the same groupthink. Any way to tell?

Alexander Vainshtein

Principal System Architect at Ribbon Communications - IP/Optical Division

4y

Very convincing! Congratulations, Yaacov!

