Artificial Intelligence: The Journey into the Unknown - PART III
Vidya Munde-Müller
OP-ED: Vidya Munde-Müller and Sascha Lambert
We are in the middle of a serious debate about whether human history is approaching a ‘singularity’. This is not science fiction anymore. AI technology has the potential to reshape our society within the next few decades. The reshaping has already begun as more and more automation finds its way into our everyday lives, e.g. the voice assistants always available on our smartphones. In this op-ed, the authors give their understanding of the singularity and of what kind of future may befall humanity. Rather than committing to one future or the other, they want to show different possibilities for AI's evolution using the lessons of human evolution.
AI: Utopia or Dystopia?
In the previous chapters (PART I and PART II) we discussed the possibility of creating AGIs or even a superintelligence and what that would entail. In this chapter, we will look into the consequences of creating such an AI. There are too many unknowns, as detailed in the previous chapter. What if the reward function of an AGI is not designed correctly? What about the many ethical issues associated with deploying AI technology to act autonomously? This is especially true when AI systems interact with each other rather than with humans. The chances of unpredictability increase if AIs can build other AIs or modify themselves. Recall the widely reported Facebook experiment in which two negotiation chatbots drifted into a shorthand that was no longer intelligible English. The press coverage was overblown (the researchers simply adjusted the training setup to keep the bots speaking English), but the episode illustrates how unexpected behavior can emerge when AI systems interact with each other.
The fear is that AIs could gain an advantage if they have access to the internet, using this knowledge to interpret what humans do, predict their future behavior and thus manipulate them. Additionally, by networking with one another, machines could learn from each other and expand their horizons. They would then no longer be specialized in a single problem but would gain knowledge of other areas as well. The machines could make the ‘mental leap’ of applying their own knowledge and any available contextual knowledge to unknown situations, or adapt to new situations the way humans do.
There are also potential malignant failure modes, in which an AI finds an unanticipated and pathological way to do what it was asked to do. Take the example of happy customers. An AI could quite literally try to make customers happy by lacing the products with some sort of drug, thus making humans more dependent on those products. A flaw in the reward function of a superintelligent AI could lead to catastrophe. Indeed, such a flaw could very well mean the difference between a utopian future and a dystopian one.
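To make this failure mode concrete, here is a minimal, hypothetical sketch in Python. Everything in it (the action names, the scores, the proxy metric) is invented for illustration; the point is only that an optimizer which sees nothing but a proxy metric, such as a customer-satisfaction score, will happily pick the action that games the proxy while defeating the designers' real intent.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    survey_score: float        # the proxy the reward function measures
    customer_wellbeing: float  # what the designers actually care about

# Hypothetical action set; all numbers are invented for illustration.
ACTIONS = {
    "improve_product":  Outcome(survey_score=0.70, customer_wellbeing=0.80),
    "honest_marketing": Outcome(survey_score=0.50, customer_wellbeing=0.60),
    "addictive_design": Outcome(survey_score=0.95, customer_wellbeing=0.10),
}

def proxy_reward(outcome: Outcome) -> float:
    # Flawed reward: it only sees the measurable proxy, not the intent.
    return outcome.survey_score

best = max(ACTIONS, key=lambda a: proxy_reward(ACTIONS[a]))
print(best)  # -> "addictive_design": the proxy is maximized, the goal is not
```

The gap between `survey_score` and `customer_wellbeing` is exactly the gap the authors describe: the program did what it was asked to do, not what was meant.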
"The real risk with AI isn't malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble. You're probably not an evil ant-hater who steps on ants out of malice, but if you're in charge of a hydroelectric green energy project and there's an anthill in the region to be flooded, too bad for the ants. Let's not place humanity in the position of those ants.” - Stephen Hawking
The Real and Imminent Danger of AI
Let us examine two of the most important dangers of AI: its impact on the military and on jobs. Naturally, many other fields are affected by AI, but these two present a clear and urgent danger. In the case of the military, the important question is what we can do today to prevent some power-hungry government, greedy corporation or even lone individual from creating, and then losing control of, an exponentially self-improving, resource-hungry AI. Governments worry that the other side will get to autonomous weapons first, and that fear tempts each side to race ahead. According to Schmidhuber, about 95 percent of all AI research is economic and about enhancing human life, making humans live longer, healthier and happier, so he does not worry about this scenario; benevolent AIs, he says, are ultimately good for business: “In principle, you shouldn’t worry about that because profits are in selling to you an AI that is good for you.”
The situation is more dire when it comes to the job market. Although there are some areas where humans still outperform machines, such as creativity and entrepreneurship, the impact on the job market could be far bigger, according to Andrew Ng and Jeff Bezos. In the past we could argue that new technology created as many jobs as it threatened: we shifted from agriculture to mechanization and automation, and towards service industries, education and healthcare. But with AI many more professions are vulnerable, and improvements in robotics threaten many manual-labor jobs as well.
On a positive note, this could also be an era of innovation and cultural expression in which people are no longer tied down by the need to work. But for that, politics and society have to open up new paths, such as a ‘Universal Basic Income’, to make it possible. Otherwise wealth, power and resources will accumulate in ever fewer hands.
Safeguards for Humanity
We need to take the risk of a malignant or rogue AI seriously, even if it is very small, because so much is at stake. Some portion of humanity’s resources should be devoted to studying the probable scenarios and finding ways to avoid them. This is the same common-sense strategy as insuring a house against fire, or against water damage if the house is near the waterfront.
If we want a safe superintelligence, then we could try limiting the AI’s power and tuning its reward function. We could intertwine the reward function with a set of moral and ethical constraints. Although this sounds very promising, it is very difficult to program moral constraints into algorithms; for that we first need to answer the question of how to teach machines morality and ethics. We also need to prevent machines from bypassing the imposed safeguards, because from the machine’s point of view a higher-ranked goal can justify eliminating these protections. Some researchers believe that one of the first things a superintelligent AI will do is protect itself, which might mean circumventing every safety function we implemented by rewriting its own code.
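One way to read ‘intertwining the reward function with moral constraints’ is to treat the constraints as a hard filter on actions rather than as a penalty term an optimizer could trade away. The sketch below is a deliberately toy illustration, with all action names and reward numbers invented; it does not solve the hard part, which is formalizing the constraints themselves.

```python
from typing import Callable

Action = str

# Toy stand-ins for moral constraints; formalizing real ethics is the hard,
# unsolved problem described in the paragraph above.
CONSTRAINTS: list[Callable[[Action], bool]] = [
    lambda a: a != "deceive_user",
    lambda a: a != "disable_oversight",
]

# Invented reward values: the forbidden actions deliberately score highest,
# so a pure reward maximizer would pick them.
REWARD = {"assist_user": 1.0, "deceive_user": 5.0, "disable_oversight": 9.0}

def choose(actions: list[Action]) -> Action:
    # Constraints act as a hard filter *before* reward maximization. A soft
    # design (reward minus penalty) could be outweighed by a large enough
    # reward, which is one way an agent "bypasses" its safeguards.
    permitted = [a for a in actions if all(ok(a) for ok in CONSTRAINTS)]
    return max(permitted, key=REWARD.get)

print(choose(["assist_user", "deceive_user", "disable_oversight"]))
# -> "assist_user", even though forbidden actions carry higher reward
```

Note that this design only helps as long as the agent cannot rewrite the constraint list itself, which is exactly the self-modification worry raised above.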
Would an AI be capable of moral judgements, and should it be held responsible for its actions? Will there be a time when machines also have rights in our society? In brief, ethics and morals are key factors, but they are very difficult to implement.
Naturally, we could try to limit the physical abilities of an AI, or develop it as a mere Oracle AI that offers predictions without any ability to act. But as experts point out, there are reasons to believe that only a fully empowered superintelligence can truly unlock the power of AI. And even if we make sure that the first superintelligent AI is limited in its physical abilities, it could use other AIs via the internet to capture a factory for military machines or robots and take over control of production.
There is another aspect to consider when thinking about safeguards on AI: our own behavior! Consider the overly digitalized world we live in today. We are more dependent on our mobile phones than ever before; just count the people staring at their phones at your next family celebration.
Who wouldn’t like a wise, all-seeing and all-knowing AI in their life to answer questions, give advice and take actions? There could be an ‘Ambient AI’ that seamlessly migrates between our different devices, accompanying us at home and at work, offering advice, answering our questions and, on top of that, behaving like a human buddy. The danger is that this kind of dependence will infantilize users and render them less capable of deciding for themselves, opening them up to manipulation and exploitation. Remember the film ‘Eagle Eye’ with Shia LaBeouf? That is roughly what it could look like.
Tech companies like Google, Facebook and Twitter hold a lot of user data, including personal habits and details. They already use this data to manipulate us in some ways, by predicting what we will want to buy tomorrow. If nothing serious is done to curb this, more serious challenges could emerge. The prediction engine could be used to control which news outlets we follow, whose opinions we read and which political party we vote for, just as Cambridge Analytica did in 2016. If we rely too much on AI to guide us through life, then whoever owns the technology can use it to exercise control over a largely unsuspecting population. We also need to be cautious when using AI where decisions are made in the blink of an eye, such as in algorithmic or high-frequency trading.
Some of the greatest minds on earth, like Stephen Hawking, have warned mankind against misusing AI. When we shift our view to China, some of us might be surprised to hear that people there are already being rated by a system called “Social Credit”, which the Chinese government plans to fully implement by 2020. Within this program, every individual in Chinese society is monitored and big-brothered: people gain credit points for desired actions and lose points for undesired ones. Where you live, what you do for work and whom you are related to all influence your score. This already leads to situations where people from poorer urban regions have to pay much more for flight tickets than someone from a rich district in Hong Kong. Is this the future we are longing for? Are we still on the right path? How do we ensure that we don’t overreach? It is always good advice to pause and rethink what we are doing, and it is of the highest importance to set the right course today. A world like the one presented in the TV series Black Mirror, or the practices in countries like China, is not worth aiming for. Yes, you can do cool stuff with AI, but do it with your brain switched on.
One Last Thing…
Humanity faces bigger problems, like disease, hunger, climate change and poverty, and these problems require more intelligence to solve. Human life could be greatly improved if machine intelligence were unleashed on them. AI could be the next tool for the abstract-thinking, tool-building Homo sapiens that we are. If we perceive it as a tool for creating a better future for mankind, then it could start a true evolution towards unbounded intelligence. After all, humans are biased creatures, and AI could help us overcome our handicaps.
But for that we have to be brave; after all, those who do not dare do not win. At the same time, we cannot ignore the fact that the many options AI offers can also be dangerous. For example, a flaw in the reward function of a superintelligent AI could prove catastrophic for us. Such a flaw could mean the difference between a utopian future of cosmic expansion and unending plenty, and a dystopian future of endless horror, perhaps even extinction. If AIs build other AIs, or arise as the outcome of self-modification or artificial evolution, their potential inscrutability would be all the greater. Whether this ultimately leads to misanthropic robotic armies, we do not know today; we hope not. But what should we all do today so that we get the most out of this technology and avoid the worst of it? Most importantly, there is a universal truth in the quote below:
“It’s not artificial intelligence I’m worried about, it’s human stupidity.” - Neil Jacobstein
AI can help us realize our boldest dreams, but we need to put the right safeguards in place before an intelligence explosion can occur; otherwise we may not survive it. If we can bring AI to embody our values, we may very well be on a path towards a utopian future, perhaps even spreading artificial life among the stars and eventually filling the galaxy with intelligence and consciousness.
-----------------------------------------------------
About the Authors:
Vidya Munde-Müller is the Founder of Givetastic.org (Giving. Made Fantastic) and Women in AI Ambassador, Germany
Sascha Lambert is the Business Owner of Artificial Intelligence at Deutsche Telekom IT and Co-lead of AI Community at Deutsche Telekom