ARTIFICIAL INTELLIGENCE AGAINST NATURAL STUPIDITY?
Michael C. Rubin
Helping the Energy Industry with AI & Data Science | Data Scientist MIT | Advisor & Investor
How Will AI Change Our Lives by 2030 - from the MIT AI LATAM Summit
From the MIT campus, Cambridge, MA, 21.1.2020 / 15 min. read
After a turbulent decade full of political turmoil, social revolutions, economic peculiarities, technological breakthroughs and natural disasters, we have charged into a new decade. Busy as usual. I had the pleasure of starting my decade at the Massachusetts Institute of Technology, participating in the first Artificial Intelligence Summit for Latin America. Dozens of academics, technology leaders, entrepreneurs, scientists and thinkers laid out how Artificial Intelligence will change our society in the next 10 years - truly mind-boggling! Personally, I think that after the 2010s gave us a 'slap in the face' in many regards, the next decade will bring a lot of positive evolution and solutions, driven by Artificial Intelligence. Here are eight predictions of how AI will change our lives by 2030:
1. Democratization or Digital Dictatorship: The Race to the Bottom of our Brain Stem
We have stone-age instincts, medieval institutions and 21st-century techno-commercial strategies. This is how an MIT scientist describes the root cause of most of our current big challenges. The problems of the past decade, like polarization, post-truth politics, vanity, bursting economic bubbles or inequality, seem to be separate problems. But are they disconnected? Or is there one common root cause behind all of them?
The emergence of social media and limitless information availability has brought great progress. The Arab Spring, the Latin American anti-corruption movement, gender rights and the anti-establishment movements were all triggered by the democratization of information, and these are certainly good things to happen. Much like the invention of the printing press in 1440 made the public spread of knowledge possible and laid the foundation for the Enlightenment, social media make information generation democratic and will give rise to another big social transformation. We are just at the very beginning of this emergence and do not yet know what it will look like (the Enlightenment took 200 years to take off). What we have seen so far is that many autocratic systems, be it dictators in Arabia, corrupt governments in Brazil or monopolistic oil companies, were caught off guard by these social movements. However, they are smart and learned quickly to use the technology in their favour. It is no coincidence that the Russians, whose entire government (Putin) and economy (oil) are based on autocracy, were the ones manipulating people to bring Trump to power. In the same way, it is unsurprising that the Chinese Communist Party strives for leadership in AI technology - its existence depends on controlling it. In Europe and elsewhere, populists have recently been very successful with social media campaigns (Trump, Bolsonaro, Brexit and Salvini).
To understand what is happening and where we are going, we need to understand the role of social media itself. The business model of Facebook, Amazon, Google and Co is based on one single objective: accessing the bottom of our brain stem, where our stone-age instincts reside. Once they understand our deepest fears, our most profound emotions and desires, it is an easy game to control our actions. Manipulation has always existed. What is new, however, are two things. First, they can now access any kind of personal data in real time, from favorite songs to our food cravings to whom we date to our heartbeats. Analyzing these data gives them a very profound insight into our deepest feelings. The famous example of the retailer Target inferring that a teenage customer was pregnant before her father knew shows that it is not uncommon for them to know us better than we know ourselves. Second, they can do this at massive scale. Once the algorithms are developed, it is just a matter of applying them to 2 billion Facebook users, and you have the perfect manipulation recipe for entire populations. Of course, it is extremely attractive for any power-hungry person or business to have access to these recipes, be it an American presidential candidate, a commercial company, or an old, rich banker who wishes to date a young attractive lady! The famous Israeli historian Yuval Noah Harari calls this "the ability to hack humans". One no longer needs physical power over somebody in order to control them. A worrisome outlook for the future.
Even though I think this is certainly humanity's biggest technology challenge of the next decade, I am an optimist and want to give a positive outlook. At MIT, I was positively surprised by the thinking of the smart young people. Almost without exception, they intend to use their valuable knowledge for the good of humanity rather than for purely monetary interests. Mostly, people studying at these kinds of universities are or will be well off in their lives and have no need to maximize their income at any cost. Hence, they strive for higher goals, such as solving people's problems, climate change and other important challenges. Indeed, Maslow postulates that once you have covered your basic needs, you want to become the most you can be. I don't think this will motivate many people to 'hack humans for a dictator's purposes'. I think this mindset of young tech talents will ensure that AI technology brings more good than bad.
One concrete product I imagine we will be using by 2030 is the Google Interpreter. I'll be speaking in German, while my counterpart will understand it in Mandarin, fact-checked and culturally decoded. Interpreting the information will be the responsibility of each listener. Trump and Co can talk all they want; I'll just hear a long, continuous 'beeep'. Facebook and Co will proactively support democracy. In the end, this is the system in which they flourish, and they will certainly not saw off the branch they are sitting on. Here is more background on the topic:
https://www.youtube.com/watch?v=Z8guBsLhVvM
2. We Learn from Machines
There are now musical artists who use AI to help write their songs, because the algorithm's 'creativity' helps them explore new artistic territory! Think about that for a moment: algorithms are more creative than artists. Creativity and art have so far been considered among the last territories AI will conquer, as the slowly submerging landscape of human competence in Max Tegmark's Life 3.0 illustrates. In chess, Go, driving, accounting and vision, AI has already left humanity behind. Sectors like translation, administration, basic management, sales, decision-making and virtual assistance will follow soon. Even though I personally don't believe in AI supremacy anytime in the next 30 years, in well-defined tasks the algorithms are getting better than humans very fast. By 2030, few tasks will remain humans' exclusive domain. The important detail is that the algorithms' domains will still be confined by relatively narrow task definitions. Machines won't be able to take decisions across domains, and hence humans maintain their position. However, that does not mean we cannot learn from algorithms within defined task limits. Algorithms can certainly teach us how to take certain decisions, as they have much better information at hand. They will also protect us from fraud and other risks. In the healthcare industry, final decisions might still be taken by doctors (mainly for ethical reasons), but AI will teach MDs to detect cancer cells, discover the right medication and apply the right therapy. In management, boards of directors will certainly stay in power; however, algorithms (instead of controllers and marketing experts) will teach the committee about the best financial and marketing strategies. Finally, the pop industry will accelerate and bring up a battalion of standardized young talents who learn their stage performance and songs from machines. AI's talent instead of God-given talent. More on the subject:
https://www.youtube.com/watch?v=BtPL8VEC3rk
3. Artificial Cultural Ethics
A self-driving car with two Master's students on board is approaching a crossroads. All of a sudden, a pregnant woman and her family cross the street unexpectedly and there is no way to brake in time. The only way not to run them over is to steer the car into a wall. In a matter of milliseconds, the board computer calculates that this manoeuvre would cost the lives of the students. If the car avoided the wall, the entire family would be killed. What should the machine do? What if the pedestrian were a condemned criminal on parole? Or an 85-year-old man with depression and the desire to die?
Human beings often take such decisions based on instinct or pure reaction, or cannot take any decision at all. Accidents just happen. If we hand these decisions over to fast-processing machines, the machine does have the time to calculate the outcomes of different scenarios and decide between them. This can put its programmer in a difficult situation. Moral values, ethics and culture are important foundations for decision-making, and maybe we do not want machines to take every decision on purely numerical reasoning. So, what moral values should one code into the algorithms?
Another aspect of the same problem is the question of responsibility. If an intelligent, autonomous machine causes damage to somebody, who can be held responsible? Its owner, because they earn money with the machine? Its manager, because they are in charge of the operation? The machine itself, because it is an independent, intelligent unit? Or nobody, because it is the user's own responsibility if they interact with a machine?
A third related situation: security and police forces are starting to use more and more image recognition algorithms to distinguish criminals from others. Deep learning models analyse pictures from public cameras and rate the likelihood of somebody committing a crime in the near future. This would be enormous progress for public safety, as police forces could act preventively. However, what happens if the algorithm systematically rates Black people higher than white people? Are police forces allowed to use these racially biased machines in the name of public safety?
There is no question that this area of morals and ethics for machines is a huge, open field. The problem, though, is that such moral rule sets are fundamentally based on cultural values. Technology services, on the other hand, tend to be global. There is only one Facebook, one Uber, one Google. Will all these services adjust their algorithms to each country? Or will lawmakers impose a tight corset of rules on them? How do we know what our ethics and values are in the first place? My prediction is that the field of Artificial Cultural Ethics will become a large field of study, public debate and development in the next 10 years. Large countries with distinct cultures like Brazil, Japan, India, Germany or France will have profound public debates on this. Liberal countries like the USA and the UK might leave it to the market, and in China and the Arab world, the authorities will be only too happy to hardcode their ideas of ethics into the machines. By 2030, each platform service will have an API linked to a central authority server to query moral decisions in real time; a toy sketch of what such a query might look like follows below. For interested readers, see the MIT pioneer project Moral Machine.
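To make this concrete, here is a purely hypothetical sketch of what querying such a central ethics authority could look like. The service, the payload shape and the placeholder policy are all my own inventions for illustration; nothing like this exists today.

```python
# Hypothetical sketch of a real-time moral-decision query. The service
# and its policy are invented placeholders, not a real API.
import json

def query_moral_authority(dilemma: dict, jurisdiction: str) -> dict:
    """Stand-in for a POST to a (hypothetical) national ethics server."""
    # A real service would encode a culturally debated rule set; this toy
    # placeholder policy simply minimizes expected casualties.
    best = min(dilemma["options"], key=lambda o: o["expected_casualties"])
    return {"jurisdiction": jurisdiction, "decision": best["action"]}

dilemma = {
    "options": [
        {"action": "steer_into_wall", "expected_casualties": 2},
        {"action": "continue_straight", "expected_casualties": 4},
    ]
}
print(json.dumps(query_moral_authority(dilemma, "DE"), indent=2))
```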
4. The Last Days of Big Data
In the past decades, the battle for AI supremacy took place on the field of Big Data and supercomputing. The common conjecture was that the more data we have, and the more complex the transformations we can apply to them, the better they represent the real world and the closer the model comes to human intelligence. Indeed, the advancements were impressive, especially in the field of image and pattern recognition. China is systematically collecting images of all its citizens (billions of images) to train perfect surveillance algorithms. Recently, Google reached quantum supremacy, a step that could catapult computing power by orders of magnitude. However, such so-called 'brute force' algorithms are increasingly hitting the limits of their nature. Their reasoning happens in a narrow and well-defined field, and we have not been able to generalize it to a broader set of tasks. Labeling costs are moving out of the range of the feasible, and the quality of the data itself can also be an issue (garbage in, garbage out). So what is the point of spending millions on collecting, labeling and processing ever more data, just to push the accuracy of a prediction from 98% to 99%?
AI researchers at leading institutions, such as the Massachusetts Institute of Technology or Google DeepMind, have known for some time that the brute-force approach is not the one leading us to human-level artificial intelligence. Often unheard amid the commercial marketing noise around Big Data, they have developed a series of promising alternative approaches, which could be the next big leap. Here are some of them:
Microsoft Research has developed an approach called Machine Teaching. In machine learning, we typically feed the algorithm loads of labeled data, which just tell it whether its prediction is correct or not. Machine Teaching, on the other hand, envisions a human-like teaching approach. First, the algorithm is given some labeled examples. The resulting predictions may contain many correctly classified and some incorrectly classified results. As a second step, the teacher focuses on the wrongly predicted examples and decomposes them into segments, identifying and relabeling the individual segments on which the algorithm based its wrong classification. The structure of the data set to be classified plays an important role here. Finally, the decomposed segments are aggregated again and the complete classification model is re-run. Often, it reaches near-perfect accuracy with a fraction of the data you would need for classical supervised models. Here are more details from Microsoft Research, followed by a toy sketch of the teaching loop:
https://www.microsoft.com/en-us/research/video/machine-teaching-overview/
https://www.microsoft.com/en-us/research/video/machine-teaching-demo/
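As an illustration, here is a minimal sketch of the teaching loop (my own toy, not Microsoft's implementation; in real Machine Teaching the teacher relabels decomposed segments of the examples, which is domain-specific, so plain relabeling of whole examples stands in for that step here):

```python
# Minimal machine-teaching-style loop: train on a few labels, inspect the
# errors, have a "teacher" add targeted labels exactly there, retrain.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["good service", "terrible delay", "great product", "awful support",
         "fast shipping", "slow and broken", "love it", "never again"]
labels = [1, 0, 1, 0, 1, 0, 1, 0]  # 1 = positive, 0 = negative

# Step 1: train on a deliberately tiny initial label set.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts[:4], labels[:4])

# Step 2: find the examples the model still gets wrong.
preds = model.predict(texts)
errors = [i for i, (p, y) in enumerate(zip(preds, labels)) if p != y]
print("misclassified:", [texts[i] for i in errors])

# Step 3: the teacher labels exactly those examples, and the model is
# retrained on the focused set instead of on a huge random sample.
teach_idx = list(range(4)) + errors
model.fit([texts[i] for i in teach_idx], [labels[i] for i in teach_idx])
```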
The research stream responsible for the latest big breakthroughs is Multi-Agent Reinforcement Learning (MARL). Reinforcement Learning (RL) is a technique that allows an agent to take actions and dynamically interact with an environment so as to maximize its total reward. MARL focuses on models with multiple agents that learn by dynamically interacting with their environment and with each other, i.e. the environment is subject to the actions of all agents. Google DeepMind's AlphaGo Zero was set up this way. The only knowledge the agents were programmed with was the rules of the game of Go. Then two agents were set to play against each other, each with the goal of maximizing its winning points. This approach is also called 'self-play'. After millions of games, the agents had mutually trained each other to the point that the system surpassed the version of AlphaGo that had beaten the world champion, and came up with completely new game tactics, previously unknown and considered extremely smart and creative. Similarly, in 2019 OpenAI set up a simple environment with objects where agents had to play hide-and-seek against each other. To the astonishment of the researchers, after many rounds the agents learned to use simple objects as tools so as to hide and seek more successfully. What is remarkable is that these algorithms do not need any data to learn. The only information they get is the rules of how the environment works (i.e. the game rules). Here is a short video showing the experiment, and after it a toy self-play sketch:
https://www.youtube.com/watch?v=kopoLzvh5jY
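Here is a toy self-play sketch in the same spirit (my illustration, not DeepMind's or OpenAI's code): a single tabular learner plays both sides of tic-tac-toe, knowing nothing but the rules, and improves from the outcomes of its own games via Monte-Carlo-style value updates.

```python
import random
from collections import defaultdict

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if someone has won, 'draw' if full, else None."""
    for i, j, k in LINES:
        if board[i] != " " and board[i] == board[j] == board[k]:
            return board[i]
    return "draw" if " " not in board else None

Q = defaultdict(float)   # value of (state, move) for the player to move
ALPHA, EPSILON = 0.3, 0.2

def choose(board, moves):
    """Epsilon-greedy move selection from the shared value table."""
    if random.random() < EPSILON:
        return random.choice(moves)
    state = "".join(board)
    return max(moves, key=lambda m: Q[(state, m)])

for episode in range(20000):
    board, player, history = [" "] * 9, "X", []
    while True:
        moves = [i for i in range(9) if board[i] == " "]
        move = choose(board, moves)
        history.append(("".join(board), move, player))
        board[move] = player
        result = winner(board)
        if result is not None:
            # Credit every move of the game with the final outcome,
            # from the perspective of the player who made it.
            for state, m, p in history:
                reward = 0.0 if result == "draw" else (1.0 if result == p else -1.0)
                Q[(state, m)] += ALPHA * (reward - Q[(state, m)])
            break
        player = "O" if player == "X" else "X"
```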
Scientists see this approach as coming close to the evolution of life. Related to it is a field called Meta-Learning, which extends the approach into an evolution- and generation-based method. Meta-Learning distinguishes between intra-life learning and inter-life learning. In intra-life learning, many agents with different characteristics (number of nodes, regularization parameters, etc.) optimize themselves and achieve scores. After this, the superordinate inter-life algorithm simulates 'evolution', where the successful models become parent algorithms of a new model, which inherits some characteristics from each of its parents. The algorithms continually compete against each other so as to optimize their own and their children's performance. Again, no big data sets are needed for these algorithms. Here is a detailed presentation from MIT, followed by a minimal sketch of the evolutionary loop:
https://www.youtube.com/watch?v=9EN_HoEk3KY&t=2617s
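A minimal sketch of that intra-life/inter-life split (my own toy illustration with an invented fitness function, not the method from the talk):

```python
# Each "life" an agent improves a bit on its own (intra-life); then the
# best agents become parents of the next generation (inter-life), and
# children inherit traits from both parents plus a small mutation.
import random

def fitness(params):
    # Toy stand-in for an agent's score in its environment.
    return -sum((p - 0.7) ** 2 for p in params)

def intra_life(params, steps=10, lr=0.05):
    # Crude hill climbing stands in for within-lifetime learning.
    for _ in range(steps):
        trial = [p + random.gauss(0, lr) for p in params]
        if fitness(trial) > fitness(params):
            params = trial
    return params

population = [[random.random() for _ in range(4)] for _ in range(20)]
for generation in range(30):
    population = [intra_life(p) for p in population]
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        child = [random.choice(pair) for pair in zip(a, b)]  # inheritance
        child = [g + random.gauss(0, 0.02) for g in child]   # mutation
        children.append(child)
    population = parents + children
print("best fitness:", fitness(population[0]))
```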
Lately, MIT CSAIL has developed a new approach to computer vision and logical abstraction. They combined the old field of program synthesis with modern deep learning techniques. The challenge is to identify, from hand-drawn symbols, the underlying structure, such as symmetry and repetition. Once this logical structure is identified, a computer program can extend, manipulate and correct it. This is a major step in AI because, traditionally, computer vision has been able to identify items in images while the program 'had no clue' what they represented; it just found similarities between training examples and the images. This novel approach might be the foundation for an intelligent agent learning to reason logically about its environment. Here again, the main breakthrough did not come from processing more data, but from a novel logical approach. Here is the paper, and after it a toy illustration of the idea:
https://papers.nips.cc/paper/7845-learning-to-infer-graphics-programs-from-hand-drawn-images.pdf
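To give a feel for the idea (a deliberately tiny illustration of program synthesis, not CSAIL's actual system): from the raw positions of drawn shapes, we search for a small loop program that regenerates them, which makes the repetition explicit and lets us extend the drawing.

```python
# Infer a compact loop program (start, step, count) from drawn positions.
def run_program(start, step, count):
    return [start + i * step for i in range(count)]

def synthesize(positions):
    # Enumerate tiny loop programs and return one whose output matches.
    for start in range(0, 50):
        for step in range(1, 50):
            prog = (start, step, len(positions))
            if run_program(*prog) == positions:
                return prog
    return None

drawing = [5, 15, 25, 35]                 # x-positions of four drawn circles
prog = synthesize(drawing)
print("inferred program:", prog)          # (5, 10, 4): start=5, step=10, n=4
# Once the structure is explicit, "extending the drawing" is trivial:
print("extended:", run_program(prog[0], prog[1], prog[2] + 2))
```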
If we reflect on all these recent AI breakthroughs, it becomes apparent that the future does not lie in brute-force algorithms that process ever more data in black-box fashion. Rather, the careful selection of data that represent the environment, and the algorithm's ability to abstract logical structures, will become more and more important. Artificial intelligence is getting less dumb. My prediction is that by 2030, Big Data and its algorithms will be a niche only. The ability to select the RIGHT DATA and to combine human with machine intelligence will be much more important.
5. Nelsonian Networks: The Final Word on Inequality and Poverty?
We saw a decade of fast GDP growth, soaring profits and record stock prices. Good times? Not for all. In major economies such as the USA, China, Brazil, the UK and India, the middle class has not benefited from this growth, inequality continues to rise and poverty is still a major issue. Technology and digital business models play an important role in that. The logic of the platform economy is 'the winner takes it all'. The network effect and access to data can give a company monopoly-like market dominance, and tech companies and VC investors spend a lot of money to acquire that privileged position.
The role of ordinary people, users and workers is often degraded to that of pure consumers, exchangeable cheap labor or, most importantly, free data suppliers. This trend will accelerate in the next decade and will soon reach a level that is no longer sustainable. Not only will people rise up against the unjust distribution of wealth; purchasing power will also start to shrink as the middle class is thinned out. The companies are digging their own grave. A tech tax used to fund a universal basic income is widely debated and has gained popular support, among others from Bill Gates. However, we also know that state regulation and tax burdens are not good for innovation, and many economies might be reluctant to restrain their most innovative businesses.
There is one option which, in my opinion, has not yet gained the attention and discussion it deserves: the pricing of data. If we think about the business model of Facebook, Google and Co, it appears that they get their main input material, the data, completely for free. Once they convince you to use their services or platforms, all data you generate belong to them. You just accept their terms; more precisely, you have to, if you don't want to be excluded from 'your life' by their monopoly power. This principle is totally against the idea of capitalism. If Facebook sells a well-paid ad to a marketing firm, its artificial intelligence learned to target that ad through your data. Hence, your data is an input material for their final product and should have a price. Of course, Facebook must have a generous margin, because that is what they develop their brilliant algorithms for. The revenue you get for your data might consist of tiny nano-payments, but across the tens of thousands of services your data is used in, it can add up to a basic income. But how is it possible to track and distribute all that? The answer is Nelsonian Networks, named after hypertext pioneer Ted Nelson. These are networks with two-way links, where each node knows which other nodes link to it, thereby preserving context and creating the structure necessary for compensation (see the sketch below). With a two-way linking system, each person can remain the proprietary holder of any data that originates with them and sell this data directly to the people and companies using it.
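Here is an illustrative toy of the two-way linking idea (my own hypothetical sketch; Nelson's actual Xanadu design is far richer): every use of a datum keeps a link back to its origin, so provenance survives and nano-payments can be routed to the owner.

```python
# Two-way links: the data node records its uses (forward links), and each
# use records its source (backward link), enabling compensation.
from dataclasses import dataclass, field

@dataclass
class DataNode:
    owner: str
    payload: str
    uses: list = field(default_factory=list)   # forward links
    balance: float = 0.0

@dataclass
class Use:
    consumer: str
    source: "DataNode"                          # backward link to origin

def consume(node: DataNode, consumer: str, price: float = 1e-6):
    """Record a two-way link and credit a nano-payment to the owner."""
    use = Use(consumer=consumer, source=node)
    node.uses.append(use)
    node.balance += price
    return use

# A photo Alice posts remains hers; every model trained on it pays her.
photo = DataNode(owner="alice", payload="vacation_photo.jpg")
consume(photo, consumer="ad_targeting_model")
consume(photo, consumer="vision_research_dataset")
print(photo.balance, [u.consumer for u in photo.uses])
```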
My prediction is that by 2030, we will see the first Nelsonian Networks appearing and starting an ecosystem of fairer data property rights. Of course, this would need a central regulatory body. Countries like Switzerland, Sweden and Finland will probably pioneer this. I am also almost sure that these countries will provide their citizens with an unconditional universal basic income by 2030, be it paid via tech taxes or via Nelsonian Networks. Survival cannot be conditioned on economic performance in a world of limitless automation. The USA, lobbied by the tech giants, may resist this change and let its big divide keep growing; the question is only for how long. More on that here:
https://itp.nyu.edu/classes/ede-spring2014/who-owns-the-future/
6. Data Banking & Market
Data is the oil of the 21st century. Only that nobody cares about it. It is astonishing how superficially data is managed, given its importance.
First, there is no legal framework regulating the extraction of data. Everybody just extracts whatever they can, often crossing the thin lines of regulation that do exist. It is as if everybody wildly drilled holes into the ground to exploit oil, regardless of who owns the property. Certainly, that will change.
Second, data quality management is not yet a real concern. Companies just feed the algorithms whatever comes, the more the better, and hope they learn well. But we know that erroneous data can throw off the accuracy of entire data sets. It is like fuelling cars with an arbitrary mix of gasoline, water and other unknown substances and hoping to reach the destination before the car breaks down. To give a tongue-in-cheek example: it is actually fairly easy to manipulate my data and mis-profile myself on Google and Facebook. Just download a fake-GPS app, give it the rights to override the phone's hardware GPS, and set the pin to Bangalore, India. A few days later, you will get ads for travel packages to South America, in Hindi of course. Certainly, that will change too.
Third, what actually happens if you get a data set with financial data on 1 million consumers, train a simple Support Vector Machine classifier to predict credit default of future clients, and end up rejecting 90% of all Black applicants? Somebody sues you because your algorithm apparently learned racial features to classify by. Apparently, that is what was in the data set you got (and maybe even paid for). Will the court hold you responsible for the algorithm's behaviour? Can you charge it back to the vendor of the biased data set? Can you get insurance for such cases? Obviously, demand for a solution will emerge sooner or later. Below is a small sketch of how such bias can arise and be audited.
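A hedged sketch with synthetic data (my own illustration): we train the kind of credit-default classifier described above without the protected attribute as a feature, then audit approval rates per group; correlated features reproduce the bias anyway.

```python
# Train a default classifier on biased synthetic data, then audit it.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)                  # 0/1 protected attribute
income = rng.normal(50 + 10 * group, 15, n)    # bias baked into the data
X = np.column_stack([income, rng.normal(size=n)])
default = (income + rng.normal(0, 10, n) < 50).astype(int)

clf = SVC().fit(X, default)                    # 'group' is NOT a feature
approved = clf.predict(X) == 0

for g in (0, 1):
    rate = approved[group == g].mean()
    print(f"group {g}: approval rate {rate:.2f}")
# Even without the protected attribute as input, correlated features
# (here, income) can reproduce the bias: that is what the lawsuit is about.
```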
Fourth, one can even imagine a data crime business emerging; indeed, it already exists and is called the dark net. Large hacker groups steal data from databases, companies or individuals and sell it on hidden websites. If I want to train an algorithm but cannot get the data I need, the only thing keeping me from going to the dark net is moral values. Nobody would hold me responsible or could even trace where I got the data from; after training, I just delete it, and nobody could prove what the black-box model learned from. I am not held responsible for using black data, and data laundering is actually fairly easy. Of course, society will not accept this forever.
If we think about these cases, we see that there is actually one missing piece in the market: the intermediary. In today's economy, dominated by financial streams, this role is occupied by banks. My prediction is that by 2030, we will see the emergence of a Data Banking sector. These companies will do pretty much what banks do today. They will store our data safely and pay us some interest (just link this with the Nelsonian Networks above), and lend the data to companies for training their models at a higher interest rate. Of course, they will have very sophisticated and rigid quality control processes to make sure you get good-quality data. The Data Bank will also be responsible for ensuring that data sets are not biased in a social or racial way, and will sell you insurance for legal cases along with its data. Data sets will be certified and quality-checked, and you will not be able to sell your machine-learning-based services without a data certificate. Regulators will further oblige Data Banks to trace the origin of the data so as to prevent the emergence of black data. We will have learned from the past, and thanks to Nelsonian Networks this is fairly easy. Hence, whenever you have a machine learning project, instead of going 'out into the virtual woods' to hunt the data, you will just call your Data Broker. A purely hypothetical sketch of such a service follows.
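Nothing like this exists today; the interface below is entirely my own invention, sketched to make the predicted deposit/lend/certify mechanics tangible.

```python
# Hypothetical Data Bank: deposit data, certify it, lend it at a spread.
from dataclasses import dataclass

@dataclass
class Deposit:
    owner: str
    dataset: list
    certified: bool = False

class DataBank:
    def __init__(self, interest=0.01, lending_rate=0.05):
        self.vault, self.interest, self.lending_rate = {}, interest, lending_rate

    def deposit(self, owner, dataset):
        # Quality-control and bias checks would run here before certification.
        self.vault[owner] = Deposit(owner, dataset, certified=True)

    def lend(self, owner, borrower):
        """Lend certified data for model training; the owner earns interest."""
        d = self.vault[owner]
        assert d.certified, "uncertified data cannot be lent"
        fee = self.lending_rate * len(d.dataset)
        payout = self.interest * len(d.dataset)
        return d.dataset, {"borrower_fee": fee, "owner_interest": payout}

bank = DataBank()
bank.deposit("alice", [{"age": 34, "income": 52000}])
data, terms = bank.lend("alice", borrower="credit_model_inc")
print(terms)
```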
7. China or USA?
Hardly a week passes without a new Forbes or Financial Times article assessing the current AI arms race between China and the USA. No doubt, China has an ambitious growth plan in this technology. But will it also become the worldwide AI leader? And will it ultimately dominate the Western world with its superior algorithms? I think most of these articles do not go beyond mainstream media news and do not withstand serious scrutiny. Here is why:
One of the main arguments in support of China's superior strategy is that the country generates more data than anybody else, especially facial images, with its wide surveillance infrastructure covering 1.4 billion people. Therefore, so goes the reasoning, the Chinese algorithms will be able to learn faster and better, and by 2030 China will be the AI superpower. There are two fallacies in this argument.
First, in order to exploit such large data sets efficiently, you also need the computing power. At the moment, Google is clearly leading the race in this regard with its proprietary Tensor Processing Unit (TPU) technology. Google also reached quantum supremacy in 2019, i.e. its quantum computer solved a mathematical task in a matter of minutes that would have taken a conventional processor thousands of years. Quantum computers are considered the future of computing technology, as their state space doubles with each additional qubit, letting certain computations scale exponentially faster than on silicon processors.
The second fallacy is more important, though. While their big database might indeed give the Chinese an edge in facial recognition technology, this does not at all mean that they will automatically become the number one AI superpower. While image recognition and computer vision are certainly very important applications, they are still artificial narrow intelligence: the algorithm is limited to this one specified task. All of these so-called brute-force algorithms lead to narrow AI only. The big breakthroughs of recent years, like AlphaGo, MIT's program synthesis and OpenAI's hide-and-seek, were NOT big-data-driven, brute-force algorithms. These cutting-edge models come closer to a general intelligence, yet most did not need a single set of training data.
From this perspective, the Chinese data advantage is irrelevant. So far, I have not seen any big breakthrough coming from the Asian country. A possible explanation is the following: if you want to build super-human intelligence, you cannot rely on human intelligence alone; it simply limits itself. Capabilities like creativity, imagination and trial and error are much more important, as Albert Einstein already knew. Cooperation, an innovation culture and open thinking are fundamental drivers of such capabilities, and I do not think that Shanghai's big thinking factories will ever outcompete Silicon Valley or Massachusetts in these disciplines. I might be wrong, but my prediction is that the USA will still lead AI in 2030.
8. Future Working & Intelligent Social Programs
By the end of the next decade, the term 'work' will mean something entirely different from today. We will see a constant rise of the gig economy. Due to open education, we will see much bigger competition for top jobs. Many people will be freelancing, working for platforms and running their own businesses. For most, employment will be only a temporary station in their career, and becoming a successful gig worker will be a valid and common career path. I will not go into details about the future of work, as I wrote a separate essay about it some time ago. Here is the link:
https://www.dhirubhai.net/pulse/shift-up-your-gear-money-filter-fades-away-michael-c-rubin/
What should be added in the context of the new work is that, most probably, modern governments will launch intelligent social programs to eradicate social problems. It is very likely that countries like Finland, the Netherlands or Switzerland will have unconditional basic incomes for their citizens to compensate for the job destruction caused by technology and automation. Emerging countries like Brazil and Peru might use AI to better target their social programs and finally eradicate poverty and social injustice. If one thinks outside the box, we can even go a step further. Why did the friendly idea of socialism (equality for all) fail? Mostly because a planned economy is very inefficient at resource allocation, given the human inability to cope with the complexity of an economy. For an AI, this would be no problem. So it is quite possible that in some countries we will see a revival of the social state economy. This idea will obviously meet bitter resistance from economic groups, especially in the context of American capitalism. However, even the American elites will have to recognize at some point that the thinning out of the middle class will eventually lead to the collapse of THEIR market. My prediction is that by 2030, we will be in the midst of an intense global debate about the future of work and social organization, with no unanimous solution in place yet.
Ex nihilo nihil fit
Follow us:
www.drawdownlabs.com
Here are some more interesting sources on the next decade:
- https://www.weforum.org/agenda/2019/10/future-predictions-what-if-get-things-right-visions-for-2030/
- Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark
- Homo Deus by Yuval Noah Harari
- Utopia for Realists by Rutger Bregman