Predicting The Future of AI

A Google search for “artificial intelligence predictions” returns more than 30 million results. Not all of them are relevant, which is why, to serve you better, they are ranked and curated based on your tastes (by an ML algorithm, fittingly).

From finding the right job, the right home, or even a life partner, to ordering food, booking a cab, or planning a vacation, AI is now everywhere, even if we don’t always realize it.

Don’t be surprised if you see AI writing a script for the next blockbuster hit or performing stand-up.

On the question of whether machines can think, Alan Turing proposed a test as a reductio ad absurdum against those who claimed a machine could never think. This became known as the Turing Test: a machine passes if its behavior is indistinguishable from that of a human being. We now have chatbots and sentiment analysis tools that do demonstrate human-like behavior, but in the strictest sense they can’t think like humans. That is why people still argue over whether the Turing Test has been passed. Interestingly, Turing never proposed the test as a practical benchmark for AI.

But we humans like to debate. And because advances in AI can have profound effects on our society and general quality of life, there is no shortage of predictions from “experts”. Right now, though, I am more interested in the act of prediction itself: when have our predictions about AI come true in the past, and what can we learn from them? This might help us better understand what makes our predictions work (a bit of backpropagation, if you will).


Learning From Experience

The years 1956–1973 saw the first wave of AI hype. In fact, in 1956, the organizers of the Dartmouth Conference thought they could crack the challenge of artificial intelligence in a couple of months, predicting that machines could be made to use language, form abstract concepts, and even improve themselves. That obviously didn’t happen, but numerous predictions were made (and, arguably, a few advancements too).

In the early 1980s, people were predicting the rise of narrow-purpose “expert systems” rather than high-level machine intelligence, i.e., the rise of weak AI, not strong AI. Even that didn’t happen on the predicted timescale.

The 90s saw some good research, but the period’s track record for predictions coming true was not much better. We did, however, have more individual forecasters than in previous periods. Many of the correct individual forecasts made after 1990 concerned technological capabilities rather than solutions [more on that here]. That is, it is easier to predict that within 10 years we’ll be able to do X amount of M-type computation for $Y than to predict which particular computing architecture we’ll be using to do it.
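
To make the distinction concrete, here is a minimal sketch of a capability-style forecast in Python. Everything in it is a hypothetical assumption (the starting figure, the doubling time); the point is that the forecast extrapolates how much computation a dollar buys without committing to any particular architecture.

```python
# A minimal sketch of a capability-style forecast: extrapolate compute per
# dollar under an assumed constant doubling time. All numbers are hypothetical.

def projected_compute_per_dollar(today_flops_per_dollar: float,
                                 doubling_time_years: float,
                                 horizon_years: float) -> float:
    """Capability doubles every `doubling_time_years` years."""
    doublings = horizon_years / doubling_time_years
    return today_flops_per_dollar * 2 ** doublings

# Example: assume 1e10 FLOPS per dollar today and a 2.5-year doubling time.
print(f"{projected_compute_per_dollar(1e10, 2.5, 10):.2e} FLOPS per dollar in 10 years")
```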

Stuart Armstrong, a research fellow at the Future of Humanity Institute at the University of Oxford, analyzed 250 AI predictions collected by the institute. Interestingly, according to him, philosophers have been more accurate than sociologists or even computer scientists in predicting high-level AI trends.

"We know very little about the final form an AI would take, so if they [the experts] are grounded in a specific approach they are likely to go wrong, while those on a meta-level are very likely to be right,” says Armstrong.

It is possible to make better predictions than what is basically just “gut instinct”, he says, if you “decompose the problem by saying we need this feature or that feature and then give estimates for each step”.
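
Here is a minimal sketch of that decomposition idea in Python. The milestones and their probabilities are entirely hypothetical, and the steps are assumed to be independent; the point is simply that multiplying per-step estimates usually produces a far more sober number than a single gut-instinct guess.

```python
# A minimal sketch of decomposed forecasting: break a prediction into required
# steps, estimate each step's probability, and combine them. The milestones and
# probabilities below are hypothetical, and the steps are assumed independent.

from math import prod

milestones = {
    "robust perception": 0.8,
    "long-horizon planning": 0.5,
    "cheap enough hardware": 0.7,
    "regulatory approval": 0.6,
}

overall = prod(milestones.values())
print(f"P(all steps succeed) = {overall:.2f}")  # 0.17, well below any single step
```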

What’s Next?

To conclude, it is better to trust a prediction that deals with short-term trends and specific problem statements, or with a general industry trend that carries no bias towards specific architectures. For example, instead of betting on deep reinforcement learning as the key to strong AI in 20 years, paving the way to a $5Tn market (or the complete collapse of our value system), you are better off betting on self-driving cars using some type of reinforcement learning (or swarm AI) and capturing $20Bn of market share in the next 5 years.

Given that Gartner’s short-term predictions about the impact of AI and industry trends in 2017 and 2018 have not been far from reality, here are their predictions for the next five years:

[Image: Gartner’s AI predictions for the next five years]

