DeepMind's New AlphaGo Has Changed AI

DeepMind's new Go-playing technology could change everything

Artificial Intelligence (AI) is almost old hat now. It is not some far-off, Jetsons-esque future technology; it is here amongst us. It tells us what to buy, answers our questions on the other end of customer care lines, and even helps us control our homes. However, it has always had one major limitation in how it operates: it learns from humans.

Up until this point, AI has learnt to act like a human because it learns within the realm of human activity and human possibility. Take DeepMind's AlphaGo, which in May 2017 beat Ke Jie, the world's best player of the ancient Chinese board game Go, for the third time. Its skills were learnt exclusively from the actions of humans across millions of games, meaning the only moves AlphaGo used had previously been played by humans. That made it the best Go player in the world, but it still limited it.

However, in early October 2017 DeepMind revealed that it had created AlphaGo Zero, which taught itself completely; the only human input was to programme it with the basic rules of the game. Rather than studying what humans had previously done, it played against itself 4.9 million times over three days to learn the best ways of playing. David Silver, the lead researcher on the project, said: 'Humankind has accumulated Go knowledge from millions of games played over thousands of years... in the space of a few days... AlphaGo Zero was agile enough to rediscover much of this knowledge, as well as novel strategies that provide new insights in the oldest of games.'
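The self-play idea at the heart of AlphaGo Zero can be illustrated with a far humbler example. The sketch below is a toy, not DeepMind's actual method (which pairs deep neural networks with Monte Carlo tree search); all function names and parameters here are invented for illustration. A simple tabular learner is given only the rules of single-pile Nim (take 1-3 stones per turn; whoever takes the last stone wins) and, by playing thousands of games against itself, discovers the winning strategy on its own.

```python
import random
from collections import defaultdict

ACTIONS = (1, 2, 3)  # a player may take 1, 2 or 3 stones per turn

def legal_actions(pile):
    """Moves allowed when `pile` stones remain."""
    return [a for a in ACTIONS if a <= pile]

def train(episodes=20000, alpha=0.3, epsilon=0.1, start_pile=12, seed=0):
    """Learn Nim purely by self-play: no human games, only the rules."""
    rng = random.Random(seed)
    q = defaultdict(float)  # (pile, action) -> estimated value for the mover
    for _ in range(episodes):
        pile = start_pile
        history = []  # (state, action) per move, players alternating
        while pile > 0:
            acts = legal_actions(pile)
            if rng.random() < epsilon:          # occasionally explore
                a = rng.choice(acts)
            else:                               # otherwise play greedily
                a = max(acts, key=lambda x: q[(pile, x)])
            history.append((pile, a))
            pile -= a
        # The last mover took the final stone and wins (+1); walking the
        # game backwards, the outcome flips sign at every ply because the
        # players alternate.
        reward = 1.0
        for state, action in reversed(history):
            q[(state, action)] += alpha * (reward - q[(state, action)])
            reward = -reward
    return q

def best_move(q, pile):
    """The learner's preferred move with `pile` stones left."""
    return max(legal_actions(pile), key=lambda a: q[(pile, a)])
```

After training, the agent rediscovers the classic Nim strategy of always leaving its opponent a multiple of four stones (e.g. `best_move(q, 7)` returns 3), despite never having been shown a single human game, which is the same qualitative leap AlphaGo Zero made at vastly greater scale.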

Essentially, AlphaGo Zero learnt more in three days of playing the game than the entire human species has in the thousands of years it has played it. DeepMind then set the original AlphaGo, which relied exclusively on human knowledge, against the completely self-taught AlphaGo Zero in a 100-game match, and AlphaGo Zero won 100-0.

Although playing Go is, in itself, a relatively pointless exercise, the repercussions of this are potentially huge.

Go is essentially a game of patterns and strategies, and the same is true of almost everything we do today, whether that's constructing a building or diagnosing a disease. So although at present DeepMind have done nothing more than prove their system can play a game, the implications are much bigger.

For instance, this kind of technology has the potential to change the speed and accuracy of diagnosis forever. At present, diagnosis is predominantly done by a doctor looking at a limited number of symptoms and making an assessment based on what they, or others before them, have seen. However, there is every possibility that small changes which have never been attributed to a disease could enable better diagnoses, and even pre-emptive treatment. If an AI system could see that a certain blood sugar level on one day means a specific disease is likely to develop six months later, for instance, that could have massive ramifications for how the disease is spotted. AI has already been used in this space, but it relies on the human form of diagnosis. DeepMind, for example, used thousands of images of diseased eyes, along with their diagnoses, to teach its systems to quickly detect eye disease; however, it was only learning diagnoses that had previously been made by humans, which keeps it within human limitations.

At present these kinds of uses are only the tip of the theoretical iceberg, and rather than resting on their laurels, the team at DeepMind have already begun looking at the technology's potential future uses. DeepMind CEO Demis Hassabis pointed to one in particular when he said: 'The team are already working to apply this to scientific problems like protein-folding.'

However, this is not going to be as simple as just letting AI take the reins and solve these huge problems. One of the most complex things about AI is that it needs parameters set, which is easy within Go but considerably more difficult as you add more diverse elements. Climate science, for instance, seems to be an area where this technology could have a huge impact, given its ability to spot patterns that humans have missed. However, there are so many potential variables that setting parameters is incredibly difficult. There are the big holistic elements, such as the amount of greenhouse gas produced by cars, but there are also incredibly obscure factors, like the length of the roots of the average wheat plant, which affects how much CO2 it can absorb.

Factoring in potentially millions of variables is difficult, not only because it is a huge amount of work, but because the variables are themselves set by humans. So although the AI may well find new patterns and new ways of doing things within those parameters, it is entirely possible that the parameters our human brains set will limit it to what we can imagine, rather than what could actually be achieved.

It also opens AI up to the kind of ethical questions that people like Stephen Hawking and Elon Musk have been warning about, given how much more powerful this is than regular human thinking. We saw in July that Facebook needed to shut down an AI program after it created its own language, meaning people could not understand how the systems were communicating or what they were communicating about. If a system can learn thousands of years' worth of Go strategies in three days, it is entirely possible it could learn military strategies, or something equally destructive, just as quickly. For instance, if an AI system were asked for the optimal way to prevent global warming, the simplest answer would be to stop humans producing greenhouse gases: in other words, simply destroy all humans.

This is, of course, slightly hyperbolic thinking after an AI system essentially just got really good at a board game. However, the reality is that when we allow AI to think outside the box, we invite amazing solutions to the problems we currently face, but we also open up the possibility of creating many more.

This article was originally published on the Big Data Channel
