The Beauty of Move 37 (a short history of A.I. and a glimpse into the future)

At 35, Lee Sedol is a veteran of the Korean Go league. He has played the game professionally for most of his life and has racked up 18 international championship titles, the second most in history. This is no mean feat. Go is an ancient game and the most complex game humanity has invented - far more so than chess. To give you an idea, after two moves in chess there are around 400 possible next moves; in Go that number is closer to 130,000. And with every further move the number of possibilities multiplies again, quickly becoming astronomically large. Success at this level in Go requires not only intelligence and practice but also intuition, creativity and strategic thinking. So it was no surprise that when Lee Sedol was challenged by Google DeepMind's AlphaGo team he was fairly certain he would win. In a pre-match interview a month before the first game Lee remarked, "I have heard that Google DeepMind's AI is surprisingly strong and getting stronger, but I am confident that I can win at least this time." The experts at the time agreed with Lee; the consensus was that it would take another 10 years before AI could beat a human grandmaster. The result is now part of AI lore: AlphaGo's 4-1 win over the world champion was comprehensive and firmly established that AI had arrived. But this is not a story of how AI beat the human. This is the story of how AI made the human better.

The turning point in the five-game series came during game 2. When the AI played move 37 it stunned the professionals commentating on the game. The move was so surprising and so elegant that in his post-game interview Lee Sedol described it as "beautiful". Michael Redmond, a 9-dan professional Go player, called it "unique" and "creative". The move demonstrated just how far artificial intelligence had come, not only in terms of sheer computational power but also in the more mysterious talent of creativity. The machine played a move that had never been seen before in that context, a move that dumbfounded professional players and eventually led AlphaGo to win the second game.

But two games later Lee Sedol played a move that was every bit as sublime as move 37, a move so beautiful that some Go followers called it "God's Touch". Lee would later say that it is unlikely he would have played it had he not seen AlphaGo's move 37. And in that moment we were shown that while machines are now capable of flashes of genius and creativity, humans are just as capable of creating their own moments of sublime beauty. In fact, Lee Sedol's humanness was somehow expanded by move 37. When the machine showed him the beauty of that move it awakened something inside Lee - a creativity that spurred him to be more than he had been before.

To quote Wired magazine: "The symmetry of these two moves is more beautiful than anything else. One-in-ten-thousand and one-in-ten-thousand. This is what we should all take away from these astounding seven days. Hassabis and Silver [leads of the AlphaGo team] and their fellow researchers have built a machine capable of something super-human. But at the same time, it's flawed. It can't do everything we humans can do. In fact, it can't even come close. It can't carry on a conversation. It can't play charades. It can't pass an eighth-grade science test. It can't account for God's Touch."

So, does this mean that A.I. is ready to take over the world? Are the robots coming to take our jobs? The answer is yes and no. The fact that A.I. is powerful enough to beat a Go grandmaster is testament to how far machine learning has come, but it still has a long way to go before it reaches the level of human intelligence. The concept of artificial intelligence has been around for a long time. The philosopher Descartes, in his 1637 Discourse on the Method, wrote: "For we can easily understand a machine's being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on." But the idea really took off with the advent of computers, and artificial intelligence was established as an academic discipline in the mid-1950s.

Initially A.I. was built using a symbolic approach. A machine would be fed a series of symbols and each would be labelled. If you wanted to build a machine that could translate English into French, you would program into it every single English word and every single French word, as well as every English and French rule of grammar. When you typed in an English phrase the machine would look up the matching French words and apply the necessary grammatical rules to produce a French translation. While this sounds simple, in practice it had two fundamental problems. First, it required a massive amount of work to program in every single piece of data required. Second, it only works well for subjects that are highly rule-based (think mathematics or chess). When it comes to messier things like language - which has almost as many exceptions as rules and relies heavily on context and meaning - it performs far less well. This approach is sometimes referred to, pejoratively, as "Good Old Fashioned A.I." or GOFAI.
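To make the idea concrete, here is a minimal, purely illustrative sketch of a symbolic English-to-French translator. The tiny dictionary and the single grammar rule are invented for this example; a real GOFAI system would need many thousands of hand-written entries and rules, which is exactly the scaling problem described above.

```python
# A toy "Good Old Fashioned A.I." translator: every word and every
# grammar rule must be written in by hand. (Illustrative only - the
# dictionary and the one adjective rule below are made up for this sketch.)

DICTIONARY = {
    "the": "le", "cat": "chat", "black": "noir",
    "eats": "mange", "fish": "poisson",
}

def translate(english_sentence: str) -> str:
    words = english_sentence.lower().split()
    # Rule 1: word-for-word dictionary lookup; unknown words are flagged.
    french = [DICTIONARY.get(w, f"<{w}?>") for w in words]
    # Rule 2: in French, most adjectives follow the noun ("black cat" -> "chat noir").
    for i in range(len(words) - 1):
        if words[i] == "black" and words[i + 1] in ("cat", "fish"):
            french[i], french[i + 1] = french[i + 1], french[i]
    return " ".join(french)

print(translate("the black cat eats the fish"))  # -> "le chat noir mange le poisson"
print(translate("the cat eats the mouse"))       # -> "le chat mange le <mouse?>"
```

Even in this toy version, every exception - unknown words, gendered articles, idioms - needs yet another hand-written rule, which is why the approach struggled with natural language.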

In the late 2000s a small team of Google engineers and researchers began experimenting with a different approach to artificial intelligence, using statistical techniques that let computers learn bottom-up from data rather than top-down from rules. The group was called Google Brain, and it would eventually publish a paper that has become known as "The Cat Paper". In it, the team explained how they used artificial neural networks and deep learning to identify the faces of cats. Using a neural network with a billion connections and 10 million still images taken from YouTube videos (an apparently infinite supply of which feature cats), they got the computer to learn to recognise cat faces. Even more remarkable, they demonstrated unsupervised learning: the machine was fed the images without any labels and learned what a cat's face looks like on its own. The team would later apply this technology to Google Translate, making it arguably the best translation app on the planet and leading Google's CEO to proclaim that the company would take an "AI first" approach to everything: machine learning would in future power all of its products and services, from smartphones to personal assistants. (You can read the full story of Google Brain in the brilliantly written, seminal New York Times article "The Great A.I. Awakening".)
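The scale of the Cat Paper network is far beyond a blog snippet, but the core idea of unsupervised learning - a network given data with no labels that discovers structure on its own - can be sketched in a few lines. The toy autoencoder below (plain NumPy, invented dimensions, random data standing in for image patches) simply learns to compress and reconstruct its inputs; the compressed layer ends up capturing whatever patterns exist in the data, with no labels involved.

```python
import numpy as np

# A toy autoencoder: learns to squeeze 64-dimensional inputs down to 8
# numbers and reconstruct them, using no labels at all. The data here is
# random noise standing in for image patches; all sizes are illustrative.
rng = np.random.default_rng(0)
X = rng.random((500, 64))          # 500 unlabelled "images", 64 pixels each

n_in, n_hidden = 64, 8
W1 = rng.normal(0, 0.1, (n_in, n_hidden))   # encoder weights
W2 = rng.normal(0, 0.1, (n_hidden, n_in))   # decoder weights
lr = 0.01

for epoch in range(200):
    # Forward pass: encode, then decode.
    H = np.tanh(X @ W1)            # compressed representation (the learned "features")
    X_hat = H @ W2                 # reconstruction of the input
    err = X_hat - X                # reconstruction error is the only training signal

    # Backward pass: gradient descent on mean squared reconstruction error.
    dW2 = H.T @ err / len(X)
    dH = err @ W2.T * (1 - H ** 2)  # tanh derivative
    dW1 = X.T @ dH / len(X)
    W1 -= lr * dW1
    W2 -= lr * dW2

    if epoch % 50 == 0:
        print(f"epoch {epoch:3d}  reconstruction error {np.mean(err ** 2):.4f}")
```

In the Cat Paper the same principle was scaled up to a network with roughly a billion connections, and one of its learned features turned out to respond strongly to cat faces - without anyone ever labelling a single cat.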

Machine- and deep-learning-based A.I. is everywhere, and its reach grows every day. It is much more than a technology tool; it is fundamentally changing the way the world works and lives. These systems can learn, identify, cluster, predict and prescribe, and they are fast becoming part of every business. From financial services to healthcare, A.I. is being deployed to perform tasks more accurately, faster and more cheaply than humans ever could. In many cases these systems are also beginning to recommend courses of action and sometimes even make decisions. In a recent interview with the BBC, Timor Kadir, CTO of the startup Optellum, explained how its A.I. diagnostic system analyses clumps of lung cells in scans and can flag cancer earlier than doctors can. Kadir believes that healthcare costs could be lowered by as much as $13.5 billion if the system were adopted in the US and Europe. Financial services firms are already using A.I. systems to balance clients' investment portfolios and help them make investment decisions. Research firm Juniper Research recently estimated that fully automated robo-advisors will manage nearly $1 trillion in assets by 2022, growing at 154% per year.

The real power of A.I. will manifest when these systems are trusted to make decisions. Already, A.I.-based solutions help fashion stylists at San Francisco-based Stitch Fix curate customers' outfits, and assist claims adjustors at Ant Financial Insurance in China in making insurance payout decisions. As more decision making is handed to A.I. systems, their impact on human lives will grow. With this power will come great responsibility - but whose responsibility exactly? I would argue that it lies with the people charged with teaching these A.I. models. Machine learning requires that the machine be trained, and it is imperative that this training creates good A.I. In 2016 Microsoft released an A.I.-powered bot called Tay that would respond to tweets and chats, with the aim of researching conversational understanding. Twitter users quickly realised that Tay would learn from their messages and adapt its responses accordingly. It didn't take long for the worst of the internet to start teaching Tay racist phrases, which the A.I. then repeated in its own words. Microsoft shut it down after 16 hours. Technology is not innately good or bad, but it can be taught to reflect the worst of ourselves. Imagine a bank making A.I.-based loan decisions trained on historical data with a built-in racial bias, or a recruitment app making hiring decisions based on historical data with a built-in gender bias.
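To see how easily historical bias leaks into a learned model, consider the sketch below. The data, thresholds and group labels are entirely made up for illustration: incomes are drawn from the same distribution for both groups, but the historical approval decisions demanded a higher income from one group. A plain logistic regression trained on those decisions duly learns to penalise that group, even for applicants who are otherwise identical.

```python
import numpy as np

# A minimal sketch (synthetic data, invented numbers) of how a model
# trained on biased historical decisions simply reproduces that bias.
rng = np.random.default_rng(1)
n = 2000

income = rng.normal(60, 15, n)     # same income distribution for everyone
group = rng.integers(0, 2, n)      # 0 / 1: a protected attribute

# Historical decisions were biased: group 1 needed a much higher income
# than group 0 to be approved.
threshold = np.where(group == 0, 50, 70)
approved = (income > threshold).astype(float)

# Train a plain logistic regression on [income, group] by gradient descent.
X = np.column_stack([(income - 60) / 15, group])   # scale income, keep group as-is
w, b = np.zeros(2), 0.0
for _ in range(3000):
    p = 1 / (1 + np.exp(-(X @ w + b)))             # predicted approval probability
    w -= 1.0 * (X.T @ (p - approved) / n)
    b -= 1.0 * np.mean(p - approved)

# Two identical applicants (income 60), differing only in the protected attribute.
applicants = np.array([[0.0, 0], [0.0, 1]])
probs = 1 / (1 + np.exp(-(applicants @ w + b)))
print(f"approval probability, group 0: {probs[0]:.2f}")
print(f"approval probability, group 1: {probs[1]:.2f}")   # noticeably lower
```

Nothing in the algorithm is malicious; it is simply faithful to the decisions it was shown, which is exactly why the data used to teach these systems matters so much.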

As businesses and governments move from systems that are programmed to systems that learn, it will become imperative that we teach these systems in a way that is inherently positive while limiting unintended consequences. Doing so will require us to raise A.I. as good citizens. Raising good A.I. poses many of the same challenges as raising human children: teaching them right from wrong; providing a set of clear values; imparting knowledge without bias; and enabling self-reliance while fostering community and collaboration. This is not a trivial task, and it is one that businesses must take very seriously. Businesses that hesitate to treat their A.I. as something that must be "raised" to maturity will be left struggling to catch up with new regulations and public demands, or worse, will see strict regulatory controls placed on the entire A.I. industry because the group failed to take responsibility.

Learn how to raise good A.I. Citizens in Accenture’s Technology Vision 2018.
