The ten most important inventions in Artificial Intelligence.
Dr. Priyanka Singh Ph.D.
“Artificial Intelligence” is presently the hottest buzzword in tech. And with good reason - after decades of research and development, the last few years have seen a number of techniques that were once the preserve of science fiction slowly transform into science fact.
Already AI techniques are a deep part of our lives: AI determines our search results, translates our voices into meaningful instructions for computers, and can even help sort our cucumbers (more on that later). In the next few years, we'll use AI to drive our cars, answer our customer service inquiries, and do countless other things. But how did we get here? Where did this powerful new technology come from? Here are ten of the big milestones that led us to these exciting times.
Getting the 'Big Idea'
The concept of AI didn't suddenly appear - it is the subject of a deep, philosophical debate that still rages today: Can a machine truly think like a human? Can a machine be human? One of the first people to think about this was René Descartes, way back in 1637, in a book called Discourse on the Method. Amazingly, given that at the time even an Amstrad Emailer would have seemed impossibly futuristic, Descartes summed up some of the crucial questions and challenges technologists would have to overcome: "If there were machines which bore a resemblance to our bodies and imitated our actions as closely as possible for all practical purposes, we should still have two very certain means of recognizing that they were not real men." He goes on to explain that in his view, machines could never use words or "put together signs" to "declare our thoughts to others," and that even if we could conceive of such a machine, "it is not conceivable that such a machine should produce different arrangements of words to give an appropriately meaningful answer to whatever is said in its presence, as the dullest of men can do."
He then goes on to describe the big challenge facing AI today: creating a generalized AI rather than something narrowly focused - and how the limitations of current AI would expose that the machine is not a human:
"Even though some machines might do some things as well as we do them, or perhaps even better, they would inevitably fail in others, which would reveal that they are acting not from understanding, but only from the disposition of their organs." So now, thanks to Descartes, when it comes to AI, we have the challenge.
The Imitation Game
The second major philosophical benchmark came courtesy of computer science pioneer Alan Turing. In 1950 he first described what became known as the Turing Test - what he referred to as "the Imitation Game" - a test for measuring when we can finally declare that machines can be intelligent.
His test was simple: through a text-only interaction with both a human and a machine, can a judge tell which is which - or can the machine trick the judge into thinking that it is the human one?
Amusingly, Turing made a bold prediction about the future of computing at the time - he reckoned that his test would have been passed by the end of the 20th century. He said: "I believe that in about fifty years it will be possible to program computers, with a storage capacity of about [1GB], to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning. … I believe that at the end of the century, the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."
Sadly his prediction was a little premature: while we're starting to see some truly impressive AI now, back in 2000 the technology was much more primitive. At least he would have been impressed by hard disk capacity, which averaged around 10GB at the turn of the century.
The first Neural Network
“Neural network” is the fancy name scientists give to trial and error, the key idea underpinning modern AI. Essentially, when it comes to training an AI, the best way to do it is to have the system guess, receive feedback, and guess again - constantly shifting the probabilities that it will arrive at the correct answer.
What's quite remarkable, then, is that the first neural network was actually created way back in 1951. Called "SNARC" - the Stochastic Neural Analog Reinforcement Calculator - it was created by Marvin Minsky and Dean Edmonds, and was made not of microchips and transistors but of vacuum tubes, motors, and clutches.
The task for this machine? Helping a virtual rat solve a maze puzzle. The system would send instructions for navigating the maze, and each time the results of its actions would be fed back in - the vacuum tubes being used to store the outcomes. This meant the machine could learn, shifting the probabilities and giving it a greater chance of making it through the maze.
In essence, it's a very, very simple version of the same technique Google uses to identify objects in images today.
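To make that guess-and-reinforce loop concrete, here is a minimal Python sketch in the spirit of SNARC - not Minsky and Edmonds' actual design, and with a made-up four-cell maze standing in for their hardware: the virtual rat wanders, and the choices along any run that reaches the exit are given extra weight, making them more likely next time.

```python
# A minimal sketch (not the real SNARC) of reinforcement by trial and error:
# guesses that lead to the goal are rewarded, so they become more probable.
import random

# Hypothetical 4-cell maze: from each cell you can move to the listed cells.
MAZE = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": []}
START, GOAL = "A", "D"

# The machine's "memory": a weight per (cell, next_cell) choice, playing the
# role SNARC's vacuum tubes and clutches played in hardware.
weights = {(cell, nxt): 1.0 for cell, nexts in MAZE.items() for nxt in nexts}

def choose(cell):
    """Pick the next cell with probability proportional to its weight."""
    options = MAZE[cell]
    return random.choices(options, [weights[(cell, o)] for o in options])[0]

def run_episode(max_steps=10):
    """One attempt at the maze; returns the path taken and whether it reached the goal."""
    cell, path = START, []
    for _ in range(max_steps):
        if cell == GOAL:
            return path, True
        nxt = choose(cell)
        path.append((cell, nxt))
        cell = nxt
    return path, cell == GOAL

for episode in range(200):
    path, success = run_episode()
    if success:
        # Reinforce every choice along a successful run.
        for step in path:
            weights[step] += 0.5

print(weights)  # the choices leading toward D end up with the largest weights
```

After a couple of hundred runs, the weights along the successful route dominate - the software equivalent of SNARC's vacuum tubes storing the outcome of each attempt.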
The first self-driving car
When we think of self-driving cars, we think of something like Google’s Waymo project - but amazingly, way back in 1995, Mercedes-Benz managed to drive a modified S-Class mostly autonomously from Munich to Copenhagen.
According to autoevolution, the 1,043-mile trip was made by stuffing what was effectively a supercomputer into the boot - the car contained 60 transputer chips, which at the time were the state of the art in parallel computing, meaning it could process a lot of driving data quickly - a crucial part of making self-driving cars sufficiently responsive. The car reached speeds of up to 115mph and was actually fairly similar to the autonomous cars of today, as it could overtake and read road signs.
Switching to statistics
Though neural networks had existed as a concept for some time (see above!), it wasn't until the late eighties that there was a significant shift among AI researchers, from a "rules-based" approach to one based instead on data - or machine learning. The idea is that rather than trying to build systems that imitate intelligence by divining the rules by which humans operate, taking a trial-and-error approach and adjusting the probabilities based on feedback is a much better way to teach machines to think. This is a big deal, as this idea underpins the amazing things that AI can do today.
Gil Press at Forbes argues that this shift was heralded in 1988, when IBM's TJ Watson Research Center published a paper called "A statistical approach to language translation", which specifically discusses using machine learning to do precisely what Google Translate does today.
To train the system, IBM fed it 2.2 million pairs of sentences in French and English - all taken from transcripts of the Canadian Parliament, which publishes its records in both languages. That sounds like a lot, but it is nothing compared to Google having the entire web at its disposal - which explains why Google Translate is so eerily good today.
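As a rough illustration of the statistical idea, the toy Python sketch below learns crude word-translation probabilities purely from aligned sentence pairs - a handful of invented examples standing in for the Hansard transcripts, and a far simpler model than the one in IBM's 1988 paper:

```python
# Learn rough translation probabilities from parallel text alone - no grammar rules.
from collections import defaultdict

# Hypothetical aligned English-French sentence pairs (invented, not the real data).
pairs = [
    ("the house", "la maison"),
    ("the car", "la voiture"),
    ("a house", "une maison"),
    ("the blue house", "la maison bleue"),
]

# Count how often each English word co-occurs with each French word.
counts = defaultdict(lambda: defaultdict(float))
for en, fr in pairs:
    for e in en.split():
        for f in fr.split():
            counts[e][f] += 1.0

# Normalise the counts into rough translation probabilities P(f | e).
prob = {
    e: {f: c / sum(fs.values()) for f, c in fs.items()}
    for e, fs in counts.items()
}

print(prob["house"])    # "maison" comes out on top for "house"
```

The real IBM models went on to use expectation-maximisation to work out which word actually produced which, but even this naive counting shows the principle: no hand-written rules, just statistics drawn from parallel text.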
Deep Blue beats Garry Kasparov.
Despite the shift in focus to statistical models, rules-based models were still in use - and in 1997, IBM was responsible for perhaps the most famous chess match of all time, as its Deep Blue computer bested world chess champion, Garry Kasparov - demonstrating how powerful machines can be.
The bout was actually a rematch: in 1996, Kasparov had bested Deep Blue 4-2. It was only in 1997 that the machine got the upper hand, winning two of the six games outright and fighting Kasparov to a draw in three more.
To a certain extent, Deep Blue's genius was illusory - IBM itself reckons that its machine wasn't using Artificial Intelligence. Instead, Deep Blue relied on brute-force processing, evaluating millions of possible moves every second. IBM fed the system data on thousands of earlier games, and with each move on the board Deep Blue wasn't learning anything new; rather, it was looking up how previous grandmasters had reacted in similar situations. "He's playing the ghosts of grandmasters past," as IBM puts it.
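For a feel of that brute-force search, here is a minimal minimax sketch over a tiny, made-up game tree. Deep Blue's real search added alpha-beta pruning, custom chess chips, and its database of grandmaster games, none of which appear here - this only shows the core idea of exhaustively scoring every continuation and picking the move with the best worst case.

```python
# A hypothetical game tree: internal nodes map moves to subtrees,
# leaves are scores from the computer's point of view.
TREE = {
    "e4": {"e5": 3, "c5": 1},
    "d4": {"d5": 2, "Nf6": 4},
}

def minimax(node, maximizing):
    """Exhaustively evaluate a node: the computer maximises, the opponent minimises."""
    if isinstance(node, (int, float)):     # leaf: a scored position
        return node
    scores = [minimax(child, not maximizing) for child in node.values()]
    return max(scores) if maximizing else min(scores)

def best_move(tree):
    """Pick the move with the best guaranteed score."""
    return max(tree, key=lambda move: minimax(tree[move], maximizing=False))

print(best_move(TREE))   # "d4": its worst reply still scores 2, better than e4's 1
```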
Whether this counts as AI or not, what's clear is that it was a significant milestone, and one that drew a great deal of attention not just to the computational abilities of computers but to the field as a whole. Since the face-off with Kasparov, besting human players at games has become a popular way of benchmarking machine intelligence - as we saw again in 2011, when IBM's Watson machine handily trounced two of the game show Jeopardy!'s greatest players.
Machine Starts Talking - Siri
Natural language processing has long been a holy grail of artificial intelligence - and essential if we're ever going to have a world where humanoid robots exist, or where we can bark orders at our devices like in Star Trek.
And this is why Siri, which was built using the statistical methods described above, was so impressive. Created by SRI International - and even launched as a separate app on the iOS App Store - it was quickly acquired by Apple and deeply integrated into iOS. Today it is one of the highest-profile fruits of machine learning: along with similar products from Google (the Assistant), Microsoft (Cortana), and of course Amazon's Alexa, it has changed the way we interact with our devices in a way that would have seemed impossible just a few years earlier.
Today we take it for granted - but you only have to ask anyone who ever tried to use voice-to-text software before 2010 to appreciate just how far we've come.
The ImageNet Challenge
Like voice recognition, image recognition is another key task that AI is helping to conquer. In 2015, researchers concluded for the first time that machines - in this case, two competing systems from Google and Microsoft - were better than humans at identifying objects in pictures, across more than a thousand categories. These "deep learning" systems succeeded in the ImageNet Challenge - think something like the Turing Test, but for image recognition - and they will be essential if image recognition is ever going to scale beyond human abilities.
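As a very rough picture of what an ImageNet-style classifier computes, the Python sketch below scores a fake image against a thousand categories using a single random linear layer and a softmax. This is obviously not the deep convolutional networks Google and Microsoft used - the weights here are random rather than learned - but it shows the shape of the computation: pixels in, one probability per category out.

```python
import numpy as np

NUM_CATEGORIES = 1000                      # ImageNet's thousand object classes
rng = np.random.default_rng(0)
image = rng.random((224, 224, 3))          # a fake 224x224 RGB image

# Stand-in for a trained deep network: one random linear layer.
weights = rng.normal(scale=0.01, size=(image.size, NUM_CATEGORIES))

scores = image.reshape(-1) @ weights       # one raw score per category
probs = np.exp(scores - scores.max())      # softmax, shifted for numerical stability
probs /= probs.sum()

print("predicted category:", int(probs.argmax()), "with probability", float(probs.max()))
```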
GPUs make AI economical.
One of the big reasons AI is now such a big deal is because it is only over the last few years that the cost of crunching so much data has become affordable. According to Fortune, it was only in the late 2000s that researchers realized that graphical processing units (GPUs), which had been developed for 3D graphics and games, were 20-50 times better at deep learning computation than traditional CPUs. And once people realized this, the amount of available computing power vastly increased, enabling the cloud AI platforms that power countless AI applications today. So thanks, gamers. Your parents and spouses might not appreciate you spending so much time playing videogames - but AI researchers sure do.
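The reason GPUs help so much is that deep learning boils down largely to big matrix multiplications, which a GPU executes in parallel. A minimal sketch of the comparison, assuming PyTorch is installed and a CUDA GPU is available for the second measurement (exact speedups vary with hardware):

```python
# Time the same large matrix multiplication on the CPU and, if present, the GPU.
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

start = time.perf_counter()
torch.matmul(a, b)
print(f"CPU: {time.perf_counter() - start:.3f}s")

if torch.cuda.is_available():
    a_gpu, b_gpu = a.to("cuda"), b.to("cuda")
    torch.cuda.synchronize()               # wait for the copies to finish
    start = time.perf_counter()
    torch.matmul(a_gpu, b_gpu)
    torch.cuda.synchronize()               # wait for the multiply to finish
    print(f"GPU: {time.perf_counter() - start:.3f}s")
```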
AlphaGo and AlphaGoZero conquer all.
In March 2016, another AI milestone was reached as Google’s AlphaGo software beat Lee Sedol, a top-ranked player of the board game Go, echoing Garry Kasparov’s historic match.
What made it significant was not simply that Go is an even more mathematically complex game than chess, but that AlphaGo was trained using a combination of human and AI opponents. Google won four out of five of the matches, reportedly using 1,920 CPUs and 280 GPUs.
Perhaps even more significant is news from the following year, when a later version of the software, AlphaGo Zero, did away with prior data entirely. Instead of learning the game from records of past matches, as AlphaGo and Deep Blue had, it simply played millions of games against itself - and after three days of training, it was able to beat the version of AlphaGo that had beaten Lee Sedol by one hundred games to nil. Who needs to teach a machine to be smart when a machine can teach itself?
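As a toy illustration of the self-play idea, the Python sketch below learns the simple game of Nim - not Go, and nothing like AlphaGo Zero's actual neural networks and tree search - starting from no knowledge at all: it plays itself over and over and reinforces the moves made by whichever side won each game.

```python
import random
from collections import defaultdict

PILE = 10                                   # Nim: take 1-3 objects; whoever takes the last one wins

weights = defaultdict(lambda: 1.0)          # preference for taking `take` objects when `pile` remain

def choose(pile):
    """Pick a move with probability proportional to its learned weight."""
    moves = [t for t in (1, 2, 3) if t <= pile]
    return random.choices(moves, [weights[(pile, t)] for t in moves])[0]

def self_play():
    """Play one game against itself; return both players' moves and the winner."""
    pile, player, history = PILE, 0, {0: [], 1: []}
    while True:
        take = choose(pile)
        history[player].append((pile, take))
        pile -= take
        if pile == 0:
            return history, player          # this player took the last object and wins
        player = 1 - player

def train(games=20000):
    for _ in range(games):
        history, winner = self_play()
        for move in history[winner]:
            weights[move] += 0.1            # reinforce only the winning side's choices

train()
# After training, the preferred move with 6 objects left tends to be "take 2",
# which leaves the opponent a losing pile of 4.
print(max((1, 2, 3), key=lambda t: weights[(6, t)]))
```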